
How-To Tutorials


Building Voice Technology on IoT Projects

Packt
08 Nov 2016
10 min read
In this article by Agus Kurniawan, author of Smart Internet of Things Projects, we will explore how to make your IoT board speak. Various sound and speech modules will be explored along the way. We will cover the following topics:

- Introducing speech technology
- Introducing sound sensors and actuators
- Introducing pattern recognition for speech technology
- Reviewing speech and sound modules
- Building your own voice commands for IoT projects
- Making your IoT board speak
- Making Raspberry Pi speak

Introducing speech technology

Speech is the primary means of communication among people. Speech technology is built on speech recognition research: a machine such as a computer can understand what a human says and can even distinguish individual speech models, so that it can tell one speaker's voice from another's. Speech technology covers both speech-to-text and text-to-speech. Researchers have already defined speech models for several languages, for instance English, German, Chinese, and French. A general map of speech research topics can be seen in the following figure.

To convert speech to text, we need to understand speech recognition. Conversely, if we want to generate speech sounds from text, we need to learn about speech synthesis. This article does not cover speech recognition and speech synthesis with a heavy mathematical or statistical approach; I recommend reading a textbook dedicated to those topics. Here, we will learn how to work with sound and speech processing in an IoT platform environment.

Introducing sound sensors and actuators

Sound sources can be people, animals, cars, and so on. To process sound data, we need to capture the sound source and convert it from its physical form into a digital one, which is done with a sensor device. A simple sound sensor is a microphone, which can record any sound source. We will use a microphone module connected to an IoT board such as an Arduino or a Raspberry Pi. One option is the Electret Microphone Breakout, https://www.sparkfun.com/products/12758. This breakout module exposes three pins: AUD, GND, and VCC. You can see it in the following figure.

Furthermore, we can generate sound using an actuator. A simple sound actuator is a passive buzzer. This component can generate simple tones within a limited frequency range; you drive it by sending a signal to its pin through an analog output or PWM pin. Some manufacturers also provide breakout modules for buzzers, and a buzzer is shown in the following figure. A buzzer is usually a passive actuator. If you want an active sound actuator, you can use a speaker. This component is easy to find in local or online stores; SparkFun sells one at https://www.sparkfun.com/products/11089, which you can see in the following figure.

To get some hands-on experience with a sound sensor and actuator, we will build a demo that captures a sound source and measures its intensity. In this demo, I show how to detect a sound intensity level using a sound sensor, an Electret microphone. The sound source can be a voice, claps, door knocks, or any sound loud enough to be picked up by the sensor. The sensor output is an analog value, so the MCU must convert it through its analog-to-digital converter. The following is the list of peripherals for our demo:

- Arduino board
- 330 Ohm resistor
- Electret Microphone Breakout, https://www.sparkfun.com/products/12758
- 10 Segment LED Bar Graph - Red, https://www.sparkfun.com/products/9935 (you can use any color for the LED bar)

You can also use the Adafruit Electret Microphone Breakout attached to the Arduino board; you can review it at https://www.adafruit.com/product/1063. To build our demo, wire the components as follows:

- Connect the Electret Microphone AUD pin to Arduino pin A0
- Connect the Electret Microphone GND pin to Arduino GND
- Connect the Electret Microphone VCC pin to Arduino 3.3V
- Connect the 10 Segment LED Bar Graph pins to Arduino digital pins 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12, which are already connected to the 330 Ohm resistor

You can see the final wiring of our demo in the following figure.

The 10 segment LED bar graph module is used to represent the sound intensity level. In Arduino, we can use analogRead() to read the analog input from an external sensor. analogRead() returns a value between 0 and 1023. The full-scale voltage is 3.3V because we connected the Electret Microphone Breakout to 3.3V on VCC. From this, we can assign 3.3/10 = 0.33 V to each LED segment. The first segment of the LED bar is connected to Arduino digital pin 3.

Now we can implement our sketch program to read the sound intensity and then convert the measured value into the 10 segment LED bar graph. To obtain the sound intensity, we read the sound input from the analog input pin during a certain period, called the sample window time, for instance 250 ms. During that time, we track the maximum and minimum values of the analog input; the difference between them is used as the sound intensity value. Let's start implementing our program. Open the Arduino IDE and write the following sketch:

// Sample window width in mS (250 mS = 4Hz)
const int sampleWindow = 250;
unsigned int sound;
int led = 13;

void setup() {
  Serial.begin(9600);
  pinMode(led, OUTPUT);
  // LED bar graph segments on digital pins 3 to 12
  pinMode(3, OUTPUT); pinMode(4, OUTPUT); pinMode(5, OUTPUT); pinMode(6, OUTPUT); pinMode(7, OUTPUT);
  pinMode(8, OUTPUT); pinMode(9, OUTPUT); pinMode(10, OUTPUT); pinMode(11, OUTPUT); pinMode(12, OUTPUT);
}

void loop() {
  unsigned long start = millis();
  unsigned int peakToPeak = 0;
  unsigned int signalMax = 0;
  unsigned int signalMin = 1024;

  // collect data for 250 milliseconds
  while (millis() - start < sampleWindow) {
    sound = analogRead(0);
    if (sound < 1024) {
      if (sound > signalMax) {
        signalMax = sound;
      } else if (sound < signalMin) {
        signalMin = sound;
      }
    }
  }
  peakToPeak = signalMax - signalMin;
  double volts = (peakToPeak * 3.3) / 1024;  // convert the peak-to-peak reading to volts

  Serial.println(volts);
  display_bar_led(volts);
}

void display_bar_led(double volts) {
  display_bar_led_off();
  int index = round(volts / 0.33);  // 0.33 V per LED segment
  switch (index) {
    case 1: digitalWrite(3, HIGH); break;
    case 2: digitalWrite(3, HIGH); digitalWrite(4, HIGH); break;
    case 3: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); break;
    case 4: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); break;
    case 5: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); break;
    case 6: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); break;
    case 7: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); break;
    case 8: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); digitalWrite(10, HIGH); break;
    case 9: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); digitalWrite(10, HIGH); digitalWrite(11, HIGH); break;
    case 10: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); digitalWrite(10, HIGH); digitalWrite(11, HIGH); digitalWrite(12, HIGH); break;
  }
}

void display_bar_led_off() {
  digitalWrite(3, LOW); digitalWrite(4, LOW); digitalWrite(5, LOW); digitalWrite(6, LOW); digitalWrite(7, LOW);
  digitalWrite(8, LOW); digitalWrite(9, LOW); digitalWrite(10, LOW); digitalWrite(11, LOW); digitalWrite(12, LOW);
}

Save this sketch as ch05_01. Compile and deploy the program to the Arduino board. After deploying the program, open the Serial Plotter tool, which you can find in the Arduino menu under Tools | Serial Plotter. Set the baud rate to 9600 on the Serial Plotter. Try making some noise near the sound sensor and you will see the values change on the Serial Plotter graph. A sample of the Serial Plotter output can be seen in the following figure.

How does it work? The idea behind obtaining a sound intensity is simple: we measure the spread between the peaks of the sound signal. First, we define a sample window width, for instance 250 ms for 4 Hz:

// Sample window width in mS (250 mS = 4Hz)
const int sampleWindow = 250;
unsigned int sound;
int led = 13;

In the setup() function, we initialize the serial port and the pins of our 10 segment LED bar graph:

void setup() {
  Serial.begin(9600);
  pinMode(led, OUTPUT);
  pinMode(3, OUTPUT); pinMode(4, OUTPUT); pinMode(5, OUTPUT); pinMode(6, OUTPUT); pinMode(7, OUTPUT);
  pinMode(8, OUTPUT); pinMode(9, OUTPUT); pinMode(10, OUTPUT); pinMode(11, OUTPUT); pinMode(12, OUTPUT);
}

In the loop() function, we calculate the sound intensity over the sample window. After obtaining the peak-to-peak value, we convert it into a voltage:

unsigned long start = millis();
unsigned int peakToPeak = 0;
unsigned int signalMax = 0;
unsigned int signalMin = 1024;

// collect data for 250 milliseconds
while (millis() - start < sampleWindow) {
  sound = analogRead(0);
  if (sound < 1024) {
    if (sound > signalMax) {
      signalMax = sound;
    } else if (sound < signalMin) {
      signalMin = sound;
    }
  }
}
peakToPeak = signalMax - signalMin;
double volts = (peakToPeak * 3.3) / 1024;

Then we print the sound intensity, expressed in volts, to the serial port and show it on the 10 segment LED bar by calling display_bar_led():

Serial.println(volts);
display_bar_led(volts);

Inside the display_bar_led() function, we first turn off all the LEDs of the bar graph by calling display_bar_led_off(), which sends LOW to every LED using digitalWrite(). After that, we convert the voltage into an index that determines how many LEDs should be lit:

display_bar_led_off();
int index = round(volts / 0.33);

Introducing pattern recognition for speech technology

Pattern recognition is a topic in machine learning and serves as the baseline for speech recognition. In general, a speech recognition system can be constructed as shown in the following figure. Human speech must first be converted into digital form, that is, discrete data. Signal processing methods are then applied as a pre-processing step, for example to remove noise from the data. Pattern recognition then performs the actual speech recognition; researchers have used approaches such as Hidden Markov Models (HMM) to identify the sounds that correspond to words.
Performing feature extraction on digital speech data is part of the pattern recognition activity; its output is used as the input to the pattern recognition stage. The output of pattern recognition can then be applied to speech-to-text and speech commands in our IoT projects.

Reviewing speech and sound modules for IoT devices

In this section, we review various speech and sound modules that can be integrated with our MCU board. There are many modules related to speech and sound processing, and each has unique features that may fit your project. One of them is the EasyVR 3 & EasyVR Shield 3 from VeeaR. You can review this module at http://www.veear.eu/introducing-easyvr-3-easyvr-shield-3/. Several languages are already supported, such as English (US), Italian, German, French, Spanish, and Japanese. You can see the EasyVR 3 module in the following figure. The EasyVR 3 board is also available as a shield for Arduino: if you buy an EasyVR Shield 3, you get the EasyVR board together with its Arduino shield, shown in the following figure.

The second module is the Emic 2. It was designed by Parallax in conjunction with Grand Idea Studio, http://www.grandideastudio.com/, to make voice synthesis a total no-brainer. You send text to the module over a serial protocol and it generates human speech. This module is useful if you want to make your boards speak. For further information, or to buy the module, visit https://www.parallax.com/product/30016. The Emic 2 module is shown in the following figure.

Summary

We have learned some basic sound and voice processing and explored several sound and speech modules that can be integrated into an IoT project. We started by building a program that reads the sound intensity level.


Information Gathering and Vulnerability Assessment

Packt
08 Nov 2016
7 min read
In this article by Wolf Halton and Bo Weaver, the authors of the book Kali Linux 2: Windows Penetration Testing, we try to debunk the myth that all Windows systems are easy to exploit. This is not entirely true: almost any Windows system can be hardened to the point that it takes too long to exploit its vulnerabilities. In this article, you will learn the following:

- How to footprint your Windows network and discover the vulnerabilities before the bad guys do
- Ways to investigate and map your Windows network to find the Windows systems that are susceptible to exploits

In some cases, this will add to your knowledge of the top 10 security tools, and in others, we will show you entirely new tools to handle this category of investigation.

Footprinting the network

You can't find your way without a good map. In this article, we are going to learn how to gather network information and assess the vulnerabilities on the network. In the hacker world, this is called footprinting. It is the first step of any righteous hack, and it is where you will save yourself time and massive headaches. Without footprinting your targets, you are just shooting in the dark. The biggest tool in any good pen tester's toolbox is mindset. You have to have the mind of a sniper: you learn your target's habits and actions, you learn the traffic flows on the network where your target lives, you find the weaknesses in your target, and then you attack those weaknesses. Search and destroy!

In order to do good footprinting, you have to use several of the tools that come with Kali. Each tool has its strong points and looks at the target from a different angle; the more views you have of your target, the better your plan of attack. Footprinting will differ depending on whether your targets are external on the public network or internal on a LAN. We will be covering both aspects. Please read the paragraph above again, and remember that you do not have our permission to attack these machines. Don't do the crime if you can't do the time.

Exploring the network with Nmap

You can't talk about networking without talking about Nmap. Nmap is the Swiss Army knife for network administrators. It is not only a great footprinting tool, but also the best and cheapest network analysis tool any sysadmin can get. It's great for checking a single server to make sure the ports are operating properly. It can heartbeat and ping an entire network segment, and it can even discover machines when ICMP (ping) has been turned off. It can be used to pressure-test services: if the machine freezes under the load, it needs repairs.

Nmap was created in 1997 by Gordon Lyon, who goes by the handle Fyodor on the Internet. Fyodor still maintains Nmap, and it can be downloaded from http://insecure.org. You can also order his book Nmap Network Scanning on that website; it is a great book, well worth the price! Fyodor and the Nmap hackers have collected a great deal of information and security e-mail lists on their site. Since you have Kali Linux, you already have a full copy of Nmap installed!

Here is an example of Nmap running against a Kali Linux instance. Open the terminal from the icon on the top bar or by clicking on the menu link Application | Accessories | Terminal. You could also choose the Root Terminal if you want, but since you are already logged in as root, you will not see any difference in how the terminal emulator behaves.
Type nmap -A 10.0.0.4 at the command prompt (substituting the IP of the machine you are testing). The output shows the open ports among 1,000 commonly used ports. Kali Linux, by default, has no running network services, so in this run you will see a readout showing no open ports. To make it a little more interesting, start the built-in web server by typing /etc/init.d/apache2 start. With the web server started, run the Nmap command again:

nmap -A 10.0.0.4

As you can see, Nmap attempts to discover the operating system (OS) and to tell which version of the web server is running.

Here is an example of running Nmap from the Git Bash application, which lets you run Linux commands on your Windows desktop. This view shows a neat feature of Nmap: if you get bored or anxious and think the system is taking too much time to scan, you can hit the down arrow key and it will print out a status line telling you what percentage of the scan is complete. This is not the same as telling you how much time is left on the scan, but it does give you an idea of what has been done.

Zenmap

Nmap comes with a GUI frontend called Zenmap. Zenmap is a friendly graphical interface for the Nmap application. You will find Zenmap under Applications | Information Gathering | Zenmap. Like many Windows engineers, you may like Zenmap more than Nmap. Here we see a list of the most common scans in a drop-down box. One of the cool features of Zenmap is that when you set up a scan using the buttons, the application also writes out the command-line version of the command, which will help you learn the command-line flags used when running Nmap in command-line mode.

Hacker tip

Most hackers are very comfortable with the Linux Command Line Interface (CLI). You want to learn the Nmap commands on the command line because you can use Nmap inside automated Bash scripts and set up cron jobs to make routine scans much simpler (a sample cron entry is sketched at the end of this article). You can set a cron job to run the test in non-peak hours, when the network is quieter, and your tests will have less impact on the network's legitimate users.

The choice of intense scan produces a command line of nmap -T4 -A -v. This produces a fast scan. The T stands for timing (from 1 to 5), and the default timing is -T3. The faster the timing, the rougher the test, and the more likely you are to be detected if the network is running an Intrusion Detection System (IDS). The -A stands for all, so this single option gets you a deep port scan, including OS identification, and attempts to find the applications listening on the ports and the versions of those applications. Finally, the -v stands for verbose; -vv means very verbose.

Summary

In this article, we learned about penetration testing in a Windows environment. Contrary to popular belief, Windows is not riddled with wide-open security holes ready for attackers to find. We learned how to use Nmap to obtain detailed statistics about the network, making it an indispensable tool in our pen testing kit. Then, we looked at Zenmap, which is a GUI frontend for Nmap and makes it easy for us to view the network. Think of Nmap as flight control using audio transmissions and Zenmap as a big green radar screen: that's how much easier it makes our work.
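Following up on the hacker tip above, here is a minimal sketch of what a scheduled scan could look like. The network range, output path, and schedule are illustrative assumptions rather than values from the article, so adapt them to your own lab.

# /etc/cron.d/nightly-nmap : hypothetical cron entry that scans the lab network at 02:30
# -oA writes normal, XML, and grepable output files for later comparison
30 2 * * * root /usr/bin/nmap -T3 -A -oA /var/log/nmap/lab-$(date +\%F) 10.0.0.0/24

After a few nights of output, the ndiff utility that ships with Nmap can compare two of the XML files and highlight hosts or ports that have appeared since the previous run.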


Getting Things Done with Tasks

Packt
08 Nov 2016
18 min read
In this article by Martin Wood, the author of the book Mastering ServiceNow, Second Edition, we will see how data is stored, manipulated, processed, and displayed. With these tools, you can create almost any forms-based application. But building from the foundations up each time would be time consuming and repetitive. To help with this, the ServiceNow platform provides baseline functionality that allows you to concentrate on the parts that matter.

If Business Rules, tables, Client Scripts, and fields are the foundations of ServiceNow, then the Task table, approvals, and the Service Catalog are the readymade lintels, elevator shafts, and staircases: the essential, tried and tested components that make up the bulk of the building. This article looks at the standard components behind many applications:

- The Task table is probably the most frequently used and important table in a ServiceNow instance. The functionality it provides is explored in this article, and several gotchas are outlined.
- How do you control these tasks? Using Business Rules is one way, but Graphical Workflow provides a drag-and-drop option to control your application.
- While you can bake in rules, you often need personal judgment. Approval workflows let you decide whom to ask and let them respond easily.
- The Service Catalog application is the go-to place to work with the applications that are hosted in ServiceNow. It provides the main interface for end users to interact with your applications. We will also briefly explore request fulfillment, which enables users to respond to queries quickly and effectively.
- Service Level Management lets you monitor the effectiveness of your services by setting timers and controlling breaches.

Introducing tasks

ServiceNow is a forms-based workflow platform. The majority of applications running on ServiceNow can be reduced to a single, simple concept: the management of tasks. A task in ServiceNow is work that is assigned to someone. You may ask your colleague to make you a cup of tea, or you may need to fix a leaking tap in a hotel guest's bedroom. Both of these are tasks. There are several parts to each of these tasks:

- A requester: someone who specifies the work. This could be a guest or even yourself.
- A fulfiller: someone who completes the work. This may, less frequently, be the same person as the requester. The fulfiller is often part of a group of people, and perhaps anyone among them could work on the task.
- Information about the task itself: perhaps a description or a priority, indicating how important the task is.
- The status of the task: is it complete, or is the fulfiller still working on it?
- A place to store notes, to record what has happened.
- An identifier: a unique number to represent the task. The sys_id parameter is an identifier that is very specific and unique, but not very friendly!
- Links, references, and relationships to other records. Is this task a subdivision of another task, or is it connected to others? Perhaps you are moving house: that's a big job, but it could be broken down into separate individual tasks.

Sometimes, a task may be as simple as the equivalent of a Post-it note. Many of us have had something similar to "Frank called, could you ring him back?" attached to our desk. But you often need something that's more permanent, reportable, and automated, and that doesn't fall to the floor when someone walks by.
Looking at the Task table

The Task table in ServiceNow is designed to store, manage, and process tasks. It contains fields to capture all the details and a way to access them consistently and reliably. In addition, there is a whole host of functionality, described in this article, for automating and processing tasks more efficiently. The Product Documentation has an introductory article on the Task table: https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/task-table/concept/c_TaskTable.html. It also covers some of the lesser-used elements, such as the task interceptor.

To begin our journey, let's inspect the record representing the Task table. Navigate to System Definition > Tables, and then find the entry labelled Task. In the Helsinki version of ServiceNow, there are 65 fields in the Task table. There are also many other associated scripts, UI Actions, and linked tables. What do they all do? The Task table is designed to be extended, so you can of course add your own fields to capture the information you want. We'll do just that in a later example.

The important fields

It is often instructive to view the fields that have been placed on the form by default. Click on the Show Form related link to take a look.

- Number: A unique identifier; it's a seven-digit number prefixed by TASK. This is constructed using a script specified in the dictionary entry. The script uses details in the Numbers [sys_number] table, accessible via System Definition > Number Maintenance.
- Assigned to: This field represents the fulfiller, the person who is working on the task. It is a reference field that points to the User table. The Assigned to field is also dependent on the Assignment group field. This means that if the Assignment group field is populated, you can only select users that belong to that particular group.
- Assignment group: This field is not on the Task form by default; you would typically want to add it using the Form Designer. Groups and users are discussed further in the article, but in short, it shows the team of people responsible for the task. Assignment group has been made a tree picker field. The Group table has a parent field, which allows groups to be nested in a hierarchical structure. If you click on the Reference Lookup icon (the magnifying glass), it will present a different interface to the usual list. This is controlled via an attribute in the dictionary. The following screenshot shows how a hierarchical group structure would be displayed using the tree picker.
- Active: This field represents whether a task is "operational". Closed tickets are not active, nor are tasks that are due to start. Tasks that are being worked on are active. There is a direct correlation between the state of a task and whether it is active. If you change the choices available for the State field, you may be tempted to write Business Rules to control the Active flag. Don't. There is a script called TaskStateUtil that does just this. Try not to cause a fight between the Business Rules! Refer to the documentation for more information: https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/app-store/dev_portal/API_reference/TaskStateUtil/concept/c_TaskActiveStateMgmtBusRule.html
- Priority: This is a choice field designed to give the person working on the task some idea as to which task they should complete first. It has a default value of 4.
- State: This is probably the most complex field in the table, so much so that it has its own section later in this article! It provides more detail than the Active flag as to how the task is currently being processed.
- Parent: This is a reference field to another task record. A parent-to-child relationship is a one-to-many relationship. A child is generally taken to be a subdivision of a parent; a child task breaks down the parent task. A parent task may also represent a master task if there are several related tasks that need to be grouped together. Breadcrumbs can be added to a form to represent the parent relationship more visually. You can read more about this here: https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/form-administration/task/t_TaskParentBreadcrumbsFormatter.html
- Short description: Provide a quick summary of the task here. It is free text, but it should be kept short since it has an attribute in the dictionary that prevents it from being truncated in a list view. It is often used for reporting and e-mail notifications. Short description is a suggestion field: it will attempt to autocomplete when you begin typing (type in "issue" as an example). While you are free to add your own suggestions, it is not usually done. Check out the Product Documentation for more details: https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/field-administration/concept/c_SuggestionFields.html
- Description: Here, you provide a longer description of the task. It is a simple large text field.
- Work notes: This field is one of the most well-used fields on the form. It is a journal_input field that always presents an empty text box. When the form is saved, the contents of a journal field are saved in a separate table called sys_journal_field. The journal_output fields, such as Comments and Work notes, are used to display them. Work notes and Comments are heavily integrated into Connect: when you follow and subsequently chat about a task, the appropriate field is automatically updated. We'll cover more about this later.

Populating fields automatically

The majority of the fields in the Task table aren't used directly. Instead, many fields are auto-populated through logic such as Business Rules and default values. Others are optional, available to be filled in if appropriate. All the data is then available for reporting or to drive processes. Some of the more notable fields are explained here, but this list is not exhaustive!

- The Approval field is discussed in more detail later in the article. There are several automatic fields, such as Approval set, that represent when a decision was made.
- A Business Rule populates Duration when the task becomes inactive. It records how long the task took in "calendar" time. There is also a Business Duration field that performs the same calculation in working hours, but it uses calendars, which are deprecated. The more modern equivalent is Service Levels, discussed at the end of this article.
- When a task is first created, a Business Rule records the logged-in user who performed the action and populates Opened by. When the task is set to inactive, it populates the Closed by field. Opened at and Closed at are date/time fields that also get populated.
- Company and Location are reference fields that provide extra detail about who the task is for and where it is. Location is dependent upon Company: if you populate the Company field, it will only show locations for that company. Location is also a tree picker field, like Assignment group.
- Due date is a date/time field that represents when a task should be completed by.
- Time worked is a specialized duration field that records how long the form has been on the screen. If it is added to the form, a timer is shown. On saving, a Business Rule populates the Time Worked [task_time_worked] table with the current user, how long it took, and any added comments.

A formatter is an element that is added to a form (like a field), but it uses a custom interface to present information. The activity formatter uses the audit tables to present the changes made to fields on the task: who changed something, what they changed, and when.

Recording room maintenance tasks

At Gardiner Hotels, we have the highest standards of quality. Rooms must be clean and tidy, with all the light bulbs working and no dripping taps! Let's create a table that will contain jobs for our helpful staff to complete. The process for dealing with a maintenance issue at Gardiner Hotels is straightforward: the need gets recorded in ServiceNow and assigned to the right team, who then work on it until it is resolved. Sometimes, a more complex issue will require the assistance of Cornell Hotel Services, a service company that will come equipped with the necessary tools. But to ensure that this team isn't used unnecessarily, management needs to approve any of their assignments. Have a look at the following figure, which represents the process.

These requirements suggest the use of the Task table. In order to take advantage of its functionality, a new Maintenance table should be extended from it. Any time that a record is to be assigned to a person or a team, consider using the Task table as a base. There are other indicators too, such as the need for approvals. In general, you should always extend the Task table when supporting a new process. Table extension gives natural separation for different types of records, with the ability to create specific logic, yet with inheritance.

Now, let's create the Maintenance table by performing the following steps:

1. Navigate to System Definition > Tables and click New.
2. Fill out the form with the following details, and Save when done:
   Label: Maintenance
   Extends table: Task
   Auto-number: <ticked> (on the Controls tab)
3. Then, using the Columns related list, create a new field using this data, and Save when done:
   Column label: Room
   Type: Reference
   Reference: Room
4. Click on Design Form in Related Links, and do the following:
   Add the Assignment Group, Approval, and Room fields and the Activities (Filtered) formatter from the selection of fields on the left.
   Remove the Configuration Item, Parent, and Active fields.
   Click the Save button, then close the Form Design window once done.

We want scripts to control the Approval field, so let's make it read-only. You should be in the Maintenance table record. Find the Approval field in the Columns related list, and click on it. Once in the Dictionary record, you will notice the form is read-only and there is a message at the top of the screen saying you can't edit this record since it is in a different scope. We do not want to take ownership of this field: changing the settings of a field in the Task table will change it for all tables extended from Task, and this is often not what you want! A Dictionary Override, on the other hand, will only affect the selected table. Read more about this here: https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/data-dictionary-tables/concept/c_DictionaryOverrides.html

Instead, find the Dictionary Override tab, and click on New. Fill out the form with the following details, and Save:
   Table: Maintenance [x_hotel_maintenance]
   Override read only: <ticked>
   Read only: <ticked>

If you navigate to Hotel > Maintenances and click New, the Maintenance form should now look something like the following screenshot.

Working with tasks

You may be familiar with a work queue. It provides a list of items that you should complete, perhaps representing your work for the day. Often, this is achieved by assigning the item to a particular group. The members of that group then have access to the ticket and should do the necessary work to close it. In ServiceNow, this concept is represented in the Service Desk application menu. Open up the Service Desk application menu and you will find these options: My Work and My Groups Work. The former is a simple list view of the Task table with the following filters:

- Active is true. This filters out closed tasks.
- Assigned to is the current user, so if you are logged in as the user called System Administrator, you see tasks where the Assigned to field is set to System Administrator.
- State is not Pending, which filters out tasks that should not be worked on right now.

The My Groups Work entry is very similar, but it shows tasks that haven't been given to a fulfiller and are still something that your group should deal with. It does this by showing tasks where the Assigned to field is empty and the Assignment group field is one of your groups. This means that when the My Work list is empty, you probably should get more work from My Groups Work.

The My Work list shows all records that are derived from the Task table. This means you will see a mixture of records from many tables in this list. It is incredibly useful to have a "single pane of glass", where all your activities are in a single place, with consistent fields and data values. They can be manipulated easily and effectively: assign all your tickets to your colleague when you go on leave with a couple of clicks!

Working without a queue

Some platforms make the use of a work queue mandatory: the only way to look at a task is through your work queue. It is important to realize that ServiceNow does not have this restriction. The My Work list is a filtered list like any other. You do not have to be "assigned" the work before you can update or comment on it. There are many ways to find tasks to work on, and this usually involves creating filters on lists. This may include tasks that have been marked as high priority or those that have been open for more than two weeks. In many IT organizations, a central service desk team is the single point of contact. They have the responsibility of ensuring tasks are completed quickly and effectively, regardless of who they are currently assigned to. ServiceNow makes this easy by ensuring tasks can be accessed in a variety of ways and not just through a work queue.

Working socially

Social media concepts are infiltrating many aspects of the IT industry, and ServiceNow is not immune. Some useful ideas have been pulled into the platform in an attempt to make working on tasks a more collaborative experience, ensuring the right people are involved.

Chatting with Connect

Connect Chat focuses on bringing fulfillers together. The UI16 Connect sidebar is easy to activate and use, letting you swap text, pictures, videos, and links easily and efficiently. The real benefit that ServiceNow brings with Connect is the ability to create a record conversation, especially around tasks. This allows you to have a chat session that is attached to a particular record, allowing your conversation to be recorded and embedded. In Gardiner Hotels, the experienced staff probably already know how to deal with common maintenance tasks, so by giving a newer team member easy access to them, our guests get better service.

The Follow button is already available on every table extended from Task. What's special about Connect with the Task table is that the messages are added either as comments or work notes. While this is very useful for monitoring the progress of multiple tasks at the same time, record conversations are far less private: many users will have access to the activity log that shows the chat conversation. It probably isn't a good idea to share a meme in the work notes of a high-priority task. Additional comments and work notes are discussed later in the article.

Communicating some more

In addition to Connect Chat, ServiceNow provides several other options to share information. Connect Support allows requesters to ask for help via chat. Generally, the requester initiates the session through a self-service portal, by clicking on a button and entering a queue. A Service Desk agent can then work with multiple fulfillers in the Connect window. Older versions of ServiceNow had chat functionality, but it was limited in capability; it did not support in-browser notifications, for instance. Help Desk Chat was also limited by having to use a custom page rather than being integrated into the main interface. Both Chat and Connect use the same tables to store the messages, so they should not be used at the same time. More information is available in the Product Documentation: https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/use/using-social-it/concept/c_HelpDeskChat.html

Live Feed gives a Facebook-wall type of interaction, where all types of users can read and add messages. This can be used as a self-service system, since the messages are searchable and referenceable by copying a link. It is a communication mechanism that allows users to be as involved as they'd like. Unlike e-mail, Live Feed is pull-style communication, where users must go to the right place to receive information. To ensure it gets checked regularly, and is therefore most beneficial, the right culture must be cultivated in the company. Navigate to Collaborate > Live Feed to use it.

Table Notification creates Chat and Live Feed messages automatically, based on conditions and configurations. For example, during a service outage, the service desk may want an automatic communication to be sent out, alerting people proactively. Check out Collaborate > Feed Administration > Table Notifications.

Summary

In this article, we looked at the Task table, covering its important fields, the fields that are populated automatically, and recording room maintenance tasks. We also covered working with tasks, including working without a queue and working socially.


Customizing Kernel and Boot Sequence

Packt
08 Nov 2016
35 min read
In this article by Ivan Morgillo and Stefano Viola, the authors of the book Learning Embedded Android N Programming, you will learn about kernel customization and the boot sequence. You will learn how to retrieve the proper source code for Google devices, how to set up the build environment, how to build your first custom version of the Linux kernel, and how to deploy it to your device. You will learn about:

- Toolchain overview
- How to configure the host system to compile your own Linux kernel
- How to configure the Linux kernel
- Linux kernel overview
- The Android boot sequence
- The init process

An overview of the Linux kernel

We have learned how Android has been designed and built around the Linux kernel. One of the reasons for choosing the Linux kernel was its unquestioned flexibility and the infinite possibilities it offers to adjust it to any specific scenario and requirement. These are the features that have made Linux the most popular kernel in the embedded industry.

The Linux kernel comes with a GPL license. This particular license allowed Google to contribute to the project since the early stages of Android. Google provided bug fixes and new features, helping Linux to overcome a few obstacles and limitations of the 2.6 version. In the beginning, Linux 2.6.32 was the most popular version for most of the Android device market. Nowadays, we see more and more devices shipping with the newer 3.x versions. The following screenshot shows the current build for the official Google Motorola Nexus 6, with kernel 3.10.40.

The Android version we created previously was equipped with a binary version of the Linux kernel. Using an already compiled version of the kernel is the standard practice: as we have seen, AOSP provides exactly this kind of experience. As advanced users, we can take it a step further and build a custom kernel for our custom Android system. The Nexus family offers an easy entry into this world, as we can easily obtain the kernel source code we need to build a custom version. We can also equip our custom Android system with our custom Linux kernel, giving us a fully customized ROM, tailored to our specific needs.

In this book, we are using Nexus devices on purpose: Google is one of the few companies that formally makes the kernel source code available. Even though every company producing and selling Android devices is required by law to release the kernel source code, very few of them actually do it, despite the GPL license rules.

Obtaining the kernel

Google provides the kernel source code and a binary version for every version of Android, for every device of the Nexus family.
The following table shows where the binary version and the source code are located, ordered by device code name:

Device      Binary location                 Source location   Build configuration
shamu       device/moto/shamu-kernel        kernel/msm        shamu_defconfig
fugu        device/asus/fugu-kernel         kernel/x86_64     fugu_defconfig
volantis    device/htc/flounder-kernel      kernel/tegra      flounder_defconfig
hammerhead  device/lge/hammerhead-kernel    kernel/msm        hammerhead_defconfig
flo         device/asus/flo-kernel/kernel   kernel/msm        flo_defconfig
deb         device/asus/flo-kernel/kernel   kernel/msm        flo_defconfig
manta       device/samsung/manta/kernel     kernel/exynos     manta_defconfig
mako        device/lge/mako-kernel/kernel   kernel/msm        mako_defconfig
grouper     device/asus/grouper/kernel      kernel/tegra      tegra3_android_defconfig
tilapia     device/asus/grouper/kernel      kernel/tegra      tegra3_android_defconfig
maguro      device/samsung/tuna/kernel      kernel/omap       tuna_defconfig
toro        device/samsung/tuna/kernel      kernel/omap       tuna_defconfig
panda       device/ti/panda/kernel          kernel/omap       panda_defconfig
stingray    device/moto/wingray/kernel      kernel/tegra      stingray_defconfig
wingray     device/moto/wingray/kernel      kernel/tegra      stingray_defconfig
crespo      device/samsung/crespo/kernel    kernel/samsung    herring_defconfig
crespo4g    device/samsung/crespo/kernel    kernel/samsung    herring_defconfig

We are going to work with the Motorola Nexus 6, code name shamu. Both the kernel binary version and the kernel source code are stored in a git repository. All we need to do is compose the proper URL and clone the corresponding repository.

Retrieving the kernel's binary version

In this section, we are going to obtain the kernel as a binary, prebuilt file. All we need is the previous table, which shows every device model with its code name and its binary location, so that we can compose the download URL. We are targeting the Google Nexus 6, code name shamu, with binary location:

device/moto/shamu-kernel

So, to retrieve the binary version of the Motorola Nexus 6 kernel, we need the following command:

$ git clone https://android.googlesource.com/device/moto/shamu-kernel

The previous command will clone the repo and place it in the shamu-kernel folder. This folder contains a file named zImage-dtb: this file is the actual kernel image that can be integrated into our ROM and flashed onto our device. Having the kernel image, we can obtain the kernel version with the following command:

$ dd if=kernel bs=1 skip=$(LC_ALL=C grep -a -b -o $'\x1f\x8b\x08\x00\x00\x00\x00\x00' kernel | cut -d ':' -f 1) | zgrep -a 'Linux version'

The previous screenshot shows the command output: our kernel image version is 3.10.40 and it was compiled with GCC version 4.8 on the twenty-second of October at 22:49.

Obtaining the kernel source code

As for the binary version, the previous table is also critical for downloading the kernel source code. Targeting the Google Nexus 6, we create the download URL using the source location string for the device code name shamu:

kernel/msm.git

Once we have the exact URL, we can clone the git repository with the following command:

$ git clone https://android.googlesource.com/kernel/msm.git

Git will create an msm folder. The folder will be strangely empty: that's because it is tracking the master branch by default. To obtain the kernel for our Nexus 6, we need to switch to the proper branch. There are a lot of available branches, and we can check the list with the following command:

$ git branch -a

The list will show every single branch, each targeting a specific Android version for a specific Nexus device.
The following screenshot shows a subset of these repositories. Now that you have the branch name for your device and your Android version, you just need to check out the proper branch:

$ git checkout android-msm-shamu-3.10-lollipop-release

The following screenshot shows the expected command output.

Setting up the toolchain

The toolchain is the set of all the tools needed to compile a specific piece of software into a binary version that can actually be run. In our specific domain, the toolchain allows us to create a system image ready to be flashed to our Android device. The interesting part is that the toolchain allows us to create a system image for an architecture that is different from our current one: odds are that we are using an x86 system and we want to create a system image targeting an ARM (Advanced RISC Machine) device. Compiling software that targets an architecture different from that of our host system is called cross-compilation.

The Internet offers a couple of handy solutions for this task: we can use the standard toolchain, available with the AOSP (Android Open Source Project), or we can use a very popular alternative, the Linaro toolchain. Both toolchains will do the job, compiling every single C/C++ file for the ARM architecture. As usual, even the toolchain is available as a precompiled binary or as source code, ready to be compiled. For our journey, we are going to use the official toolchain provided by Google, but when you want to explore this world further, you could try out the binary version of the Linaro toolchain, downloadable from www.linaro.org/download. The Linaro toolchain is known to be the most optimized and best performing toolchain on the market, but our goal is not to compare toolchains or to stubbornly use the best or most popular one. Our goal is to create the smoothest possible experience, removing unnecessary variables from the whole equation of building a custom Android system.

Getting the toolchain

We are going to use the official toolchain provided by Google. We can obtain it with the Android source code or download it separately. With your trusted Android source code folder at hand, you can find the toolchain in the following folder:

AOSP/prebuilts/gcc/linux-x86/arm/arm-eabi-4.8/

This folder contains everything we need to build a custom kernel: the compiler, the linker, and a few more tools such as a debugger. If, for some unfortunate reason, you are missing the Android source code folder, you can download the toolchain using the following git command:

$ git clone https://android.googlesource.com/platform/prebuilts/gcc/linux-x86/arm/arm-eabi-4.8

Preparing the host system

To successfully compile our custom kernel, we need a properly configured host system. The requirements are similar to those we satisfied to build the whole Android system:

- Ubuntu
- The Linux kernel source code
- The toolchain
- Fastboot

Ubuntu needs a bit of love to accomplish this task: we need to install the ncurses-dev package:

$ sudo apt-get install ncurses-dev

Once we have all the required tools installed, we can start configuring the environment variables we need. These variables are used during the cross-compilation and can be set via the console. Fire up your trusted terminal and launch the following commands:

$ export PATH=<toolchain-path>/arm-eabi-4.8/bin:$PATH
$ export ARCH=arm
$ export SUBARCH=arm
$ export CROSS_COMPILE=arm-eabi-
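To avoid retyping these exports in every new shell, you could collect them in a small helper script and source it before each build. This is only a convenience sketch: the toolchain path shown is an assumption based on the AOSP layout described above, so adjust it to wherever your arm-eabi-4.8 toolchain actually lives.

# setup-env.sh : hypothetical helper; load it with: . setup-env.sh
# TOOLCHAIN is an assumed path; change it to match your system
TOOLCHAIN=$HOME/AOSP/prebuilts/gcc/linux-x86/arm/arm-eabi-4.8
export PATH=$TOOLCHAIN/bin:$PATH
export ARCH=arm
export SUBARCH=arm
export CROSS_COMPILE=arm-eabi-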
Configuring the kernel

Before being able to compile the kernel, we need to configure it properly. Every device in the Android repository has a specific branch, with a specific kernel and a specific configuration to be applied. The table shown earlier in this article has a column with the exact information we need, Build configuration. This is the parameter we use to properly configure the kernel build system. Let's configure everything for our Google Nexus 6. In your terminal, launch the following command:

$ make shamu_defconfig

This command will create a kernel configuration specific to your device. The following screenshot shows the command running and the final success message.

Once the .config file is in place, you could already build the kernel using the default configuration. As advanced users, we want more, and that's why we will take full control of the system by digging into the kernel configuration. Editing the configuration can enable missing features or disable unneeded hardware support, to create the perfect custom kernel for your needs. Luckily, to alter the kernel configuration, we don't need to edit the .config file manually. The Linux kernel provides a graphical tool that lets you navigate the whole configuration structure, get documentation about each configurable item, and prepare a custom configuration file with zero effort. To access the configuration menu, open your terminal, navigate to the kernel folder, and launch the following command:

$ make menuconfig

The following screenshot shows the official Linux kernel configuration tool: no frills, but very effective. In the upper half of the screen, you can see the version of the kernel we are going to customize and a quick guide to navigating the menu items: you navigate using the arrow keys, you enter a subsection with the Enter key, and you select or deselect an item using Y/N or the Spacebar to toggle. With great power comes great responsibility, so be careful when enabling and disabling features: check the documentation in menuconfig, check the Internet, and, most of all, be confident. A wrong configuration could cause a freeze during the boot sequence, which would force you to learn from it, create a different configuration, and try again.

As a real-world example, we are going to enable FTDI support. Future Technology Devices International, or FTDI, is a well-known semiconductor company, popular for its RS-232/TTL to USB devices. These devices come in very handy for communicating with embedded devices over a standard USB connection. To enable FTDI support, you need to navigate to the right menu by following this path:

Device Drivers | USB support | USB Serial Converter support

Once you reach this section, you need to enable the following item:

USB FTDI Single Port Serial Driver

The following screenshot shows the correctly selected item and gives you an idea of how many devices we could possibly support (this screen only shows the USB Serial Converter support). Once you have everything in place, just select Exit and save the configuration, as shown in the following screenshot. With the exact same approach, you can add every new feature you want.

One important note: we added the FTDI feature by merging it into the kernel image. The Linux kernel also gives you the opportunity to make a feature available as a module. A module is an external file, with a .ko extension, that can be injected and loaded into the kernel at runtime. Kernel modules are a great and handy feature when you are working on a pure Linux system, but they are quite impractical on Android.
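Whichever way a feature is enabled, a quick look at the generated .config file shows how the choice was recorded. The check below is a sketch: CONFIG_USB_SERIAL_FTDI_SIO is the usual symbol name for the FTDI single-port driver, but verify the exact name in your own kernel tree before relying on it.

# From the kernel source folder, after exiting menuconfig:
$ grep FTDI .config
# Expect a line such as CONFIG_USB_SERIAL_FTDI_SIO=y
# '=y' means the driver is built into the kernel image; '=m' would build it as a loadable module

# The kernel tree also ships a scripts/config helper that can toggle options without opening menuconfig:
$ scripts/config --enable USB_SERIAL_FTDI_SIO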
If you really wanted a modular kernel, you would have to code the whole module loading system yourself, adding unnecessary complexity to the system. Our choice of having the FTDI feature inside the kernel image penalizes the image from a size point of view, but relieves us from the manual management of the module itself. That's why the common strategy is to include every new feature we want right into the kernel core.

Compiling the kernel

Once you have a properly configured environment and a brand new configuration file, you just need one single command to start the building process. In your terminal emulator, in the kernel source folder, launch:

$ make

The make command will wrap up the necessary configuration and will launch the compiling and assembling process. The duration of the process heavily depends on the performance of your system: it could be one minute or one hour. As a reference, an i5 2.40 GHz CPU with 8 GB of RAM takes 5-10 minutes to complete a clean build. This is much quicker than compiling the whole AOSP image, due to the different complexity and size of the code base.

Working with non-Google devices

So far, we have worked with Google devices, enjoying the Google open-source mindset. As advanced users, we frequently deal with devices that are not from Google, or that are not even smartphones. As a real-world example, we are going to use a UDOO board again: a single-board computer that supports Ubuntu or Android. At the time of writing, the most popular version of UDOO is the UDOO Quad, and that's the version we are targeting.

As for every other device, the standard approach is to rely on the manufacturer's website to obtain the kernel source code and any useful documentation for the process: most of all, how to properly flash the new kernel to the system. When working with a custom kernel, the procedure is fairly standard: you need the source code, the toolchain, a few configuration steps and, maybe, some specific software packages installed on your host system. When it comes to flashing the kernel, every device can have a different procedure. This depends on how the system has been designed and which tools the manufacturing team provides. Google provides fastboot to flash our images to our devices; other manufacturers usually provide tools that are similar or that can do similar things with little effort.

The UDOO development team worked hard to make the UDOO board fully compatible with fastboot: instead of forcing you to adjust to their tools, they adjusted their device to work with the tools you already know. They tuned the board's bootloader so that you can now flash the boot.img using fastboot, as if you were flashing a standard Google Android device.

To obtain the kernel, we just need to clone a git repository. In your trusted terminal, launch the following command:

$ git clone http://github.com/UDOOBoard/Kernel_Unico kernel

Once we have the kernel, we need to install a couple of software packages on our Ubuntu system to be able to work with it. With the following command, everything will be installed and put in place:

$ sudo apt-get install build-essential ncurses-dev u-boot-tools

Time to pick a toolchain! UDOO gives you a few possibilities: you can use the same toolchain you used for the Nexus 6, or you can use the one provided by the UDOO team itself. If you decide to use the official UDOO toolchain, you can download it with a couple of terminal commands. Be sure you have already installed curl.
If not, just install it with the following command:
$ sudo apt-get install curl
Once you have curl, you can use the following command to download the toolchain:
$ curl http://download.udoo.org/files/crosscompiler/arm-fsl-linux-gnueabi.tar.gz | tar -xzf -
Now, you have everything in place to launch the build process:
$ cd kernel
$ make ARCH=arm UDOO_defconfig
The output of this command shows the configuration process running through its steps. When the default .config file is ready, you can launch the build process with the following command:
$ make -j4 CROSS_COMPILE=../arm-fsl-linux-gnueabi/bin/arm-fsl-linux-gnueabi- ARCH=arm uImage modules
When the build process is over, you can find the kernel image in the arch folder:
arch/arm/boot/uImage
As for the Nexus 6, we can customize the UDOO kernel using menuconfig. From the kernel source folder, launch the following command:
$ make ARCH=arm menuconfig
The following screenshot shows the UDOO kernel configuration menu. It's very similar to the Nexus 6 configuration menu. We have the same combination of keys to navigate, select and deselect features, and so on:
Working with UDOO, the same warnings we had with the Nexus 6 apply here too—be careful while removing components from the kernel. Some of them are just meant to be there to support specific hardware; some of them, instead, are vital for the system to boot. As always, feel free to experiment, but be careful about gambling!
This kind of development device makes debugging the kernel a bit easier compared to a smartphone. UDOO, as with a lot of other embedded development boards, provides a serial connection that enables you to monitor the whole boot sequence. This comes in handy if you are going to develop a driver for some hardware and you want to integrate it into your kernel, or even if you are simply playing around with some custom kernel configuration. Every kernel and boot-related message will be printed to the serial console, ready to be captured and analyzed. The next screenshot shows the boot sequence for our UDOO Quad board:
As you can see, there is plenty of debugging information, from the board power-on to the Android system prompt.
Driver management
Since version 2.6.x, Linux gives the developer the opportunity to compile parts of the kernel as separate modules that can be injected into the core to add more features at runtime. This approach gives flexibility and freedom: there is no need to reboot the system to enjoy new features and there is no need to rebuild the whole kernel if you only need to update a specific module. This approach is widely used in the PC world and by embedded devices such as routers, smart TVs, and even by our familiar UDOO board. Coding a new kernel module is no easy task and it's far from the purpose of this book: there are plenty of books on the topic and most of the skill set comes from experience. In these pages, you are going to learn about the big picture, the key points, and the possibilities. Unfortunately, Android doesn't use this modular approach: every required feature is built into a single binary kernel file, for practical and simplicity reasons. In the last few years there has been a trend to integrate into the kernel even the logic needed for Wi-Fi functionality, which was previously loaded from a separate module during the boot sequence.
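If you are curious about how your own device is configured in this respect, you can peek at it over adb. This is just a small sketch: /proc/config.gz is only exposed when the kernel was built with CONFIG_IKCONFIG_PROC, and the gzip tool may not be present in every device shell:
$ adb shell cat /proc/modules
$ adb shell 'gzip -dc /proc/config.gz | grep CONFIG_MODULES'
An empty /proc/modules means no loadable modules are in use, while the CONFIG_MODULES entry tells you whether the kernel was built with module support at all.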
As we saw with the FTDI example in the previous pages, the most practical way to add a new driver to our Android kernel is using menuconfig and building the feature as a core part of the kernel. Altering the CPU frequency Overclocking a CPU is one of the most loved topics among advanced users. The idea of getting the maximum amount of powerfrom your device is exciting. Forums and blogs are filled with discussions about overclocking and in this section we are going to have an overview and clarify a few tricky aspects that you could deal with on your journey. Every CPU is designed to work with a specific clock frequency or within a specific frequency range. Any modern CPU has the possibility to scale its clock frequency to maximize performance when needed and power consumption when performance is not needed, saving precious battery in case of our beloved mobile devices. Overclocking, then, denotes the possibility to alter this working clock frequency via software, increasing it to achieve performance higher than the one the CPU was designed for. Contrary to what we often read on unscrupulous forum threads or blogs, overclocking a CPU can be a very dangerous operation: we are forcing the CPU to work with a clock frequency that formally hasn't been tested. This could backfire on us with a device rebooting autonomously, for its own protection, or we could even damage the CPU, in the worst-case scenario. Another interesting aspect of managing the CPU clock frequency is the so-called underclock. Leveraging the CPU clock frequency scaling feature, we can design and implement scaling policies to maximize the efficiency, according to CPU load and other aspects. We could, for instance, reduce the frequency when the device is idle or in sleep mode and push the clock to the maximum when the device is under heavy load, to enjoy the maximum effectiveness in every scenario. Pushing the CPU management even further, lots of smartphone CPUs come with a multicore architecture: you can completely deactivate a core if the current scenario doesn't need it. The key concept of underclocking a CPU is adding a new frequency below the lowest frequency provided by the manufacturer. Via software, we would be able to force the device to this frequency and save battery. This process is not riskless. We could create scenarios in which the device has a CPU frequency so low that it will result in an unresponsive device or even a frozen device. As for overclocking, these are unexplored territories and only caution, experience and luck will get you to a satisfying result. An overview of the governors Linux kernel manages CPU scaling using specific policies called governors. There are a few pre-build governors in the Linux kernel, already available via menuconfig, but you can also add custom-made governors, for your specific needs. The following screenshot shows the menuconfig section of Google Nexus 6 for CPU scaling configuration: As you can see, there are six prebuild governors. Naming conventions are quite useful and make names self-explanatory: for instance, the performance governor aims to keep the CPU always at maximum frequency, to achieve the highest performance at every time, sacrificing battery life. The most popular governors on Android are definitely the ondemand and interactive governors: these are quite common in many Android-based device kernels. Our reference device, Google Nexus 6, uses interactive as the default governor. 
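If you want to see which governors your kernel actually ships and which one is currently in charge, the standard Linux cpufreq interface is exposed under sysfs. The following is a quick sketch over adb; the paths follow the common cpufreq layout and reading them may require root on a production device:
$ adb shell cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
$ adb shell cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
$ adb shell cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
The first file lists every governor compiled into the kernel, the second shows the one driving cpu0 right now, and the third reports the current clock frequency in kHz.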
As you would expect, Google disallows direct CPU frequency management, for security reasons. There is no quick way to select a specific frequency or a specific governor on Android. However, advanced users can satisfy their curiosity or their needs with a little effort. Customizing the boot image So far, you learned how to obtain the kernel source code, how to set up the system, how to configure the kernel, and how to create your first custom kernel image. The next step is about equipping your device with your new kernel. To achieve this, we are going to analyze the internal structure of the boot.img file used by every Android device. Creating the boot image A custom ROM comes with four .img files, necessary to create a working Android system. Two of them (system.img and data.img) are compressed images of a Linux compatible filesystem. The remaining two files (boot.img and recovery.img) don't contain a standard filesystem. Instead, they are custom image files, specific to Android. These images contain a 2KB header sector, the kernel core, compressed with gzip, a ramdisk, and an optional second stated loader. Android provides further info about the internal structure of the image file in the boot.img.h file contained in the mkbootimg package in the AOSP source folder. The following screenshot shows a snippet of the content of this file: As you can see, the image contains a graphical representation of the boot.img structure. This ASCII art comes with a deeper explanation of sizes and pages. To create a valid boot.img file, you need the kernel image you have just built and a ramdisk. A ramdisk is a tiny filesystem that is mounted into the system RAM during the boot time. A ramdisk provides a set of critically important files, needed for a successful boot sequence. For instance, it contains the init file that is in charge of launching all the services needed during the boot sequence. There are two main ways to generate a boot image: We could use the mkbootimg tool We could use the Android build system Using mkbootimg gives you a lot of freedom, but comes with a lot of complexity. You would need a serious amount of command-line arguments to properly configure the generating system and create a working image. On the other hand, the Android build system comes with the whole set of configuration parameters already set and ready to go, with zero effort for us to create a working image. Just to give you a rough idea of the complexity of mkbootimg, the following screenshot shows an overview of the required parameters: Playing with something so powerful is tempting, but, as you can see, the amount of possible wrong parameters passed to mkbootimg is large. As pragmatic developers, dealing with mkbootimg is not worth the risk at the moment. We want the job done, so we are going to use the Android build system to generate a valid boot image with no effort. All that you need to do is export a new environment variable, pointing to the kernel image you have created just a few pages ago. With your trusted terminal emulator, launch: $ export TARGET_PREBUILT_KERNEL=<kernel_src>/arch/arm/boot/zImage-dtb Once you have set and exported the TARGET_PREBUILT_KERNEL environment variable, you can launch: $ make bootimage A brand new, fully customized, boot image will be created by the Android build system and will be placed in the following folder: $ target/product/<device-name>/boot.img With just a couple of commands, we have a brand new boot.img file, ready to be flashed. 
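Just to demystify what the build system does on our behalf, the call it wraps looks roughly like the following. Treat this purely as a sketch: the kernel path, ramdisk name, --cmdline, and --base values are device-specific placeholders and must match your board to produce a bootable image:
$ mkbootimg --kernel zImage-dtb --ramdisk ramdisk.img --cmdline "console=ttyHSL0,115200,n8" --base 0x00000000 -o boot.img
Getting even one of these parameters wrong results in an image that will not boot, which is precisely why letting make bootimage fill them in for us is the safer route.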
Using the Android build system to generate the boot image is the preferred way for all the Nexus devices and for all those devices, such as the UDOO, that are designed to be as close as possible to an official Google device. For all those devices on the market that are compliant to this philosophy, things start to get tricky, but not impossible. Some manufactures take advantage of the Apache v2 license and don't provide the whole Android source code. You could find yourself in a scenario where you only have the kernel source code and you won't be able to leverage the Android build system to create your boot image or even understand how boot.img is actually structured. In these scenarios, one possible approach could be to pull the boot.img from a working device, extract the content, replace the default kernel with your custom version, and recreate boot.img using mkbootimg: easier said than done. Right now, we want to focus on the main scenario, dealing with a system that is not fighting us. Upgrading the new boot image Once you have your brand new, customized boot image, containing your customized kernel image, you only need to flash it to your device. We are working with Google devices or, at least, Google-compatible devices, so you will be able to use fastboot to flash your boot.img file to your device. To be able to flash the image to the device, you need to put the device in fastboot mode, also known as bootloader mode. Once your device is in fastboot mode, you can connect it via USB to your host computer. Fire up a terminal emulator and launch the command to upgrade the boot partition: $ sudo fastboot flash boot boot.img In a few seconds, fastboot will replace the content of the device boot partition with the content of your boot.img file. When the flashing process is successfully over, you can reboot your device with: $ sudo fastboot reboot The device will reboot using your new kernel and, thanks to the new USB TTL support that you added a few pages ago, you will be able to monitor the whole boot sequence with your terminal emulator. Android boot sequence To fully understand all Android internals, we are going to learn how the whole boot sequence works: from the power-on to the actual Android system boot. The Android boot sequence is similar to any other embedded system based on Linux: in a very abstract way, after the power-on, the system initializes the hardware, loads the kernel, and finally the Android framework. Any Linux-based system undergoes a similar process during its boot sequence: your Ubuntu computer or even your home DSL router. In the next sections, we are going to dive deeper in to these steps to fully comprehend the operating system we love so much. Internal ROM – bios When you press the power button on your device, the system loads a tiny amount of code, stored inside a ROM memory. You can think about this as an equivalent of the BIOS software you have in your PC. This software is in charge of setting up all the parameters for CPU clock and running the RAM memory check. After this, the system loads the bootloader into memory and launches it. An overview of bootloader So far, the bootloader has been loaded into the RAM memory and started. The bootloader is in charge of loading the system kernel into the RAM memory and launching it, to continue the boot sequence. The most popular bootloader software for Android devices is U-Boot, the Universal Bootloader. U-Boot is widely used in all kinds of embedded systems: DSL routers, smart TVs, infotainment systems, for example. 
U-boot is open source software and its flexibility to be customized for any device is definitely one of the reasons for its popularity. U-boot's main task is to read the kernel image from the boot partition, load it into the RAM memory, and run it. From this moment on, the kernel is in charge of finishing the boot sequence. You could think about U-boot on Android like GRUB on your Ubuntu system: it reads the kernel image, decompresses it, loads it into the RAM memory, and executes it. The following diagram gives you a graphical representation of the whole boot sequence as on an embedded Linux system, an Android system, and a Linux PC: The kernel After the bootloader loads the kernel, the kernel's first task is to initialize the hardware. With all the necessary hardware properly set up, the kernel mounts the ramdisk from boot.img and launches init. The Init process In a standard Linux system, the init process takes care of starting all the core services needed to boot the system. The final goal is to complete the boot sequence and start the graphical interface or the command line to make the system available to the user. This whole process is based on a specific sequence of system scripts, executed in a rigorous order to assure system integrity and proper configuration. Android follows the same philosophy, but it acts in a different way. In a standard Android system, the ramdisk, contained in the boot.img, provides the init script and all the scripts necessary for the boot. The Android init process consists of two main files: init.rc init.${ro.hardware}.rc The init.rc file is the first initialization script of the system. It takes care of initializing those aspects that are common to all Android systems. The second file is very hardware specific. As you can guess, ${ro.hardware} is a placeholder for the reference of a particular hardware where the boot sequence is happening. For instance, ${ro.hardware} is replaced with goldfinsh in the emulator boot configuration. In a standard Linux system, the init sequence executes a set of bash scripts. These bash scripts start a set of system services. Bash scripting is a common solution for a lot of Linux systems, because it is very standardized and quite popular. Android systems use a different language to deal with the initialization sequence: Android Init Language. The Android init language The Android team chose to not use Bash for Android init scripts, but to create its own language to perform configurations and services launches. The Android Init Language is based on five classes of statements: Actions Commands Services Options Imports Every statement is line-oriented and is based on specific tokens, separated by white spaces. Comment lines start with a # symbol. Actions An Action is a sequence of commands bound to a specific trigger that's used to execute the particular action at a specific moment. When the desired event happens, the Action is placed in an execution queue, ready to be performed. This snippet shows an example of an Action statement: on <trigger> [&& <trigger>]* <command> <command> <command> Actions have unique names. If a second Action is created with the same name in the same file, its set of commands is added to the first Action commands, set and executed as a single action. Services Services are programs that the init sequence will execute during the boot. These services can also be monitored and restarted if it's mandatory they stay up. 
The following snippet shows an example of a service statement: service <name> <pathname> [ <argument> ]* <option> <option> ... Services have unique names. If in the same file, a service with a nonunique name exists, only the first one is evaluated as valid; the second one is ignored and the developer is notified with an error message. Options Options statements are coupled with services. They are meant to influence how and when init manages a specific service. Android provides quite an amount of possible options statements: critical: This specifies a device-critical service. The service will be constantly monitored and if it dies more than four times in four minutes, the device will be rebooted in Recovery Mode. disabled: This service will be in a default stopped state. init won't launch it. A disabled service can only be launched manually, specifying it by name. setenv <name> <value>: This sets an environment variable using name and value. socket <name> <type> <perm> [ <user> [ <group> [ <seclabel> ] ] ]: This command creates a Unix socket, with a specified name, (/dev/socket/<name>) and provides its file descriptor the specified service. <type> specifies the type of socket: dgram, stream, or seqpacket. Default <user> and <group> are 0. <seclabel> specifies the SELinx security context for the created socket. user <username>: This changes the username before the service is executed. The default username is root. group <groupname> [ <groupname> ]*: This changes the group name before the service is executed. seclabel <seclabel>: This changes the SELinux level before launching the service. oneshot: This disables the service monitoring and the service won't be restarted when it terminates. class <name>: This specifies a service class. Classes of services can be launched or stopped at the same time. A service with an unspecified class value will be associated to the default class. onrestart: This executes a command when the service is restarted. writepid <file...>: When a services forks, this option will write the process ID (PID) in a specified file. Triggers Triggers specify a condition that has to be satisfied to execute a particular action. They can be event triggersor property triggers. Event triggers can be fired by the trigger command or by the QueueEventTrigger() function. The example event triggers are boot and late-init. Property triggers can be fired when an observed property changes value. Every Action can have multiple Property triggers, but only one Event trigger; refer to the following code for instance: on boot && property_a=b This Action will be executed when the boot event is triggered and the property a is equal to b. Commands The Command statement specifies a command that can be executed during the boot sequence, placing it in the init.rc file. Most of these commands are common Linux system commands. The list is quite extensive. Let's look at them in detail: bootchart_init: This starts bootchart if it is properly configured. Bootchart is a performance monitor and can provide insights about the boot performance of a device. chmod <octal-mode-permissions> <filename>: This changes file permissions. chown <owner> <group> <filename>: This changes the owner and the group for the specified file. class_start <serviceclass>: This starts a service specified by its class name. class_stop <serviceclass>: This stops and disables a service specified by its class name. class_reset <serviceclass>: This stops a service specified by its class name. It doesn't disable the service. 
copy <src> <dst>: This copies a source file to a new destination file. domainname <name>: This sets the domain name. enable <servicename>: This starts a service by its name. If the service is already queued to be started, then it starts the service immediately. exec [<seclabel>[<user>[<group> ]* ]] -- <command> [ <argument> ]*: This forks and executes the specified command. The execution is blocking: no other command can be executed in the meantime. export <name> <value>: This sets and exports an environment variable. hostname <name>: This sets the hostname. ifup <interface>: This enables the specified network interface. insmod <path>: This loads the specified kernel module. load_all_props: This loads all the system properties. load_persist_props: This loads the persistent properties, after the successful decryption of the /data partition. loglevel <level>: This sets the kernel log level. mkdir <path> [mode] [owner] [group]: This creates a folder with the specified name, permissions, owner, and group. The defaults are 755 as permissions, and root as owner and group. mount_all <fstab>: This mounts all the partitions in the fstab file. mount <type> <device> <dir> [ <flag> ]* [<options>]: This mounts a specific device in a specific folder. A few mount flags are available: rw, ro, remount, noatime, and all the common Linux mount flags. powerctl: This is used to react to changes of the sys.powerctl system parameter, critically important for the implementation of the reboot routing. restart <service>: This restarts the specified service. rm <filename>: This deletes the specified file. rmdir <foldername>: This deletes the specified folder. setpropr <name> <value>: This sets the system property with the specified name with the specified value. start <service>: This starts a service. stop <service>: This stops a service. swapon_all <fstab>: This enables the swap partitions specified in the fstab file. symlink <target> <path>: This creates a symbolic link from the target file to the destination path. sysclktz <mins_west_of_gtm>: This sets the system clock. trigger <event>: This programmatically triggers the specified event. wait <filename > [ <timeout> ]: This monitors a path for a file to appear. A timeout can be specified. If not, the default timeout value is 5 seconds. write <filename> <content>: This writes the specified content to the specified file. If the file doesn't exist, it creates the file. If the file already exists, it won't append the content, but it will override the whole file. Imports Imports specify all the external files that are needed in the current file and imports them: import <path> The previous snippet is an example of how the current init script can be extended, importing an external init script. path can be a single file or even a folder. In case path is a folder, all the files that exists in the first level of the specified folder will be imported. The command doesn't act recursively on folders: nested folders must be imported programmatically one by one. Summary In this article, you learned how to obtain the Linux kernel for your device, how to set up your host PC to properly build your custom kernel, how to add new features to the kernel, build it, package it, and flash it to your device. You learned how the Android boot sequence works and how to manipulate the init scripts to customize the boot sequence. Resources for Article: Further resources on this subject: Virtualization [article] Building Android (Must know) [article] Network Based Ubuntu Installations [article]


Managing Your OneOps

Packt
08 Nov 2016
14 min read
In this article by Nilesh Nimkar, the author of the book Practical OneOps, we will look at a few steps that will help you manage both kinds of installations. (For more resources related to this topic, see here.) Upgrading OneOps with minimal downtime As mentioned preceding you might be running a stand-alone instance of OneOps or an enterprise instance. For both types, you will have to use different strategies to update the OneOps code. In general, it is easier and straightforward to update a standalone instance rather than a enterprise instance. Your strategies to update and the branch or tag of code that you will use will also differ based on kind of system that you have. Updating a standalone OneOps install If you have a standalone installation, it's possible you created it in one of the several ways. You either installed it using Vagrant or you used the Amazon Machine Images (AMI). It is also possible that you built your own installation on another cloud like Google, Azure or Rackspace. Irrespective of the way your instance of OneOps was created, the steps to upgrade it remains the same and are very simple. When you setup OneOps two scripts are run by the setup process, oo-preqs.sh and oo_setup.sh. Once an instance is setup, both these scripts are also copied to /home/oneops directory on the server. Of these two scripts, oo_setup.sh can be used to update an OneOps standalone install at any time. You need an active internet connection to upgrade OneOps. You can see the list of releases in the OneOps git repository for any of the OneOps components. For example releases for sensor can be seen at https://github.com/oneops/sensor/releases Release candidates have a RC1 at the end and stable releases have — STABLE at the end. If you want to install a particular release, like 16.09.29-RC1 then invoke the script and pass the release number as the argument. Passing master will build the master branch and will build and install the latest and greatest code. This is great to get all the latest features and bug fixes but this will also make your installation susceptible to new bugs. ./oo_setup.sh master Ensure that the script is invoked as root. Instead of running as sudo, it helps if you are logged in as root with: sudo su - After the script is invoked, it will do a bunch of things to upgrade your OneOps. First it sets three variables: OO_HOME which is set to /home/oneops BUILD_BASE which is set to /home/oneops/build GITHUB_URL which is set to https://github.com/oneops All the builds will take place under BUILD_BASE. Under the BUILD_BASE, the script then checks if dev-tools exists. If it does, it updates it to the latest code by doing a git pull on it. If it does not, then it does a git clone and gets a latest copy from GitHub. The dev-tools repository has a set of tools for core OneOps developers. The most important of which are under the setupscripts sub directory. The script then copies all the scripts from under the setupscripts sub directory to OO_HOME directory. Once done it invokes the oneops_build.sh script. If you passed any build tag to oo_setup.sh script, that tag is passed on to oneops_build.sh script as is. The oneops_build.sh script is a control script so the speak. What it means is in turn it will invoke a bunch of other scripts which will shutdown services, pull and build the OneOps code, install the built code and then restart the services once done. Most of the scripts that run henceforth set and export a few variables again. Namely OO_HOME, BUILD_BASE and GITHUB_URL. 
Another variable that is set is SEARCH_SITE whose value is always set to localhost. The first thing the script does is to shutdown Cassandra on the server to conserve memory on the server and reduce the load during the build since the build itself is very memory and CPU intensive. It also marks the start time of the script. Next it runs the install_build_srvr.sh script by passing the build tag that was passed to the original script. This is a very innovative script which does a quick installation of Jenkins, installs various plugins to Jenkins, runs various jobs to do builds, monitors the jobs for either success or failure and then shuts down Jenkins all in an automated fashion. If you have your own Jenkins installation, I highly recommend you read through this script as this will give you great ideas for your own automation of installation, monitoring and controlling of Jenkins. As mentioned in preceding, the install_build_srvr.sh script sets a bunch of variables first. It then clones the git repository called build-wf from the BUILD_BASE if it does not already exist. If it does exist, it does a git pull to update the code. Outside of a Docker container, the build-wf is a most compact Jenkins installation you will find. You can check it out at the URL following: https://github.com/oneops/build-wf It consists of a Rakefile to download and install Jenkins and its associated plugins, a config.xml that configures it, a plugins.txt that provides a list of plugins and a jobs directory with all the associated jobs in it. If the script detects a Jenkins server that is already present and a build is already in progress, it cleanly attempts to shut down the existing Jenkins server. It then attempts to install the latest Jenkins jar using the following command: rake install Once the installation is done a dist directory is created to store the resulting build packages. After setting the path to local maven, the server is brought up using the command following: rake server If you did not specify what revision to build, the last stable build is used. The actual release revision itself is hardcoded in this script. Every time a stable release is made, this file is manually changed and the release version is updated and the file is checked in. After the server comes up, it is available at port 3001 if you are running on any cloud. If you are running a Vagrant setup, it will be mapped to port 3003. If you connect to one of these ports on your machines via your browser, you should be able to see your Jenkins in action. The script calls the job oo-all-oss via curl using Jenkins REST API. The oo-all-oss is a master job that in turn builds all of OneOps components, including the database components. Even the installation of Jenkins plugins is done via a Jenkins job called Jenkins-plugin. The script then goes into an infinite loop and keeps checking the job status till the jobs are done. Once all jobs are finished or if an error is encountered, the server is shutdown using rake stop Once the build completes, the Cassandra sever is started again. Once it starts the Cassandra service the script start deploying all the built artifacts. The first artifact to be deployed is the database artifact. For that it runs the init_db.sh script. This script first creates the three main schemas, namely kloopzapp, kloopzdb and activitidb. Since you are upgrading an existing installation, this script may very well give an error. 
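If you want to confirm that the error is only caused by pre-existing schemas, you can list the databases yourself before moving on. This is a small sketch, assuming local access to the PostgreSQL instance on the OneOps server:
$ sudo -u postgres psql -c '\l'
Seeing kloopzapp, kloopzdb, and activitidb in the output confirms that everything is already in place and the error can be safely ignored.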
Next the script will run a bunch of database scripts which will create tables, partitions, functions and other ddl statements. Again since you are upgrading any errors here can be safely ignored. Next to be installed, is the display. The script backs up the current display from /opt/oneops/app to /opt/oneops/~app in case a rollback is needed. It then copies and untars the newly built package. Using rake, it detects if the Rails database is setup. If the database version is returned as 0 then rake db:setup command is run to setup a brand new database. Or else rake db:migrate command is run to migrate and upgrade the database. The next component to get installed is amq. This is done by calling the script deploy_amq.sh. The amq gets installed in the directory /opt/activemq. Before installation the activemq service is stopped. The script the copies over the amq-config and amqplugin-fat jar. It also takes a backup of the old configuration and overwrites it with new configuration. After that the service is started again. After AMQ, the script installs all the webapps under Tomcat. Tomcat itself is installed under /usr/local/tomcat7 and all the webapps get installed under /usr/local/tomcat7/webapps. Before copying over all the war files, the tomcat service is stopped. The script also creates directories that the controller, publisher and transmitter rely on for successful operation. Once the wars are copied Tomcat service is started again. Tomcat, at this point will automatically deploy the services. After the web services are deployed the script deploys the search service. Before deployment, the search-consumer service is stopped. The search.jar and the startup script is then copied to /opt/oneops-search directory and the search-consumer service is started again. As a final step in deployment, the OneOps Admin gem is deployed. The OneOps Admin gem contains two commands that help administer OneOps from the backend. These commands are inductor and circuit. The script then either updates the circuit repository if it exists or clones it if it does not from https://github.com/oneops/circuit-oneops-1 and installs it. After successfully installing the circuit an inductor is created using the shared queue using the command following. This command is also a great reference for you should you wish to create your own inductors for testing. inductor add --mqhost localhost --dns on --debug on --daq_enabled true --collector_domain localhost --tunnel_metrics on --perf_collector_cert /etc/pki/tls/logstash/certs/logstash-forwarder.crt --ip_attribute public_ip --queue shared --mgmt_url http://localhost:9090 --logstash_cert_location /etc/pki/tls/logstash/certs/logstash-forwarder.crt --logstash_hosts vagrant.oo.com:5000 --max_consumers 10 --local_max_consumers 10 --authkey superuser:amqpass --amq_truststore_location /opt/oneops/inductor/lib/client.ts --additional_java_args "" --env_vars "" After installing the inductor, the display service is started and the standalone OneOps upgrade is complete. Updating a Enterprise OneOps Install Updating an enterprise OneOps install takes a different approach for a few different reasons. First of all in an enterprise install all the services get installed on their own instances. Secondly, since an enterprise install caters to an entire enterprise, stability, availability and scalability are always a issue. So here are a few things that you should remember before you upgrade your enterprise install. 
Ensure you have your own Jenkins build server and that it uploads the artifacts to your own Nexus repository.
Ensure this Nexus repository is configured in the OneOps that manages your enterprise OneOps installation.
Ensure you use a stable build and not a release candidate or the master build. This way you will have a well-tested build for your enterprise.
Make sure your backup server is configured and OneOps is being regularly backed up.
Although the downtime should be minimal to none, make sure you do the upgrade during the least busy time to avoid any unforeseen events.
If you have more than one OneOps installation, it is prudent to direct traffic to the second installation while the first one is being updated.
With these things in mind, the sequence for updating the various components is pretty much the same as updating a standalone OneOps install. However, the steps involved are a bit different. The first thing you need to do, as mentioned previously, is to choose an appropriate stable release that you want to deploy. Once you choose that, go to the OneOps that manages your enterprise installation and click on the OneOps assembly. Select Design from the left-hand side menu and then select Variables from the center screen. From the numerous variables you see, the one that you want to modify is called Version. Click on it, then click Edit in the upper right-hand corner, set the value to the release you chose, and click Save. Once the changes are saved, you can go ahead and commit your changes. You will notice that all the components derive their local version variable from the global version variable. At this point, if you click on Transition and attempt a deployment, OneOps will generate a deployment plan which will have the latest revision of all the components that need the upgrade. Go ahead and click Deploy. OneOps should do the rest.
Configuring database backups
As seen so far, OneOps has a complex architecture and relies on many databases to provide optimum functionality. As with deployment, the steps needed to back up a single-machine install and an enterprise install are different.
Backup a standalone OneOps install
For a standalone install, the three main PostgreSQL databases you need to back up are activitidb, kloopzapp, and kloopzdb. You can access these databases directly by logging in to your OneOps server and then doing a sudo as the postgres user:
# sudo su - postgres
-bash-4.2$ psql
postgres=# \l
Once you issue these commands, you can see these databases listed along with the default postgres database. Now you can design Chef recipes to take backups, or install Puppet or Ansible and automate the backup process. However, in accordance with the KISS principle, the simplest way to set up backups is to use the built-in PostgreSQL commands: pg_dump for a single database backup or pg_dumpall to back up all databases. You can add a cron job to run these commands nightly and another cron job to scp the dumped files and delete the local copies.
KISS is an acronym coined by the US Navy in 1960 for a design principle that states that systems work best if the design is kept simple and unnecessary complexity is avoided. Please look it up online if you want to read more.
As time goes by, your database size will also increase. To tackle that, you can pipe your backup command directly to a compression program:
pg_dumpall | gzip > filename.gz
Similarly, you can restore the databases by reversing that pipeline. Note that because pg_dumpall produces a plain SQL script, the restore goes through psql rather than pg_restore:
gunzip -c filename.gz | psql postgres
Backup an enterprise OneOps install
An enterprise OneOps install, as opposed to a standalone OneOps install, comes with backups built in. To make the backups work, you have to set up a few things correctly to begin with. Firstly, you have to set up the BACKUP-HOST global variable to point to a host that has plenty of storage attached to it. Once the variable is set, the value trickles down to the database components as local variables derived from the global variable. All backups taken are then copied to this host. For example, the following is the screenshot of this variable for CMSDB:
Once this is done, OneOps sets up automated jobs for database backups. These jobs are actually shell scripts which are wrappers over Chef recipes for the database snapshot backup.
Summary
In this article, we saw a few steps that will help you manage both kinds of installations. However, as DevOps you will have to manage not only assemblies but the OneOps system itself. Depending on the size of the organization and the complexity of the deployments you handle, you may opt to choose either a single-server installation or an enterprise install.
Resources for Article:
Further resources on this subject: Managing Application Configuration [article] Let's start with Extending Docker [article] Directory Services [article]


Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
10 min read
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated and the double binding is not for me. I also do not like React and its syntax. Riot is, as stated by their creators, "A React-like user interface micro-library" with simpler syntax that is five times smaller than React. What we are going to learn We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependency like JQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately ommited the inclusion of a client package manager like webpack or jspm because I want to focus on the Expressjs + Riotjs. For the same reason, the application data is persisted in memory. Requirements You just need to have any recent version of node.js(+4), text editor of your choice, and some JS, Express and website development knowledge. Project layout Under my project directory we are going to have 3 directories: * public For assets like the riot.js library itself. * views Common in most Express setup, this is where we put the markup. * client This directory will host the Riot tags (we will see more of that later) We will also have the package.json, our project manifesto, and an app.js file, containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks: ├── api.js ├── app.js ├── client │ ├── tag-todo.jade │ └── tag-todo.js ├── package.json ├── node_modules ├── public │ └── js │ ├── client.js │ └── riot.js └── views └── index.jade Project setup Create your project directory and from there run the following to install the node.js dependencies: $ npm init -y $ npm install --save body-parser express jade And the application directories: $ mkdir -p views public/js client Start! Lets start by creating the Express application file, app.js: 'use strict'; const express = require('express'), app = express(), bodyParser = require('body-parser'); // Set the views directory and template engine app.set('views', __dirname + '/views'); app.set('view engine', 'jade'); // Set our static directory for public assets like client scripts app.use(express.static('public')); // Parses the body on incoming requests app.use(bodyParser.json()); // Pretty prints HTML output app.locals.pretty = true; // Define our main route, HTTP "GET /" which will print "hello" app.get('/', function (req, res) { res.send('hello'); }); // Start listening for connections app.listen(3000, function (err) { if (err) { console.error('Cannot listen at port 3000', err); } console.log('Todo app listening at port 3000'); }); The #app object we just created, is just a plain Express application. After setting up the application we call the listen function, which will create an HTTP server listening at port 3000. To test our application setup, we open another terminal, cd to our project directory and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"? Node.js will not reload the site if you change the files, so I recommend you to install nodemon. Nodemon monitors your code and reloads the site on every change you perform on the JS source code. The command,$ npm install -g nodemon, installs the program on your computer globally, so you can run it from any directory. 
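If you prefer the command line over the browser, you can also check the route with curl while the server is running, assuming curl is installed on your machine:
$ curl http://localhost:3000
hello
The same kind of check will come in handy later to poke at the REST API without leaving the terminal.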
Okay, kill our previously created server and start a new one with $ nodemon app.js. Our first Riot tag Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward. Judge for yourself. <employee> <span>{ name }</span> </employee> Custom tags can contain code and can be nested as showed in the next code snippet: <employeeList> <employee each="{ items }" onclick={ gotoEmployee } /> <script> gotoEmployee (e) { var item = e.item; // do something } </script> </employeeList> This mechanism enables you to build complex functionality from simple units. Of course you can find more information at their documentation. On the next steps we will create our first tag: ./client/tag-todo.jade. Oh, we have not yet downloaded Riot! Here is the non minified Riot + compiler download. Download it to ./public/js/riot.js. Next step is to create our index view and tell our app to serve it. Locate / router handler, remove the res.send('hello) ', and update to: // Define our main route, HTTP "GET /" which will print "hello" app.get('/', function (req, res) { res.render('index'); }); Now, create the ./views/index.jade file: doctype html html head script(src="/js/riot.js") body h1 ToDo App todo Go to your browser and reload the page. You can read the big "ToDo App" but nothing else. There is a <todo></todo> tag there, but since the browser does not understand, this tag is not rendered. Let's tell Riot to mount the tag. Mount means Riot will use <todo></todo> as a placeholder for our —not yet there— todo tag. doctype html html head script(src="/js/riot.js") body h1 ToDo App script(type="riot/tag" src="/tags/todo.tag") todo script. riot.mount('todo'); Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I choose to serve them as regular Express templates. Of course, you can serve it as static assets or bundled. Just below the / route handler, add the /tags/:name.tag handler. // "/" route handler app.get('/', function (req, res) { res.render('index'); }); // tag route handler app.get('/tags/:name.tag', function (req, res) { var name = 'tag-' + req.params.name; res.render('../client/' + name); }); Now create the tag in ./client/tag-todo.jade: todo form(onsubmit="{ add }") input(type="text", placeholder="Needs to be done", name="todo") And reload the browser again. Errors gone and a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means on submit call the add function. You can add mix implementation with the markup, but I rather prefer to split markup from code. In Jade (and any other template language),it is trivial to include other files, which is exactly what we are going to do. Update the file as: todo form(onsubmit="{ add }") input(type="text", placeholder="Needs to be done", name="todo") script include tag-todo.js And create ./client/tag-todo.js with this snippet: 'use strict'; var self = this; var api = self.opts; When the tag gets mounted by Riot, it gets a context. That is the reason for var self = this;. That context can include the opts object. opts object can be anything of your choice, defined at the time you mount the tag. Let’s say we have an API object and we pass it to riot.mount as the second option at the time we mount the tag, that isriot.mount('todo', api). Then, at the time the tag is rendered this.opts will point to the api object. This is the mechanism we are going to use to expose our client api with the todo tag. 
Our form is still waiting for the add function, so edit the tag-todo.js again and append the following: self.add = function (e) { var title = self.todo.value; console.log('New ToDo', title); }; Reload the page, type something at the text field, and hit enter. The expected message should appear in your browser's dev console. Implementing our REST API We are ready to implement our REST API on the Express side. Create ./api.js file and add: 'use strict'; const express = require('express'); var app = module.exports = express(); // simple in memory DB var db = []; // handle ToDo creation app.post('/', function (req, res) { db.push({ title: req.body.title, done: false }); let todoID = db.length - 1; // mountpath = /api/todos/ res.location(app.mountpath + todoID); res.status(201).end(); }); // handle ToDo updates app.put('/', function (req, res) { db[req.body.id] = req.body; res.location('/' + req.body.id); res.status(204).end(); }); Our API supports ToDo creation/update, and it is architected as an Express sub application. To mount it, we just need to update app.js for the last time. Update the require block at app.js to: const express = require('express'), api = require('./api'), app = express(), bodyParser = require('body-parser'); ... And mount the api sub application just before the app.listen... // Mount the api sub application app.use('/api/todos/', api); We said we will implement a client for our API. It should expose two functions –create and update –located at ./public/client.js. Here is its source: 'use strict'; (function (api) { var url = '/api/todos/'; function extractIDFromResponse(xhr) { var location = xhr.getResponseHeader('location'); var result = +location.slice(url.length); return result; } api.create = function createToDo(title, callback) { var xhr = new XMLHttpRequest(); var todo = { title: title, done: false }; xhr.open('POST', url); xhr.setRequestHeader('Content-Type', 'application/json'); xhr.onload = function () { if (xhr.status === 201) { todo.id = extractIDFromResponse(xhr); } return callback(null, xhr, todo); }; xhr.send(JSON.stringify(todo)); }; api.update = function createToDo(todo, callback) { var xhr = new XMLHttpRequest(); xhr.open('PUT', url); xhr.setRequestHeader('Content-Type', 'application/json'); xhr.onload = function () { if (xhr.status === 200) { console.log('200'); } return callback(null, xhr, todo); }; xhr.send(JSON.stringify(todo)); }; })(this.todoAPI = {}); Okay, time to load the API client into the UI and share it with our tag. Modify the index view including it as a dependency: doctype html html head script(src="/js/riot.js") body h1 ToDo App script(type="riot/tag" src="/tags/todo.tag") script(src="/js/client.js") todo script. riot.mount('todo', todoAPI); We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. Reload the browser again, type something into the textbox, and hit enter. Nothing new happens because our add function is not yet using the API. We need to update ./client/tag-todo.js as: 'use strict'; var self = this; var api = self.opts; self.items = []; self.add = function (e) { var title = self.todo.value; api.create(title, function (err, xhr, todo) { if (xhr.status === 201) { self.todo.value = ''; self.items.push(todo); self.update(); } }); }; We have augmented self with an array of items. 
Everytime we create a new ToDo task (after we get the 201 code from the server) we push that new ToDo object into the array because we are going to print that list of items. In Riot, we can loop the items adding each attribute to any tag. Last, update to ./client/tag-todo.jade todo form(onsubmit="{ add }") input(type="text", placeholder="Needs to be done", name="todo") ul li(each="{items}") span {title} script include tag-todo.js Finally! Reload the page and create a ToDo! Next steps You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself. About the author Pedro NarcisoGarcíaRevington is a Senior Full Stack Developer with 10+ years experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD and polyglot persistence.

Benefits and Components of Docker

Packt
04 Nov 2016
19 min read
In this article by Jaroslaw Krochmalski, author of Developing with Docker, at the beginning, Docker was created as an internal tool by a Platform as a Service company, called dotCloud. Later on, in March 2013, it was released as open source. Apart from the Docker Inc. team, which is the main sponsor, there are some other big names contributing to the tool –Red Hat, IBM, Microsoft, Google, and Cisco Systems, just to name a few. Software development today needs to be agile and react quickly to the changes. We use methodologies such as Scrum, estimate our work in story points and attend the daily stand-ups. But what about preparing our software for shipment and the deployment? Let's see how Docker fits into that scenario and can help us being agile. (For more resources related to this topic, see here.) In this article, we will cover the following topics: The basic idea behind Docker A difference between virtualization and containerization Benefits of using Docker Components available to install We will begin with a basic idea behind this wonderful tool. The basic idea The basic idea behind Docker is to pack an application with all of its dependencies (let it be binaries, libraries, configuration files, scripts, jars, and so on) into a single, standardized unit for software development and deployment. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, and system libraries—anything you can install on a server. This guarantees that it will always run in the same way, no matter what environment it will be deployed in. With Docker, you can build some Node.js or Java project (but you are of course not limited to those two) without having to install Node.js or Java on your host machine. Once you're done with it, you can just destroy the Docker image and it's as though nothing ever happened. It's not a programming language or a framework, rather think of it as about a tool that helps solving the common problems such as installing, distributing, and managing the software. It allows programmers and DevOps to build, ship, and run their code anywhere. You can think that maybe Docker is a virtualization engine, but it's far from it as we will explain in a while. Containerization versus virtualization To fully understand what Docker really is, first we need to understand the difference between traditional virtualization and containerization. Let's compare those two technologies now. Traditional virtualization A traditional virtual machine, which represents the hardware-level virtualization, is basically a complete operating system running on top of the host operating system. There are two types of virtualization hypervisors: Type 1 and Type 2. Type 1 hypervisors provide server virtualization on bare metal hardware—there is no traditional, end user's operating system. Type 2 hypervisors, on the other hand, are commonly used as a desktop virtualization—you run the virtualization engine on top of your own operating system. There can be a lot of use cases that would make an advantage from using virtualization—the biggest asset is that you can run many virtual machines with totally different operating systems on a single host. Virtual machines are fully isolated, hence very secure. But nothing comes without a price. There are many drawbacks—they contain all the features that an operating system needs to have: device drivers, core system libraries, and so on. 
They are heavyweight, usually resource-hungry and not so easy to set up—virtual machines require full installation. They require more computing resources to execute. To successfully run an application on a virtual machine, the hypervisor needs to first import the virtual machine and then power it up, and this takes time. Furthermore, their performance gets substantially degraded. As a result, only a few virtual machines can be provisioned and made available to work on a single machine. Containerization The Docker software runs in an isolated environment called a Docker container. A Docker container is not a virtual machine in the popular sense. It represents operating system virtualization. While each virtual machine image runs on an independent guest OS, the Docker images run within the same operating system kernel. A container has its own filesystem and environment variables. It's self-sufficient. Because of the containers run within the same kernel, they utilize fewer system resources. The base container can be, and usually is, very lightweight. Worth knowing is that Docker containers are isolated not only from the underlying OS, but from each other as well. There is no overhead related to a classic virtualization hypervisor and a guest operating system. This allows achieving almost bare metal, near native performance. The boot time of a dockerized application is usually very fast due to the low overhead of containers. It is also possible to speed up the roll-out of hundreds of application containers in seconds and to reduce the time taken for provisioning your software. As you can see, Docker is quite different from the traditional virtualization engines. Be aware that containers cannot substitute virtual machines for all use cases. A thoughtful evaluation is still required to determine what is best for your application. Both solutions have their advantages. On one hand we have the fully isolated, secure virtual machine with average performance and on the other hand, we have the containers that are missing some of the key features (such as total isolation), but are equipped with high performance that can be provisioned swiftly. Let's see what other benefits you will get when using Docker containerization. Benefits of using Docker When comparing the Docker containers with traditional virtual machines, we have mentioned some of its advantages. Let's summarize them now in more detail and add some more. Speed and size As we have said before, the first visible benefit of using Docker will be very satisfactory performance and short provisioning time. You can create or destroy containers quickly and easily. Docker shares only the Kernel, nothing less, nothing more. However, it reuses the image layers on which the specific image is built upon. Because of that, multiple versions of an application running in containers will be very lightweight. The result is faster deployment, easier migration, and nimble boot times. Reproducible and portable builds Using Docker enables you to deploy ready-to-run software, which is portable and extremely easy to distribute. Your containerized application simply runs within its container, there's no need for traditional installation. The key advantage of a Docker image is, it is bundled with all the dependency the containerized application needs to run. The lack of installation of dependencies process has a huge advantage. This eliminates problems such as software and library conflicts or even driver compatibility issues. 
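To get a feeling for how little ceremony is involved, a couple of commands are enough to go from nothing to a running, containerized web server. The following is a small sketch using the public nginx image from Docker Hub:
$ docker run -d -p 8080:80 --name web nginx
$ docker ps
$ docker stop web && docker rm web
The first command downloads the image if it is not already cached, starts the container in the background, and maps port 8080 on the host to port 80 in the container; at no point is nginx installed on the host itself.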
Because of Docker's reproducible build environment, it's particularly well suited for testing, especially in your continuous integration flow. You can quickly boot up identical environments to run the tests. And because the container images are identical each time, you can distribute the workload and run tests in parallel without a problem. Developers can run the same image on their machine that will later be run in production, which again is a huge advantage for testing. The use of Docker containers speeds up continuous integration. There are no more endless build-test-deploy cycles. Docker containers ensure that applications run identically in development, test, and production environments.
One of Docker's greatest features is portability. Docker containers are portable; they can be run from anywhere: your local machine, a nearby or distant server, and a private or public cloud. When speaking about the cloud, all major cloud computing providers, such as Amazon Web Services and Google's Compute Platform, have recognized Docker and now support it. Docker containers can be run inside an Amazon EC2 instance or a Google Compute Engine instance, provided that the host operating system supports Docker. A container running on an Amazon EC2 instance can easily be transferred to some other environment, achieving the same consistency and functionality. Docker also works very well with various other IaaS (Infrastructure-as-a-Service) providers, such as Microsoft's Azure, IBM SoftLayer, or OpenStack. This additional level of abstraction from your infrastructure layer is an indispensable feature. You can just develop your software without worrying about the platform it will run on later. It's truly a write once, run everywhere solution.
Immutable and agile infrastructure
Maintaining a truly idempotent configuration management code base can be a tricky and time-consuming process. The code grows over time and becomes more and more troublesome. That's why the idea of an immutable infrastructure is becoming more and more popular nowadays. Containerization comes to the rescue. By using containers during the development and deployment of your applications, you can simplify the process. Having a lightweight Docker server that needs almost no configuration management, you manage your applications simply by deploying and redeploying containers to the server. And again, because the containers are very lightweight, it takes only seconds of your time.
As a starting point, you can download a prebuilt Docker image from the Docker Hub, which is like a repository of ready-to-use images. There are many choices of web servers, runtime platforms, databases, messaging servers, and so on. It's a real gold mine of software you can use for free to get a base foundation for your own project. The immutability of Docker images is a result of the way they are created. Docker makes use of a special file called a Dockerfile. This file contains all the setup instructions on how to create an image, such as must-have components, libraries, exposed shared directories, network configuration, and so on. An image can be very basic, containing nothing but the operating system foundations, or, something that is more common, containing a whole prebuilt technology stack that is ready to launch. You can create images by hand, but the process can also be automated. Docker creates images in a layered fashion: every feature you include will be added as another layer in the base image.
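As a rough illustration only, a Dockerfile for a small Node.js service might look like the following sketch; the base image tag, file names, and port are assumptions made for the example rather than anything prescribed by the text. Each instruction adds one layer on top of the previous one:

# Base layer: a Linux filesystem with the Node.js runtime preinstalled
FROM node:6
# Dependency layers: copy the manifest and install packages
COPY package.json /app/
RUN cd /app && npm install
# Application layers: copy the source code itself
COPY . /app
EXPOSE 3000
CMD ["node", "/app/server.js"]

When only the application code changes, Docker rebuilds just the final layers and reuses the cached layers beneath them.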
This layered approach is another serious speed boost compared to traditional virtualization techniques.
Tools and APIs
Of course, Docker is not just a Dockerfile processor and the runtime engine. It's a complete package with a wide selection of tools and APIs that are helpful during a developer's and a DevOps engineer's daily work. First of all, there's the Docker Toolbox, which is an installer to quickly and easily install and set up a Docker environment on your own machine. Kitematic is a desktop developer environment for using Docker on Windows and Mac OS X. The Docker distribution also contains a whole bunch of command-line tools. Let's look at them now.
Tools overview
On Windows, depending on the Windows version you use, there are two choices. It can be either Docker for Windows if you are on Windows 10 or later, or Docker Toolbox for all earlier versions of Windows. The same applies to Mac OS. The newest offering is Docker for Mac, which runs as a native Mac application and uses xhyve to virtualize the Docker Engine environment and Linux kernel. For earlier versions of Mac that don't meet the Docker for Mac requirements, you should pick Docker Toolbox for Mac. The idea behind Docker Toolbox and the native Docker applications is the same: to virtualize the Linux kernel and Docker Engine on top of your operating system. We will be using Docker Toolbox, as it is more universal; it will run on all Windows and Mac OS versions. The installation package for Windows and Mac OS is wrapped in an executable called the Docker Toolbox. The package contains all the tools you need to begin working with Docker. Of course, there are tons of additional third-party utilities compatible with Docker, some of them very useful. But for now, let's focus on the default toolset. Before we start the installation, let's look at the tools that the installer package contains to better understand what changes will be made to your system.
Docker Engine and Docker Engine client
Docker is a client-server application. It consists of the daemon that does the important job: it builds and downloads images, starts and stops containers, and so on. It exposes a REST API that specifies interfaces for interacting with the daemon and is used for remote management. The Docker Engine accepts Docker commands from the command line, such as docker run to run an image, docker ps to list running containers, docker images to list images, and so on. The Docker client is a command-line program that is used to manage Docker hosts running Linux containers. It communicates with the Docker server using the REST API wrapper. You will interact with Docker by using the client to send commands to the server. The Docker Engine works only on Linux. If you want to run Docker on Windows or Mac OS, or want to provision multiple Docker hosts on a network or in the cloud, you will then need Docker Machine.
Docker Machine
Docker Machine is a fairly new command-line tool created by the Docker team to manage Docker servers you can deploy containers to. It deprecated the old way of installing Docker with the Boot2Docker utility. Docker Machine eliminates the need to create virtual machines manually and install Docker before starting Docker containers on them. It handles the provisioning and installation process for you behind the scenes. In other words, it's a quick way to get a new virtual machine provisioned and ready to run Docker containers. This is an indispensable tool when developing a PaaS (Platform as a Service) architecture.
Docker Machine not only creates a new VM with the Docker Engine installed in it, but sets up the certificate files for authentication and then configures the Docker client to talk to it. For flexibility purposes, the Docker Machine introduces the concept of drivers. Using drivers, Docker is able to communicate with various virtualization software and cloud providers. In fact, when you install Docker for Windows or Mac OS, the default VirtualBox driver will be used. The following command will be executed behind the scenes: docker-machine create --driver=virtualbox default Another available driver is amazonec2 for Amazon Web Services. It can be used to install Docker on the Amazon's cloud—we will do it later in this article. There are a lot of drivers ready to be used, and more are coming all the time. The list of existing official drivers with their documentation is always available at the Docker Drivers website: https://docs.docker.com/machine/drivers. The list contains the following drivers at the moment: Amazon Web Services Microsoft Azure Digital Ocean Exoscale Google Compute Engine Generic Microsoft Hyper-V OpenStack Rackspace IBM Softlayer Oracle VirtualBox VMware vCloud Air VMware Fusion VMware vSphere Apart from these, there is also a lot of third-party driver plugins available freely on the Internet sites such as GitHub. You can find additional drivers for different cloud providers and virtualization platforms, such as OVH Cloud or Parallels for Mac OS, for example, you are not limited to Amazon's AWS or Oracle's VirtualBox. As you can see, the choice is very broad. If you cannot find a specific driver for your Cloud provider, try looking for it on the GitHub. When installing the Docker Toolbox on Windows or Mac OS, Docker Machine will be selected by default. It's mandatory and currently the only way to run Docker on these operating systems. Installing the Docker Machine is not obligatory for Linux—there is no need to virtualize the Linux kernel there. However, if you want to deal with the Cloud providers or just want to have common runtime environment portable between Mac OS, Windows, and Linux, you can install Docker Machine for Linux as well. We will describe the process later in this article. Machine will be also used behind the scenes when using the graphical tool Kitematic, which we will present in a while. After the installation process, Docker Machine will be available as a command-line tool: docker-machine. Kitematic Kitematic is the software tool you can use to run containers through a plain, yet robust graphical user interface. In 2015, Docker acquired the Kitematic team, expecting to attract many more developers and hoping to open up the containerization solution to more developers and a wider, more general public. Kitematic is now included by default when installing Docker Toolbox on Mac OS and MS Windows. You can use it to comfortably search and fetch images you need from the Docker Hub. The tool can also be used to run your own app containers. Using the GUI, you can edit environment variables, map ports, configure volumes, study logs, and have command-line access to the containers. It is worth mentioning that you can seamlessly switch between Kitematic GUI and command-line interface to run and manage application containers. Kitematic is very convenient, however, if you want to have more control when dealing with the containers or just want to use scripting - the command line will be a better solution. 
In fact, Kitematic allows you to switch back and forth between the Docker CLI and the graphical interface. Any changes you make on the command-line interface will be directly reflected in Kitematic. The tool is simple to use, as you will see at the end of this article, when we are going to test our setup on Mac or Windows PC, we will be using the command-line interface for working with Docker. Docker compose Compose is a tool, executed from the command line as docker-compose. It replaces the old fig utility. It's used to define and run multicontainer Docker applications. Although it's very easy to imagine a multi-container application (like a web server in one container and a database in the other), it's not mandatory. So if you decide that your application will fit in a single Docker container, there will be no use for docker-compose. In real life, it's very likely that your application will span into multiple containers. With docker-compose, you use a compose file to configure your application's services, so they can be run together in an isolated environment. Then, using a single command, you create and start all the services from your configuration. When it comes to multicontainer applications, docker-compose is great for development and testing, as well as continuous integration workflows. Oracle VirtualBox Oracle VM VirtualBox is a free and open source hypervisor for x86 computers from Oracle. It will be installed by default when installing the Docker Toolbox. It supports the creation and management of virtual machines running Windows, Linux, BSD, OS/2, Solaris, and so on. In our case, the Docker Machine using VirtualBox driver, will use VirtualBox to create and boot a bitsy Linux distribution capable of the running docker-engine. It's worth mentioning that you can also run the teensy–weensy virtualized Linux straight from the VirtualBox itself. Every Docker Machine you have created using the docker-machine or Kitematic, will be visible and available to boot in the VirtualBox, when you run it directly, as shown in the following screenshot: You can start, stop, reset, change settings, and read logs in the same way as for other virtualized operating systems. You can use VirtualBox in Windows or Mac for other purposes than Docker. Git Git is a distributed version control system that is widely used for software development and other version control tasks. It has emphasis on speed, data integrity, and support for distributed, non-linear workflows. Docker Machine and Docker client follows the pull/push model of Git for fetching the needed dependencies from the network. For example, if you decide to run the Docker image which is not present on your local machine, Docker will fetch this image from the Docker Hub. Docker doesn't internally use Git for any kind of resource versioning. It does, however, rely on hashing to uniquely identify the filesystem layers which is very similar to what Git does. Docker also takes initial inspiration in the notion of commits, pushes, and pulls. Git is also included in the Docker Toolbox installation package. From a developer's perspective, there are tools especially useful in a programmer's daily job, be it IntelliJ IDEA Docker Integration Plugin for Java fans or Visual Studio 2015 Tools for Docker for those who prefer C#. They let you download and build Docker images, create and start containers, and carry out other related tasks straight from your favorite IDE. 
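Returning to docker-compose for a moment, the following is a hedged sketch of a compose file for a simple two-container application; the service names, image tag, ports, and environment values are illustrative assumptions, not taken from the text:

version: '2'
services:
  web:
    # The application container, built from the Dockerfile in the current directory
    build: .
    ports:
      - "8080:3000"
    depends_on:
      - db
  db:
    # A stock PostgreSQL image pulled from the Docker Hub
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example

With a file like this in place, docker-compose up builds and starts both containers together, and docker-compose down stops and removes them again.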
Apart from the tools included in the Docker's distribution package (it will be Docker Toolbox for older versions of Windows or Docker for Windows and Docker for Mac), there are hundreds of third-party tools, such as Kubernetes and Helios (for Docker orchestration), Prometheus (for monitoring of statistics) or Swarm and Shipyard for managing clusters. As Docker captures higher attention, more and more Docker-related tools pop-up almost every week. But these are not the only tools available for you. Additionally, Docker provides a set of APIs that can be very handy. One of them is the Remote API for the management of the images and containers. Using this API, you will be able to distribute your images to the runtime Docker engine. The container can be shifted to a different machine that runs Docker, and executed there without compatibility concerns. This may be especially useful when creating PaaS (Platform-as-a-Service) architectures. There's also the Stats API that will expose live resource usage information (such as CPU, memory, network I/O and block I/O) for your containers. This API endpoint can be used to create tools that show how your containers behave, for example, on a production system. Summary By now we understand the difference between the virtualization and containerization and also, I hope, we can see the advantages of using the latter. We also know what components are available for us to install and use. Let's begin our journey to the world of containers and go straight to the action by installing the software. Resources for Article: Further resources on this subject: Let's start with Extending Docker [article] Docker Hosts [article] Understanding Docker [article]

Getting Started with ASP.NET Core and Bootstrap 4

Packt
04 Nov 2016
17 min read
This article is by Pieter van der Westhuizen, author of the book Bootstrap for ASP.NET MVC - Second edition. As developers, we can find it difficult to create great-looking user interfaces from scratch when using HTML and CSS. This is especially hard when developers have extensive experience developing Windows Forms applications. Microsoft introduced Web Forms to remove the complexities of building websites and to ease the switch from Windows Forms to the Web. This in turn makes it very hard for Web Forms developers, and even harder for Windows Forms developers to switch to ASP.NET MVC. Bootstrap is a set of stylized components, plugins, and a layout grid that takes care of the heavy lifting. Microsoft included Bootstrap in all ASP.NET MVC project templates since 2013. In this article, we will cover the following topics: Files included in the Bootstrap distribution How to create an empty ASP.NET site and enable MVC and static files Adding the Bootstrap files using Bower Automatically compile the Bootstrap Sass files using Gulp Installing additional icon sets How to create a layout file that references the Bootstrap files (For more resources related to this topic, see here.) Files included in the Bootstrap distribution In order to get acquainted with the files inside the Bootstrap distribution, you need to download its source files. At the time of writing, Bootstrap 4 was still in Alpha, and its source files can be downloaded from http://v4-alpha.getbootstrap.com. Bootstrap style sheets Do not be alarmed by the amount of files inside the css folder. This folder contains four .css files and two .map files. We only need to include the bootstrap.css file in our project for the Bootstrap styles to be applied to our pages. The bootstrap.min.css file is simply a minified version of the aforementioned file. The .map files can be ignored for the project we'll be creating. These files are used as a type of debug symbol (similar to the .pdb files in Visual Studio), which allows developers to live edit their preprocessor source files. Bootstrap JavaScript files The js folder contains two files. All the Bootstrap plugins are contained in the bootstrap.js file. The bootstrap.min.js file is simply a minified version of the aforementioned file. Before including the file in your project, make sure that you have a reference to the jQuery library because all Bootstrap plugins require jQuery. Bootstrap fonts/icons Bootstrap 3 uses Glyphicons to display various icons and glyphs in Bootstrap sites. Bootstrap 4 will no longer ship with glyphicons included, but you still have the option to include it manually or to include your own icons. The following two icon sets are good alternatives to Glyphicons: Font Awesome, available from http://fontawesome.io/ GitHub's Octicons, available from https://octicons.github.com/ Bootstrap source files Before you can get started with Bootstrap, you first need to download the Bootstrap source files. At the time of writing, Bootstrap 4 was at version 4 Alpha 3. You have a few choices when adding Bootstrap to you project. You can download the compiled CSS and JavaScript files or you can use a number of package managers to install the Bootstrap Sass source to your project. In this article, you'll be using Bower to add the Bootstrap 4 source files to your project. 
For a complete list of Bootstrap 4 Alpha installation sources, visit http://v4-alpha.getbootstrap.com/getting-started/download/ CSS pre-processors CSS pre-processors process code written in a pre-processed language, such as LESS or Sass, and convert it into standard CSS, which in turn can be interpreted by any standard web browser. CSS pre-processors extend CSS by adding features that allow variables, mixins, and functions. The benefits of using CSS pre-processors are that they are not bound by any limitations of CSS. CSS pre-processors can give you more functionality and control over your style sheets and allows you to write more maintainable, flexible, and extendable CSS. CSS pre-processors can also help to reduce the amount of CSS and assist with the management of large and complex style sheets that can become harder to maintain as the size and complexity increases. In essence, CSS pre-processors such as Less and Sass enables programmatic control over your style sheets. Bootstrap moved their source files from Less to Sass with version 4.  Less and Sass are very alike in that they share a similar syntax as well as features such as variables, mixins, partials, and nesting, to name but a few. Less was influenced by Sass, and later on, Sass was influenced by Less when it adopted CSS-like block formatting, which worked very well for Less. Creating an empty ASP.NET MVC site and adding Bootstrap manually The default ASP.NET 5 project template in Visual Studio 2015 Update 3 currently adds Bootstrap 3 to the project. In order to use Bootstrap 4 in your ASP.NET project, you'll need to create an empty ASP.NET project and add the Bootstrap 4 files manually. To create a project that uses Bootstrap 4, complete the following process: In Visual Studio 2015, select New | Project from the File menu or use the keyboard shortcut Ctrl + Shift + N. From the New Project dialog window, select ASP.NET Core Web Application (.NET Core), which you'll find under Templates | Visual C# | Web. Select the Empty project template from the New ASP.NET Core Web Application (.NET Core) Project dialog window and click on OK. Enabling MVC and static files The previous steps will create a blank ASP.NET Core project. Running the project as-is will only show a simple Hello World output in your browser. In order for it to serve static files and enable MVC, we'll need to complete the following steps: Double-click on the project.json file inside the Solution Explorer in Visual Studio. Add the following two lines to the dependencies section, and save the project.json file: "Microsoft.AspNetCore.Mvc": "1.0.0", "Microsoft.AspNetCore.StaticFiles": "1.0.0" You should see a yellow colored notification inside the Visual Studio Solution Explorer with a message stating that it is busy restoring packages. Open the Startup.cs file. 
To enable MVC for the project, change the ConfigureServices method to the following: public void ConfigureServices(IServiceCollection services) {     services.AddMvc(); } Finally, update the Configure method to the following code: public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) {     loggerFactory.AddConsole();       if (env.IsDevelopment())     {         app.UseDeveloperExceptionPage();     }       app.UseStaticFiles();       app.UseMvc(routes =>     {         routes.MapRoute(             name: "default",             template: "{controller=Home}/{action=Index}/{id?}");     }); } The preceding code will enable logging and the serving of static files such as images, style sheets, and JavaScript files. It will also set the default MVC route. Creating the default route controller and view When creating an empty ASP.NET Core project, no default controller or views will be created by default. In the previous steps, we've created a default route to the Index action of the Home controller. In order for this to work, we first need to complete the following steps: In the Visual Studio Solution Explorer, right-click on the project name and select Add | New Folder from the context menu. Name the new folder Controllers. Add another folder called Views. Right-click on the Controllers folder and select Add | New Item… from the context menu. Select MVC Controller Class from the Add New Item dialog, located under .NET Core | ASP.NET, and click on Add. The default name when adding a new controller will be HomeController.cs: Next, we'll need to add a subfolder for the HomeController in the Views folder. Right-click on the Views folder and select Add | New Folder from the context menu. Name the new folder Home. Right-click on the newly created Home folder and select Add | New Item… from the context menu. Select the MVC View Page item, located under .NET Core | ASP.NET; from the list, make sure the filename is Index.cshtml and click on the Add button: Your project layout should resemble the following in the Visual Studio Solution Explorer: Adding the Bootstrap 4 files using Bower With ASP.NET 5 and Visual Studio 2015, Microsoft provided the ability to use Bower as a client-side package manager. Bower is a package manager for web frameworks and libraries that is already very popular in the web development community. You can read more about Bower and search the packages it provides by visiting http://bower.io/ Microsoft's decision to allow the use of Bower and package managers other than NuGet for client-side dependencies is because it already has such a rich ecosystem. Do not fear! NuGet is not going away. You can still use NuGet to install libraries and components, including Bootstrap 4! To add the Bootstrap 4 source files to your project, you need to follow these steps: Right-click on the project name inside Visual Studio's Solution Explorer and select Add | New Item…. Under .NET Core | Client-side, select the Bower Configuration File item, make sure the filename is bower.json and click on Add, as shown here: If not already open, double-click on the bower.json file to open it and add Bootstrap 4 to the dependencies array. The code for the file should look similar to the following: {    "name": "asp.net",    "private": true,   "dependencies": {     "bootstrap": "v4.0.0-alpha.3"   } } Save the bower.json file. Once you've saved the bower.json file, Visual Studio will automatically download the dependencies into the wwwroot/lib folder of your project. 
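Before we move on, here is a quick look back at the HomeController we added a moment ago. A minimal sketch is all that is needed for the default route to work; the namespace below is a placeholder, and yours will follow your own project name:

using Microsoft.AspNetCore.Mvc;

namespace YourProjectName.Controllers
{
    public class HomeController : Controller
    {
        // Handles the default route configured in Startup.cs and renders Views/Home/Index.cshtml
        public IActionResult Index()
        {
            return View();
        }
    }
}

That is enough for the default route to render the empty Index view we created earlier.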
Because Bootstrap 4 also depends on jQuery and Tether, you'll notice that jQuery and Tether have been downloaded as part of the Bootstrap dependency as well. After you've added Bootstrap to your project, your project layout should look similar to the following screenshot:
Compiling the Bootstrap Sass files using Gulp
When adding Bootstrap 4, you'll notice that the bootstrap folder contains a subfolder called dist. Inside the dist folder, there are ready-to-use Bootstrap CSS and JavaScript files that you can use as-is if you do not want to change any of the default Bootstrap colours or properties. However, because the source Sass files were also added, this gives you extra flexibility in customizing the look and feel of your web application. For instance, the default colour of the base Bootstrap distribution is gray; if you wanted to change all the default colours to shades of blue, it would be tedious work to find and replace all references to the colours in the CSS file. For example, if you open the _variables.scss file, located in wwwroot/lib/bootstrap/scss, you'll notice the following code:
$gray-dark:                 #373a3c !default;
$gray:                      #55595c !default;
$gray-light:                #818a91 !default;
$gray-lighter:              #eceeef !default;
$gray-lightest:             #f7f7f9 !default;
We're not going to go into too much detail regarding Sass in this article, but the $ in front of the names in the preceding code indicates that these are variables used to compile the final CSS file. In essence, changing the values of these variables will change the colors to the new values we've specified when the Sass file is compiled. To learn more about Sass, head over to http://sass-lang.com/
Adding Gulp npm packages
We'll need to add the gulp and gulp-sass Node packages to our solution in order to be able to perform actions using Gulp. To accomplish this, you will need to use npm. npm is the default package manager for the Node.js runtime environment. You can read more about it at https://www.npmjs.com/
To add the gulp and gulp-sass npm packages to your ASP.NET project, complete the following steps:
Right-click on your project name inside the Visual Studio Solution Explorer and select Add | New Item… from the project context menu.
Find the npm Configuration File item, located under .NET Core | Client-side. Keep its name as package.json and click on Add.
If not already open, double-click on the newly added package.json file and add the following two dependencies to the devDependencies array inside the file:
"devDependencies": {
  "gulp": "3.9.1",
  "gulp-sass": "2.3.2"
}
This will add version 3.9.1 of the gulp package and version 2.3.2 of the gulp-sass package to your project. At the time of writing, these were the latest versions. Your version numbers might differ.
Enabling Gulp-Sass compilation
Visual Studio does not compile Sass files to CSS by default without installing extensions, but we can enable it using Gulp. Gulp is a JavaScript toolkit used to stream client-side code through a series of processes when an event is triggered during build. Gulp can be used to automate and simplify development and repetitive tasks, such as the following:
Minify CSS, JavaScript, and image files
Rename files
Combine CSS files
Learn more about Gulp at http://gulpjs.com/
Before you can use Gulp to compile your Sass files to CSS, you need to complete the following tasks:
Add a new Gulp Configuration File to your project by right-clicking on the project name and selecting Add | New Item… from the context menu.
The location of the item is .NET Core | Client-side. Keep the filename as gulpfile.js and click on the Add button. Change the code inside the gulpfile.js file to the following: var gulp = require('gulp'); var gulpSass = require('gulp-sass'); gulp.task('compile-sass', function () {     gulp.src('./wwwroot/lib/bootstrap/scss/bootstrap.scss')         .pipe(gulpSass())         .pipe(gulp.dest('./wwwroot/css')); }); The code in the preceding step first declares that we require the gulp and gulp-sass packages, and then creates a new task called compile-sass that will compile the Sass source file located at /wwwroot/lib/bootstrap/scss/bootstrap.scss and output the result to the /wwwroot/css folder. Running Gulp tasks With the gulpfile.js properly configured, you are now ready to run your first Gulp task to compile the Bootstrap Sass to CSS. Accomplish this by completing the following steps: Right-click on gulpfile.js in the Visual Studio Solution Explorer and choose Task Runner Explorer from the context menu. You should see all tasks declared in the gulpfile.js listed underneath the Tasks node. If you do not see tasks listed, click on the Refresh button, located on the left-hand side of the Task Runner Explorer window. To run the compile-sass task, right-click on it and select Run from the context menu. Gulp will compile the Bootstrap 4 Sass files and output the CSS to the specified folder. Binding Gulp tasks to Visual Studio events Right-clicking on every task in the Task Runner Explorer in order to execute each, could involve a lot of manual steps. Luckily, Visual Studio allows us to bind tasks to the following events inside Visual Studio: Before Build After Build Clean Project Open If, for example, we would like to compile the Bootstrap 4 Sass files before building our project, simply select Before Build from the Bindings context menu of the Visual Studio Task Runner Explorer: Visual Studio will add the following line of code to the top of gulpfile.js to tell the compiler to run the task before building the project: /// <binding BeforeBuild='compile-sass' /> Installing Font Awesome Bootstrap 4 no longer comes bundled with the Glyphicons icon set. However, there are a number of free alternatives available for use with your Bootstrap and other projects. Font Awesome is a very good alternative to Glyphicons that provides you with 650 icons to use and is free for commercial use. Learn more about Font Awesome by visiting https://fortawesome.github.io/Font-Awesome/ You can add a reference to Font Awesome manually, but since we already have everything set up in our project, the quickest option is to simply install Font Awesome using Bower and compile it to the Bootstrap style sheet using Gulp. To accomplish this, follow these steps: Open the bower.json file, which is located in your project route. If you do not see the file inside the Visual Studio Solution Explorer, click on the Show All Files button on the Solution Explorer toolbar. Add font-awesome as a dependency to the file. The complete listing of the bower.json file is as follows: {   "name": "asp.net",   "private": true,   "dependencies": {     "bootstrap": "v4.0.0-alpha.3",     "font-awesome": "4.6.3"   } } Visual Studio will download the Font Awesome source files and add a font-awesome subfolder to the wwwroot/lib/ folder inside your project. Copy the fonts folder located under wwwroot/font-awesome to the wwwroot folder. 
Next, open the bootstrap.scss file located in the wwwroot/lib/bootstrap/scss folder and add the following line at the end of the file: $fa-font-path: "/fonts"; @import "../../font-awesome/scss/font-awesome.scss"; Run the compile-sass task via the Task Runner Explorer to recompile the Bootstrap Sass. The preceding steps will include Font Awesome in your Bootstrap CSS file, which in turn will enable you to use it inside your project by including the mark-up demonstrated here: <i class="fa fa-pied-piper-alt"></i> Creating a MVC Layout page The final step for using Bootstrap 4 in your ASP.NET MVC project is to create a Layout page that will contain all the necessary CSS and JavaScript files in order to include Bootstrap components in your pages. To create a Layout page, follow these steps: Add a new sub folder called Shared to the Views folder. Add a new MVC View Layout Page to the Shared folder. The item can be found in the .NET Core | Server-side category of the Add New Item dialog. Name the file _Layout.cshtml and click on the Add button: With the current project layout, add the following HTML to the _Layout.cshtml file: <!DOCTYPE html> <html lang="en"> <head>     <meta charset="utf-8">     <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">     <meta http-equiv="x-ua-compatible" content="ie=edge">     <title>@ViewBag.Title</title>     <link rel="stylesheet" href="~/css/bootstrap.css" /> </head> <body>     @RenderBody()       <script src="~/lib/jquery/dist/jquery.js"></script>     <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script> </body> </html> Finally, add a new MVC View Start Page to the Views folder called _ViewStart.cshtml. The _ViewStart.cshtml file is used to specify common code shared by all views. Add the following Razor mark-up to the _ViewStart.cshtml file: @{     Layout = "_Layout"; } In the preceding mark-up, a reference to the Bootstrap CSS file that was generated using the Sass source files and Gulp is added to the <head> element of the file. In the <body> tag, the @RenderBody method is invoked using Razor syntax. Finally, at the bottom of the file, just before the closing </body> tag, a reference to the jQuery library and the Bootstrap JavaScript file is added. Note that jQuery must always be referenced before the Bootstrap JavaScript file. Content Delivery Networks You could also reference the jQuery and Bootstrap library from a Content Delivery Network (CDN). This is a good approach to use when adding references to the most widely used JavaScript libraries. This should allow your site to load faster if the user has already visited a site that uses the same library from the same CDN, because the library will be cached in their browser. In order to reference the Bootstrap and jQuery libraries from a CDN, change the <script> tags to the following: <script src="https://code.jquery.com/jquery-3.1.0.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.2/js/bootstrap.min.js"></script> There are a number of CDNs available on the Internet; listed here are some of the more popular options: MaxCDN: https://www.maxcdn.com/ Google Hosted Libraries: https://developers.google.com/speed/libraries/ CloudFlare: https://www.cloudflare.com/ Amazon CloudFront: https://aws.amazon.com/cloudfront/ Learn more about Bootstrap Frontend development with Bootstrap 4
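To quickly confirm that the layout, Bootstrap, and Font Awesome are all wired together, you could place markup along the following lines in Views/Home/Index.cshtml. The alert, button, and icon classes used here are standard Bootstrap 4 and Font Awesome classes, but the content itself is only an illustrative sketch:

@{
    ViewBag.Title = "Home";
}
<div class="container">
    <div class="alert alert-success" role="alert">
        <i class="fa fa-check"></i> Bootstrap 4 and Font Awesome are up and running.
    </div>
    <button type="button" class="btn btn-primary">A Bootstrap button</button>
</div>

Run the project and you should see the alert and the button rendered with Bootstrap styling, together with the Font Awesome check icon.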

Designing Games with Swift

Packt
03 Nov 2016
16 min read
In this article by Stephen Haney, the author of the book Swift 3 Game Development - Second Edition, we will see that apple's newest version of its flagship programming language, Swift 3, is the perfect choice for game developers. As it matures, Swift is realizing its opportunity to be something special, a revolutionary tool for app creators. Swift is the gateway for developers to create the next big game in the Apple ecosystem. We have only started to explore the wonderful potential of mobile gaming, and Swift is the modernization we need for our toolset. Swift is fast, safe, current, and attractive to developers coming from other languages. Whether you are new to the Apple world, or a seasoned veteran of Objective-C, I think you will enjoy making games with Swift. (For more resources related to this topic, see here.) Apple's website states the following: "Swift is a successor to the C and Objective-C languages." My goal is to guide you step-by-step through the creation of a 2D game for iPhones and iPads. We will start with installing the necessary software, work through each layer of game development, and ultimately publish our new game to the App Store. We will also have some fun along the way! We aim to create an endless flyer game featuring a magnificent flying penguin named Pierre. What is an endless flyer? Picture hit games like iCopter, Flappy Bird, Whale Trail, Jetpack Joyride, and many more—the list is quite long. Endless flyer games are popular on the App Store, and the genre necessitates that we cover many reusable components of 2D game design. I will show you how to modify our mechanics to create many different game styles. My hope is that our demo project will serve as a template for your own creative works. Before you know it, you will be publishing your own game ideas using the techniques we explore together. In this article, we will learn the following topics: Why you will love Swift What you will learn in this article New in Swift 3 Setting up your development environment Creating your first Swift game Why you will love Swift Swift, as a modern programming language, benefits from the collective experience of the programming community; it combines the best parts of other languages and avoids poor design decisions. Here are a few of my favorite Swift features: Beautiful syntax: Swift's syntax is modern and approachable, regardless of your existing programming experience. Apple balanced syntax with structure to make Swift concise and readable. Interoperability: Swift can plug directly into your existing projects and run side-by-side with your Objective-C code. Strong typing: Swift is a strongly typed language. This means the compiler will catch more bugs at compile time, instead of when your users are playing your game! The compiler will expect your variables to be of a certain type (int, string, and so on) and will throw a compile-time error if you try to assign a value of a different type. While this may seem rigid if you are coming from a weakly typed language, the added structure results in safer, more reliable code. Smart type inference: To make things easier, type inference will automatically detect the types of your variables and constants based upon their initial value. You do not need to explicitly declare a type for your variables. Swift is smart enough to infer variable types in most expressions. Automatic memory management: As the Apple Swift developer guide states, "memory management just works in Swift". 
Swift uses a method called Automatic Reference Counting (ARC) to manage your game's memory usage. Besides a few edge cases, you can rely on Swift to safely clean up and turn off the lights. An even playing field: One of my favorite things about Swift is how quickly the language is gaining mainstream adoption. We are all learning and growing together, and there is a tremendous opportunity to break new ground. Open source: From version 2.2 onwards, Apple made Swift open source, curetting it through the website www.swift.org, and launched a package manager with Swift 3. This is a welcome change as it fosters greater community involvement and a larger ecosystem of third party tools and add-ons. Eventually, we should see Swift migrate to new platforms. Prerequisites I will try to make this text easy to understand for all skill levels: I will assume you are brand new to Swift as a language Requires no prior game development experience, though it will help I will assume you have a fundamental understanding of common programming concepts What you will learn in this article You will be capable of creating and publishing your own iOS games. You will know how to combine the techniques we learned to create your own style of game, and you will be well prepared to dive into more advanced topics with a solid foundation in 2D game design. Embracing SpriteKit SpriteKit is Apple's 2D game development framework and your main tool for iOS game design. SpriteKit will handle the mechanics of our graphics rendering, physics, and sound playback. As far as game development frameworks go, SpriteKit is a terrific choice. It is built and supported by Apple and thus integrates perfectly with Xcode and iOS. You will learn to be highly proficient with SpriteKit as we will be using it exclusively in our demo game. We will learn to use SpriteKit to power the mechanics of our game in the following ways: Animate our player, enemies, and power-ups Paint and move side-scrolling environments Play sounds and music Apply physics-like gravity and impulses for movement Handle collisions between game objects Reacting to player input The control schemes in mobile games must be inventive. Mobile hardware forces us to simulate traditional controller inputs, such as directional pads and multiple buttons, on the screen. This takes up valuable visible area, and provides less precision and feedback than with physical devices. Many games operate with only a single input method: A single tap anywhere on the screen. We will learn how to make the best of mobile input, and explore new forms of control by sensing device motion and tilt. Structuring your game code It is important to write well-structured code that is easy to re-use and modify as your game design inevitably changes. You will often find mechanical improvements as you develop and test your games, and you will thank yourself for a clean working environment. Though there are many ways to approach this topic, we will explore some best practices to build an organized system with classes, protocols, inheritance, and composition. Building UI/menus/levels We will learn to switch between scenes in our game with a menu screen. We will cover the basics of user experience design and menu layout as we build our demo game. Integrating with Game Center Game Center is Apple's built-in social gaming network. Your game can tie into Game Center to store and share high scores and achievements. We will learn how to register for Game Center, tie it into our code, and create a fun achievement system. 
Maximizing fun If you are like me, you will have dozens of ideas for games floating around your head. Ideas come easily, but designing fun game play is difficult! It is common to find that your ideas need game play enhancements once you see your design in action. We will look at how to avoid dead-ends and see your project through to the finish line. Plus, I will share my tips and tricks to ensure your game will bring joy to your players. Crossing the finish line Creating a game is an experience you will treasure. Sharing your hard work will only sweeten the satisfaction. Once our game is polished and ready for public consumption, we will navigate the App Store submission process together. You will finish feeling confident in your ability to create games with Swift and bring them to market in the App Store. Monetizing your work Game development is a fun and rewarding process, even without compensation, but the potential exists to start a career, or side-job, selling games on the App Store. Successfully promoting and marketing your game is an important task. I will outline your options and start you down the path to monetization. New in Swift 3 The largest feature in Swift 3 is syntax compatibility and stability. Apple is trying to refine its young, shifting language into its final foundational shape. Each successive update of Swift has introduced breaking syntax changes that made older code incompatible with the newest version of Swift; this is very inconvenient for developers. Going forward, Swift 3 aims to reach maturity and maintain source compatibility with future releases of the language. Swift 3 also features the following:  A package manager that will help grow the ecosystem A more consistent, readable API that often results in less code for the same result Improved tooling and bug fixes in the IDE, Xcode Many small syntax improvements in consistency and clarity Swift has already made tremendous steps forward as a powerful, young language. Now Apple is working on polishing Swift into a mature, production-ready tool. The overall developer experience improves with Swift 3. Setting up your development environment Learning a new development environment can be a roadblock. Luckily, Apple provides some excellent tools for iOS developers. We will start our journey by installing Xcode. Introducing and installing Xcode Xcode is Apple's Integrated Development Environment (IDE). You will need Xcode to create your game projects, write and debug your code, and build your project for the App Store. Xcode also comes bundled with an iOS simulator to test your game on virtualized iPhones and iPads on your computer. Apple praises Xcode as "an incredibly productive environment for building amazing apps for Mac, iPhone, and iPad".   To install Xcode, search for Xcode in the AppStore, or visit http://developer.apple.com and select Developer and then Xcode. Swift is continually evolving, and each new Xcode release brings changes to Swift. If you run into errors because Swift has changed, you can always use Xcode's built-in syntax update tool. Simply use Xcode's Edit | Convert to Latest Syntax option to update your code. Xcode performs common IDE features to help you write better, faster code. If you have used IDEs in the past, then you are probably familiar with auto completion, live error highlighting, running and debugging a project, and using a project manager pane to create and organize your files. However, any new program can seem overwhelming at first. 
We will walk through some common interface functions over the next few pages. I have also found tutorial videos on YouTube to be particularly helpful if you are stuck. Most common search queries result in helpful videos. Creating our first Swift game Do you have Xcode installed? Let us see some game code in action in the simulator! We will start by creating a new project in Xcode. For our demo game, we will create a side-scrolling endless flyer featuring an astonishing flying penguin named Pierre. I am going to name this project Pierre Penguin Escapes the Antarctic, but feel free to name your project whatever you like. Follow these steps to create a new project in Xcode: Launch Xcode and navigate to File | New | Project. You will see a screen asking you to select a template for your new project. Select iOS | Application in the left pane, and Game in the right pane. It should look like this: Once you select Game, click Next. The following screen asks us to enter some basic information about our project. Don’t worry; we are almost at the fun bit. Fill in the Product Name field with the name of your game. Let us fill in the Team field. Do you have an active Apple developer account? If not, you can skip over the Team field for now. If you do, your Team is your developer account. Click Add Team and Xcode will open the accounts screen where you can log in. Enter your developer credentials as shown in the following screenshot: Once you're authenticated, you can close the accounts screen. Your developer account should appear in the Team dropdown. You will want to pick a meaningful Organization Name and Organization Identifier when you create your own games for publication. Your Organization Name is the name of your game development studio. For me, that's Joyful Games. By convention, your Organization Identifier should follow a reverse domain name style. I will use io.JoyfulGames since my website is JoyfulGames.io. After you fill out the name fields, be sure to select Swift for the Language, SpriteKit for Game Technology, and Universal for Devices. For now, uncheck Integrate GameplayKit, uncheck Include Unit Tests, uncheck Include UI Tests. We will not use these features in our demo game. Here are my final project settings: Click Next and you will see the final dialog box. Save your new project. Pick a location on your computer and click Next. And we are in! Xcode has pre-populated our project with a basic SpriteKit template. Navigating our project Now that we have created our project, you will see the project navigator on the left-hand side of Xcode. You will use the project navigator to add, remove, and rename files and generally organize your project. You might notice that Xcode has created quite a few files in our new project. We will take it slow; don’t feel that you have to know what each file does yet, but feel free to explore them if you are curious: Exploring the SpriteKit Demo Use the project navigator to open up the file named GameScene.swift. Xcode created GameScene.swift to store the default scene of our new game. What is a scene? SpriteKit uses the concept of scenes to encapsulate each unique area of a game. Think of the scenes in a movie; we will create a scene for the main menu, a scene for the Game Over screen, a scene for each level in our game, and so on. If you are on the main menu of a game and you tap Play, you move from the menu scene to the Level 1 scene. SpriteKit prepends its class names with the letters "SK"; consequently, the scene class is SKScene. 
You will see there is already some code in this scene. The SpriteKit project template comes with a very small demo. Let's take a quick look at this demo code and use it to test the iOS simulator. Please do not be concerned with understanding the demo code at this point. Your focus should be on learning the development environment. Look for the run toolbar at the top of the Xcode window. It should look something like the following: Select the iOS device of your choice to simulate using the dropdown on the far right. Which iOS device should you simulate? You are free to use the device of your choice. I will be using an iPhone 6 for the screenshots, so choose iPhone 6 if you want your results to match my images perfectly. Unfortunately, expect your game to play poorly in the simulator. SpriteKit suffers from poor FPS in the iOS simulator. Once our game becomes relatively complex, we will see our FPS drop, even on high-end computers. The simulator will get you through, but it is best if you can plug in a physical device to test. It is time for our first glimpse of SpriteKit in action! Press the gray play arrow in the toolbar (handy keyboard shortcut: command + r). Xcode will build the project and launch the simulator. The simulator starts in a new window, so make sure you bring it to the front. You should see a gray background with white text: Hello, World. Click around on the gray background. You will see colorful, spinning boxes spawning wherever you click: If you have made it this far, congratulations! You have successfully installed and configured everything you need to make your first Swift game. Once you have finished playing with the spinning squares, you can close the simulator down and return to Xcode. Note: You can use the keyboard command command + q to exit the simulator or press the stop button inside Xcode. If you use the stop button, the simulator will remain open and launch your next build faster. Examining the demo code Let's quickly explore the demo code. Do not worry about understanding everything just yet; we will cover each element in depth later. At this point, I am hoping you will acclimatize to the development environment and pick up a few things along the way. If you are stuck, keep going! Make sure you have GameScene.swift open in Xcode. The demo GameScene class implements some functions you will use in your games. Let’s examine these functions. Feel free to read the code inside each function, but I do not expect you to understand the specific code just yet. The game invokes the didMove function whenever it switches to the GameScene. You can think of it a bit like an initialize, or main, function for the scene. The SpriteKit demo uses it to draw the Hello, World text to the screen and set up the spinning square shape that shows up when we tap. There are seven functions involving touch which handle the user's touch input to the iOS device screen. The SpriteKit demo uses these functions to spawn the spinning square wherever we touch the screen. Do not worry about understanding these functions at this time. The update function runs once for every frame drawn to the screen. The SpriteKit demo does not use this function, but we may have reason to implement it later. Cleaning up I hope that you have absorbed some Swift syntax and gained an overview of Swift and SpriteKit. It is time to make room for our own game; let us clear all of that demo code out! We want to keep a little bit of the boilerplate, but we can delete most of what is inside the functions. 
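If you are curious what those touch functions look like in practice, here is a simplified sketch in the spirit of the template, not the exact demo code (which also handles labels and the other touch phases); it spawns a small rotating box wherever the screen is tapped:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    for touch in touches {
        let location = touch.location(in: self)
        // Create a small square shape node at the touch position
        let box = SKShapeNode(rectOf: CGSize(width: 40, height: 40), cornerRadius: 6)
        box.position = location
        // Spin it forever so the animation is easy to see
        let spin = SKAction.rotate(byAngle: CGFloat.pi, duration: 1)
        box.run(SKAction.repeatForever(spin))
        addChild(box)
    }
}

Once you have seen the idea, this kind of code can be safely deleted along with the rest of the demo.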
To be clear, I do not expect you to understand any of this code yet. This is simply a necessary step towards the start of our journey. Please remove lines from your GameScene.swift file until it looks like the following code:
import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {
    }
}
Summary
You have already accomplished a lot. You have had your first experience with Swift, installed and configured your development environment, and successfully launched code in the iOS simulator. Great work!
Resources for Article:
Further resources on this subject:
Swift for Open Source Developers [Article]
Swift Power and Performance [Article]
Introducing the Swift Programming Language [Article]

Remote Access Management Console

Packt
02 Nov 2016
12 min read
In this article by Jordan Krause, the author of Mastering Windows Server 2016, we will explore the Remote Access Management Console of DirectAccess and we will also look at the differences between DirectAccess and VPN. (For more resources related to this topic, see here.) You are well on your way to giving users remote access capabilities on this new server. As with many networking devices, once you have established all of your configurations on a remote access server, it is pretty common for admins to walk away and let it run. There is no need for a lot of ongoing maintenance or changes to that configuration once you have it running well. However, Remote Access Management Console in Windows Server 2016 is useful not only for configuration of the remote access parts and pieces, but for monitoring and reporting as well. Let’s take a look inside this console so that you are familiar with the different screens you will be interacting with: Configuration The configuration screen is pretty self-explanatory, this is where you would visit in order to create your initial remote access configuration, and where you go to update any settings in the future. As you can see in the screenshot, you are able to configure DirectAccess, VPN, and the Web Application Proxy right from this Remote Access Management Console. There is not a lot to configure as far as the VPN goes, you really only have one screen of options where you define what kind of IP addresses are handed down to the VPN clients connecting in, and how to handle VPN authentication. It is not immediately obvious where this screen is, so I wanted to point it out. Inside the DirectAccess and VPN configuration section, if you click on the Edit… button listed under Step 2, this will launch the Step 2 mini-wizard. The last screen of this mini-wizard is called VPN Configuration. This is the screen where you can configure these IP address and authentication settings for your VPN connections: Dashboard The Remote Access Dashboard gives you a 30,000 foot view of the Remote Access server status. You are able to view a quick status of the components running on the server, whether or not the latest configuration changes have been rolled around, and some summary numbers near the bottom about how many DirectAccess and VPN connections are happening. Operations Status If you want to drill down further into what is happening on the server side of the connections, that is what the Operations Status page is all about. Here you can see a little more detail on each of the components that are running under the hood to make your DA and VPN connections happen. If any of them have an issue, you can click on the specific component to get a little more information. For example, as a test, I have turned off the NLS web server in my lab network, and I can now see in the Operations Status page that NLS is flagged with an error. Remote Client Status Next up is the Remote Client Status screen. As indicated, this is the screen where we can monitor the client computers who are connected. It will show us both DirectAccess and VPN connections here. We will be able to see computer names, usernames, and even the resources that they are utilizing during their connections. The information on this screen is able to be filtered by simply putting any criteria into the Search bar on the top of the window. It is important to note that the Remote Client Status screen only shows live, active connections. There is no historical information stored here. 
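If you prefer to check the same information from a prompt instead of the console, the RemoteAccess PowerShell module installed with the role exposes equivalent data. The two cmdlets below are part of that module; treat the exact output fields as something that can vary between builds:

# Show the overall DirectAccess/VPN configuration and component status
Get-RemoteAccess

# List the currently connected DirectAccess and VPN clients
Get-RemoteAccessConnectionStatistics

Both cmdlets run against the local server by default and can typically be pointed at another Remote Access server with the -ComputerName parameter.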
Reporting

You guessed it: this is the window you need to visit if you want to see historical remote access information. This screen is almost exactly the same as the Remote Client Status screen, except that you have the ability to generate reports for historical data pulled from date ranges of your choosing. Once the data is displayed, you have the same search and filtering capabilities that you had on the Remote Client Status screen. Reporting is disabled by default, but you simply need to navigate to the Reporting page and click on Configure Accounting. Once that is enabled, you will be presented with options about storing the historical information. You can choose to store the data in the local Windows Internal Database (WID) or on a remote RADIUS server. You also have options here for how long to store logging data, and a mechanism that can be used to clear out old data.

Tasks

The last window pane of the Remote Access Management Console that I want to point out is the Tasks bar on the right side of your screen. The actions and options that are displayed in this taskbar change depending on what part of the console you are navigating through. Make sure to keep an eye on this side of your screen for setting up some of the more advanced functions. Some examples of available tasks are creating usage reports, refreshing the screen, and configuring Network Load Balancing or Multi-Site configurations if you are running multiple remote access servers.

DirectAccess versus VPN

VPN has been around for a very long time, making it a pretty familiar idea to anyone working in IT, and we have discussed quite a bit about DirectAccess today in order to bring you up to speed on this evolution, so to speak, of corporate remote access. Now that you know there are two great solutions built into Windows Server 2016 for enabling your mobile workforce, which one is better? While DirectAccess is certainly the newer of the technologies, we cannot say that it is better in all circumstances. Each has its pros and cons, and the ways that you use each, or both, will depend upon many variables. Your users, your client computers, and your organization's individual needs will need to factor into your decision-making process. Let's discuss some of the differences between DirectAccess and VPN so that you can better determine which is right for you.

Domain-joined versus non-domain-joined

One of the biggest requirements for a DirectAccess client computer is that it must be domain-joined. While this requirement by itself doesn't seem so major, what it implies can be pretty vast. Trusting a computer enough to join it to your domain more than likely means that the laptop is owned by the company. It also probably means that the laptop was first in IT's hands in order to build and prep it. Companies that are in the habit of allowing employees to purchase their own computers for work purposes may not find DirectAccess to fit well with that model. DA is also not ideal for situations where employees use their existing home computers to connect into work remotely. In these kinds of situations, such as home and personally owned computers, VPN may be better suited to the task. You can connect to a VPN from a non-domain-joined machine, and you can even establish VPN connections from many non-Microsoft devices. iOS, Android, and Windows Phone are all platforms that have a VPN client built into them that can be used to tap into a VPN listener on a Windows Server 2016 remote access server.
If your only remote access solution were DirectAccess, you would not be able to provide non-domain-joined devices with a connectivity platform.

Auto versus manual launch

Here, DirectAccess takes the cake. It is completely seamless. DirectAccess components are baked right into the Windows operating system; no software VPN is going to be able to touch that level of integration. With VPN, users have to log in to their computers to unlock them, then launch their VPN, then log in again to that VPN software, all before they can start working on anything. With DirectAccess, all they need to do is log in to the computer to unlock the screen. DirectAccess activates itself in the background, so that as soon as the desktop loads for the user, they simply open the applications that they need to access, just like when they are inside the office.

Software versus built-in

I'm a fan of IKEA furniture. They do a great job of supplying quality products at a low cost, all while packaging them up in incredibly small boxes. After you pay for the product, unbox the product, put the product together, and then test the product to make sure it works, it's great. If you can't see where this is going, I'll give you a hint: it's an analogy for VPN. As in, you typically pay a vendor for their VPN product, unbox the product, implement the product at more expense, and then test the product. That VPN software then has the potential to break and need reinstallation or reconfiguration, and will certainly come with software updates that need to be applied down the road. Maintenance, maintenance, maintenance.

Maybe I have been watching too many home improvement shows lately, but I am also a fan of houses with built-ins. Built-ins are essentially furniture that is permanent to the house, built right into the walls, corners, or wherever it happens to be. It adds value, and it integrates into the overall house much better than furniture that was pieced together separately and then stuck against the wall in a corner. DirectAccess is like a built-in: it is inside the operating system. There is no software to install, no software to update, no software to reinstall when it breaks. Everything that DA needs is already in Windows today; you just aren't using it. Oh, and it's free, or at least built into the cost of your Windows license; there are no user CALs and no ongoing licensing costs related to implementing Microsoft DirectAccess.

Password and login issues with VPN

If you have ever worked on a helpdesk for a company that uses VPN, you know what I'm talking about. There is a series of common troubleshooting calls that happen in the VPN world related to passwords. Sometimes the user forgets their password. Perhaps their password has expired and needs to be changed; ugh, VPN doesn't handle this scenario very well either. Or maybe the employee changed their expired password on their desktop before they left work for the day, but is now trying to log in remotely from their laptop and it isn't working. What is the solution to password problems with VPN? Reset the user's password and then make the user come into the office in order to make it work on their laptop. Yup, these kinds of phone calls still happen every day. This is unfortunate, but a real potential problem with VPN. What's the good news? DirectAccess doesn't have these kinds of problems! Since DA is part of the operating system, it has the capability to be connected any time that Windows is online. This includes the login screen!
Even if I am sitting at the login or lock screen, and the system is waiting for me to input my username and password, as long as I have Internet access I also have a DirectAccess tunnel. This means that I can actively do password management tasks. If my password expires and I need to update it, it works. If I forget my password and I can't get into my laptop, I can call the helpdesk and simply ask them to reset my password. I can then immediately log in to my DirectAccess laptop with the new password, right from my house.

Another cool function that this seamlessness enables is the ability to log in with new user accounts. Have you ever logged into your laptop as a different user account in order to test something? Yup, that works over DirectAccess as well. For example, I am sitting at home and I need to help one of the sales guys troubleshoot some sort of file permission problem. I suspect it's got something to do with his user account, so I want to log in to my laptop as him in order to test it. The problem is that his user account has never logged into my laptop before. With VPN, not a chance; this would never work. With DirectAccess, piece of cake! I simply log off, type in his username and password, and bingo, I'm logged in, while still sitting at home in my pajamas.

It is important to note that you can run both DirectAccess and VPN on the same Windows Server 2016 remote access server. If both technologies have capabilities that you could benefit from, use them both!

Summary

Today's technology demands that most companies enable their employees to work from wherever they are. More and more organizations are hiring a work-from-home workforce, and they need a secure, stable, and efficient way to provide access to corporate data and applications for these mobile workers. The Remote Access role in Windows Server 2016 is designed to do exactly that. With three different ways of providing remote access to corporate resources, IT departments have never had so much remote access technology available at their fingertips, built right into the Windows operating system that they already own. If you are still supporting a third-party or legacy VPN system, you should definitely explore the new capabilities provided here and discover how much they could save your business. DirectAccess is particularly impressive and compelling; it's a brand new way of looking at remote access. Automatic connectivity means always-on machines that are constantly being patched and updated because they are always connected to your management servers. You can improve user productivity and network security at the same time. These two goals usually contradict each other in the IT world, but with DirectAccess they hold hands and sing songs together.

Resources for Article:

Further resources on this subject:

Remote Authentication [article]
Configuring a MySQL linked server on SQL Server 2008 [article]
Configuring sipXecs Server Features [article]

Setting Up the Environment for ASP.NET MVC 6

Packt
02 Nov 2016
9 min read
In this article by Mugilan TS Raghupathi, the author of the book Learning ASP.NET Core MVC Programming, we explain the setup needed to get started with programming in ASP.NET MVC 6. In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment or configuration problems. With respect to .NET, Visual Studio is the de facto standard IDE (Integrated Development Environment) for building web applications in .NET.

In this article, you'll be learning about the following topics:

Purpose of an IDE
Different offerings of Visual Studio
Installation of Visual Studio Community 2015
Creating your first ASP.NET MVC 6 project and its project structure

(For more resources related to this topic, see here.)

Purpose of an IDE

First of all, let us see why we need an IDE when you could type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things to be productive:

Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the different constructs of your programming language, such as the if condition and the for loop. In Visual Studio, all of your keywords are highlighted in blue.

IntelliSense: IntelliSense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. One such example is that when you type a dot after an object, IntelliSense lists all the methods available on that object. This helps developers write code faster and more easily.

Build/Publish: It would be helpful if you could build or publish the application using a single click or a single command. Visual Studio provides several options out of the box to build a separate project or to build the complete solution with a single click. This makes the build and deployment of your application easier.

Templates: Depending on the type of the application, you might have to create different folders and files along with boilerplate code. So, it'll be very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates different kinds of templates with the code for ASP.NET Web Forms, MVC, and Web API to get you up and running.

Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, it should be able to highlight the issue, give you information about it, and help you fix it.

Visual Studio offerings

There are different versions of Visual Studio 2015 available to satisfy the various needs of developers and organizations. Primarily, there are four versions of Visual Studio 2015:

Visual Studio Community
Visual Studio Professional
Visual Studio Enterprise
Visual Studio Test Professional

System requirements

Visual Studio can be installed on computers running Windows 7 Service Pack 1 or above. You can get to know the complete list of requirements from the following URL:

https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx

Visual Studio Community 2015

This is a fully featured IDE available for building desktop applications, web applications, and cloud services. It is available free of cost for individual users.
You can download Visual Studio Community from the following URL: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx

Throughout this book, we will be using the Visual Studio Community version for development, as it is available free of cost to individual developers.

Visual Studio Professional

As the name implies, Visual Studio Professional is targeted at professional developers and contains features such as CodeLens for improving your team's productivity. It also has features for greater collaboration within the team.

Visual Studio Enterprise

Visual Studio Enterprise is a full-blown version of Visual Studio with a complete set of features for collaboration, including Team Foundation Server, modeling, and testing.

Visual Studio Test Professional

Visual Studio Test Professional is primarily aimed at the testing team or the people involved in testing, which might include developers. In any software development methodology, either the waterfall model or agile, developers need to execute the test cases for the code they are developing.

Installation of Visual Studio Community

Follow the given steps to install Visual Studio Community 2015:

1. Visit the following link to download Visual Studio Community 2015: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
2. Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later:
3. Run the downloaded executable file:
4. Click on Run and the following screen will appear:
5. There are two types of installation: default and custom. Default installation installs the most commonly used features, and this will cover most of the use cases of the developer. Custom installation helps you to customize the components that you want to get installed, such as the following:
6. Click on the Install button after selecting the installation type. Depending on your memory and processor speed, it will take 1 to 2 hours to install.
7. Once all the components are installed, you will see the following Setup completed screen:

Installation of ASP.NET 5

When we install the Visual Studio Community 2015 edition, ASP.NET 5 is not installed by default. As ASP.NET MVC 6 applications run on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install ASP.NET 5:

Get ASP.NET 5 from https://get.asp.net/
Install it from the New Project template in Visual Studio

The second option is a bit easier, as you don't need to search for and install anything separately. The following are the detailed steps:

1. Create a new project by selecting File | New Project or using the shortcut Ctrl + Shift + N:
2. Select ASP.NET Web Application, enter the project name, and click on OK:
3. The following window will appear to select the template. Select the Get ASP.NET 5 RC option as shown in the following screenshot:
4. When you click on OK in the preceding screen, the following window will appear:
5. When you click on the Run or Save button in the preceding dialog, you will get the following screen asking for ASP.NET 5 Setup. Select the checkbox I agree to the license terms and conditions and click on the Install button:
6. Installation of ASP.NET 5 might take a couple of hours, and once it is completed you'll get the following screen:

During the process of installation of ASP.NET 5 RC1 Update 1, it might ask you to close Visual Studio. If asked, please do so.
Project structure in an ASP.NET 5 application

Once ASP.NET 5 RC1 is successfully installed, open Visual Studio, create a new project, and select ASP.NET 5 Web Application, as shown in the following screenshot:

A new project will be created, and its structure will look like the following:

File-based project

Whenever you add a file or folder in your file system (inside our ASP.NET 5 project folder), the changes will be automatically reflected in your project structure.

Support for full .NET and .NET Core

You can see a couple of references in the preceding project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of the full-blown .NET Framework, whereas DNX Core 5.0 supports only the core functionality, which would be used if you are deploying the application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET MVC 6 application on a Linux machine will be explained in the book.

The Project.json package

Usually, in an ASP.NET web application, we would have assemblies as references, with the list of references kept in a C# project file. But in an ASP.NET 5 application, we have a JSON file named Project.json, which contains all the necessary configuration along with all the .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager provided by Microsoft, which makes package installation and uninstallation easier; prior to NuGet, all dependencies had to be installed manually. The dependencies section identifies the list of dependent packages available for the application. The frameworks section informs you about the frameworks supported by the application. The scripts section identifies the scripts to be executed during the build process of the application. Include and exclude properties can be used in any section to include or exclude any item.

Controllers

This folder contains all of your controller files. Controllers are responsible for handling requests, communicating with the models, and generating the views for them.

Models

All of your classes representing the domain data will be present in this folder.

Views

Views are files which contain your frontend components and are presented to the end users of the application. This folder contains all of your Razor view files.

Migrations

Any database-related migrations will be available in this folder. Database migrations are C# files which contain the history of any database changes done through Entity Framework (an ORM framework). This will be explained in detail in the book.

The wwwroot folder

This folder acts as a root folder, and it is the ideal container in which to place all of your static files, such as CSS and JavaScript files. All the files placed in the wwwroot folder can be accessed directly from the path without going through a controller.

Other files

The appsettings.json file is the config file where you can configure application-level settings. Bower, npm (Node Package Manager), and gulpfile.js are client-side technologies which are supported by ASP.NET 5 applications.

Summary

In this article, you have learned about the offerings in Visual Studio. Step-by-step instructions were provided for the installation of the Visual Studio Community version, which is freely available for individual developers. We have also discussed the new project structure of an ASP.NET 5 application and the changes compared to the previous versions.
In this book, we are going to discuss controllers and their roles and functionalities. We'll also build a controller and associated action methods and see how they work.

Resources for Article:

Further resources on this subject:

Designing your very own ASP.NET MVC Application [article]
Debugging Your .NET Application [article]
Using ASP.NET Controls in SharePoint [article]

Magento Theme Distribution

Packt
02 Nov 2016
8 min read
"Invention is not enough. Tesla invented the electric power we use, but he struggled to get it out to people. You have to combine both things: invention and innovation focus, plus the company that can commercialize things and get them to people" – Larry Page In this article written by Fernando J Miguel, author of the book Magento 2 Theme Design Second Edition, you will learn the process of sharing, code hosting, validating, and publishing your subject as well as future components (extensions/modules) that you develop for Magento 2. (For more resources related to this topic, see here.) The following topics will be covered in this article: The packaging process Packaging your theme Hosting your theme The Magento marketplace The packaging process For every theme you develop for distribution in marketplaces and repositories through the sale and delivery of projects to clients and contractors of the service, you must follow some mandatory requirements for the theme to be packaged properly and consequently distributed to different Magento instances. Magento uses the composer.json file to define dependencies and information relevant to the developed component. Remember how the composer.json file is declared in the Bookstore theme: { "name": "packt/bookstore", "description": "BookStore theme", "require": { "php": "~5.5.0|~5.6.0|~7.0.0", "magento/theme-frontend-luma": "~100.0", "magento/framework": "~100.0" }, "type": "magento2-theme", "version": "1.0.0", "license": [ "OSL-3.0", "AFL-3.0" ], "autoload": { "files": [ "registration.php" ], "psr-4": { "Packt\BookStore\": "" } } } The main fields of the declaration components in the composer.json file are as follows: Name: A fully qualified component name Type: This declares the component type Autoload: This specifies the information necessary to be loaded in the component The three main types of Magento 2 component declarations can be described as follows: Module: Use the magento2-module type to declare modules that add to and/or modify functionalities in the Magento 2 system Theme: Use the magento2-theme type to declare themes in Magento 2 storefronts Language package: Use the magento2-language type to declare translations in the Magento 2 system Besides the composer.json file that must be declared in the root directory of your theme, you should follow these steps to meet the minimum requirements for packaging your new theme: Register the theme by declaring the registration.php file. Package the theme, following the standards set by Magento. Validate the theme before distribution. Publish the theme. From the minimum requirements mentioned, you already are familiar with the composer.json and registration.php files. Now we will look at the packaging process, validation, and publication in sequence. Packaging your theme By default, all themes should be compressed in ZIP format and contain only the root directory of the component developed, excluding any file and directory that is not part of the standard structure. 
The following command shows the compression standard used for Magento 2 components:

zip -r vendor-name_package-name-1.0.0.zip package-path/* -x 'package-path/.git/*'

Here, the name of the ZIP file has the following components:

vendor-name: This identifies the vendor that developed the theme
package-name: This is the package (component) name
1.0.0: This is the component version

After the archive name, the command defines which directory will be compressed, followed by the -x parameter, which excludes the .git directory from the theme compression.

How about applying ZIP compression to the Bookstore theme? To do this, follow these steps:

1. Using a terminal or Command Prompt, access the theme's root directory: <magento_root>/app/design/frontend/Packt/bookstore.
2. Run the following command: zip -r packt-bookstore-bookstore.1.0.0.zip * -x '.git/*'

Upon successfully executing this command, you will have packed your theme, and your directory will be as follows:

After this, you will validate your new Magento theme using a verification tool.

Magento component validation

The Magento developer community created the validate_m2_package script to perform validation of components developed for Magento 2. This script is available in the marketplace-tools directory of the Magento 2 development community's GitHub repository:

According to the description, the idea behind Marketplace Tools is to house standalone tools that developers can use to validate and verify their extensions before submitting them to the Marketplace.

Here's how to use the validation tool:

1. Download the validate_m2_package.php script, available at https://github.com/magento/marketplace-tools.
2. Move the script to the root directory of the Bookstore theme: <magento_root>/app/design/frontend/Packt/bookstore.
3. Open a terminal or Command Prompt.
4. Run the following command: php validate_m2_package.php packt-bookstore-bookstore.1.0.0.zip

This command will validate the package you previously created with the zip command. If all goes well, you will not get any response from the command line, which means that your package meets the minimum requirements for publication.

If you wish, you can use the -d parameter, which enables you to debug your component by printing messages during verification. To use this option, run the following command:

php validate_m2_package.php -d packt-bookstore-bookstore.1.0.0.zip

If everything goes as expected, the response will be as follows:

Hosting your theme

You can share your Magento theme and host your code on different services to achieve greater interaction with your team or even with the Magento development community. Remember that the standard version control system used by the Magento development community is Git. There are several widely used options for distributing your code and sharing your work. Let's look at some of them.

Hosting your project on GitHub and Packagist

The most common method of hosting your code/theme is to use GitHub. Once you have created a repository, you can get help from the Magento developer community if you are working on an open source project or even one for learning purposes. Another major point of using GitHub is your portfolio: publishing the Magento 2 projects you have developed will certainly make a difference when you are looking for employment opportunities and trying to get selected for new projects.
GitHub has a specific help area for users that provides a collection of documentation that developers may find useful. GitHub Help can be accessed directly at https://help.github.com/:

To create a GitHub repository, you can consult the official documentation, available at https://help.github.com/articles/create-a-repo/.

Once you have your project published on GitHub, you can use the Packagist (https://packagist.org/) service by creating a new account and entering the link to your GitHub package on Packagist:

Packagist automatically collects information from the composer.json file available in the GitHub repository, creating a reference you can use in other projects.

Hosting your project in a private repository

In some cases, you will be developing your project for private clients and companies. If you want to keep your version control private, you can use the following procedure:

1. Create your own Composer package repository using the Toran service (https://toranproxy.com/).
2. Create your package as previously described.
3. Send your package to your private repository.
4. Add the following to your composer.json file:

{
  "repositories": [
    {
      "type": "composer",
      "url": "[repository url here]"
    }
  ]
}

Magento Marketplace

According to Magento, Marketplace (https://marketplace.magento.com/) is the largest global e-commerce resource for applications and services that extend Magento solutions with powerful new features and functionality. Once you have completed developing the first version of your theme, you can upload your project to be a part of the official Magento marketplace. In addition to allowing theme uploads, Magento Marketplace also allows you to upload shared packages and extensions (modules). To learn more about shared packages, visit http://docs.magento.com/marketplace/user_guide/extensions/shared-package-submit.html.

Submitting your theme

After the compression and validation processes, you can send your project to be distributed on Magento Marketplace. For this, you should confirm an account on the developer portal (https://developer.magento.com/customer/account/) with a valid e-mail and personal information about the scope of your activities. After this confirmation, you will have access to the extensions area at https://developer.magento.com/extension/extension/list/, where you will find options to submit themes and extensions:

After clicking on the Add Theme button, you will need to answer a questionnaire covering the following points:

Which Magento platform your theme will work on
The name of your theme
Whether your theme will have additional services
Additional functionalities your theme has
What makes your theme unique

After the questionnaire, you will need to fill in the details of your extension, as follows:

Extension title
Public version
Package file (upload)

The submitted theme will be evaluated by a technical review, and you will be able to see the evaluation progress through your e-mail and the control panel of the Magento developer area. You can find more information about Magento Marketplace at the following link: http://docs.magento.com/marketplace/user_guide/getting-started.html

Summary

In this article, you learned about the theme-packaging process, as well as validation against the minimum requirements for publication on Magento Marketplace. You are now ready to develop your solutions! There is still a lot of work left, but I encourage you to find your way as a Magento theme developer by putting a lot of study, research, and application into the area.
Participate in events, be collaborative, and count on the community's support. Good luck and success in your career path!

Resources for Article:

Further resources on this subject:

Installing Magento [article]
Social Media and Magento [article]
Magento 2 – the New E-commerce Era [article]

LabVIEW Basics

Packt
02 Nov 2016
8 min read
In this article by Behzad Ehsani, author of the book Data Acquisition using LabVIEW, after a brief introduction and a short note on installation, we will go over the most widely used palettes and objects in the icon toolbar of a standard installation of LabVIEW, with a brief explanation of what each object does. (For more resources related to this topic, see here.)

Introduction to LabVIEW

LabVIEW is a graphical development and testing environment unlike any other test and development tool available in the industry. LabVIEW sets itself apart from traditional programming environments by its completely graphical approach to programming. As an example, while the representation of a while loop in a text-based language such as C consists of several predefined, extremely compact, and sometimes extremely cryptic lines of text, a while loop in LabVIEW is actually a graphical loop. The environment is extremely intuitive and powerful, which makes for a short learning curve for the beginner. LabVIEW is based on what is called the G language, but there are still other languages, especially C, under the hood.

However, the ease of use and power of LabVIEW are somewhat deceiving to a novice user. Many people have attempted to start projects in LabVIEW only because, at first glance, the graphical nature of the interface and the drag-and-drop concept used in LabVIEW appear to do away with the required basics of programming concepts and a classical education in programming science and engineering. This is far from the reality of using LabVIEW as the predominant development environment. While it is true that in many higher-level development and testing situations, especially when using complicated test equipment, performing complex mathematical calculations, or even creating embedded software, LabVIEW's approach will be much more time-efficient and bug-free than the many lines of code the same work would require in a traditional text-based programming environment, one must still be aware of LabVIEW's strengths and possible weaknesses. LabVIEW does not completely replace the need for traditional text-based languages, and depending on the overall nature of a project, LabVIEW or a traditional text-based language such as C may be the most suitable programming or test environment.

Installing LabVIEW

Installation of LabVIEW is very simple and just as routine as any modern-day program installation; that is, insert DVD 1 and follow the on-screen guided installation steps. LabVIEW comes on one DVD for the Mac and Linux versions but on four or more DVDs for the Windows edition (depending on additional software, different licensing, and additional libraries and packages purchased). In this article, we will use the LabVIEW 2013 Professional Development version for Windows. Given the target audience of this article, we assume the user is quite capable of installing the program. Installation is also well documented by National Instruments, and the mandatory one-year support purchase with each copy of LabVIEW is a valuable source of live and e-mail help. Also, the NI website (www.ni.com) has many user support groups that are also a great source of support, example code, discussion groups, and local group events and meetings of fellow LabVIEW developers.

One worthy note for those who are new to the installation of LabVIEW is that the installation DVDs include much more than what an average user would need and pay for. We do strongly suggest that you install additional software (beyond what has been purchased and licensed, or is of immediate need!)
These additional software packages are fully functional (in demo mode for 7 days), and the demo period may be extended to about a month with online registration. This is a very good opportunity to get hands-on experience with even more of the power and functionality that LabVIEW is capable of offering. The additional information gained by installing the other available software on the DVDs may help in the further development of a given project. Just imagine: if the current development of a robot only encompasses mechanical movements and sensors today, optical recognition is probably going to follow sooner than one may think. If data acquisition using expensive hardware and software is possible in one location, the need for web sharing and remote control of the setup is just around the corner. It is very helpful to at least be aware of what packages are currently available and to be able to install and test them prior to a full purchase and implementation. The following screenshot shows what may be installed if almost all the software on all the DVDs is selected:

When installing a fresh version of LabVIEW, if you do decide to follow the advice above, make sure to click on the + sign next to each package you decide to install and prevent any installation of LabWindows/CVI... and Measurement Studio... for Visual Studio. LabWindows, according to National Instruments, is an ANSI C integrated development environment.

Also note that, by default, NI device drivers are not selected to be installed. Device drivers are an essential part of any data acquisition setup, and appropriate drivers for communications and instrument control must be installed before LabVIEW can interact with external equipment. Also, note that device drivers (on Windows installations) come on a separate DVD, which means that you do not have to install device drivers at the same time as the main application and other modules; they can be installed at any time later on. Almost all well-established vendors package their products with LabVIEW drivers and example code. If a driver is not readily available, National Instruments has programmers who can write one, but this would come at a cost to the user.

VI Package Manager, now installed as part of the standard installation, is also a must these days. National Instruments distributes third-party software, drivers, and public-domain packages via VI Package Manager, and appropriate software and drivers for third-party hardware such as microcontrollers are installed through it as well. You can install many public-domain packages that add further useful LabVIEW toolkits to a LabVIEW installation and can be used just like those that are delivered professionally by National Instruments.

Finally, note that the more modules, packages, and software are selected to be installed, the longer it will take to complete the installation. This may sound like an obvious point, but surprisingly enough, installation of all the software on the three DVDs (for Windows) took over five hours on the standard laptop we used. Obviously, a more powerful PC (such as one with a solid-state drive) may not take such a long time:

LabVIEW Basics

Once the LabVIEW application is launched, by default two blank windows open simultaneously, a Front Panel and a Block Diagram window, and a VI is created:

VIs, or Virtual Instruments, are the heart and soul of LabVIEW. They are what separate LabVIEW from all other, text-based development environments. In LabVIEW, everything is an object which is represented graphically.
A VI may consist of only a few objects or of hundreds of objects embedded in many subVIs. These graphical representations, be it a simple while loop, a complex mathematical concept such as polynomial interpolation, or simply a Boolean constant, are all shown graphically. To use an object, right-click inside the Block Diagram or Front Panel window and a palette list appears. Follow the arrows, pick an object from the subsequent palettes, and place it on the appropriate window. The selected object can now be dragged and placed at a different location on the appropriate window and is ready to be wired. Depending on what kind of object is selected, a graphical representation of the object appears on both windows. Of course, there are many exceptions to this rule. For example, a while loop can only be selected in the Block Diagram window and, by itself, a while loop does not have a graphical representation on the Front Panel window. Needless to say, LabVIEW also has keyboard combinations that expedite selecting and placing any given toolkit object onto the appropriate window.

Each object has one (or several) wire connections going in as its input(s) and coming out as its output(s). A VI becomes functional when a minimum number of wires are appropriately connected to the inputs and outputs of one or more objects. Later, we will use an example to illustrate how a basic LabVIEW VI is created and executed.

Highlights

LabVIEW is a complete object-oriented development and test environment based on the G language. As such, it is a very powerful and complex environment. In this article, we went through an introduction to LabVIEW and the main functionality of each of its toolbar icons by way of an actual interactive example. Accompanied by appropriate hardware (both from NI and from many industry-standard test, measurement, and development hardware products), LabVIEW is capable of covering everything from developing embedded systems to fuzzy logic and almost everything in between!

Summary

In this article, we covered the basics of LabVIEW, from installation to an in-depth explanation of each and every element in the toolbar.

Resources for Article:

Further resources on this subject:

Python Data Analysis Utilities [article]
Data mining [article]
PostgreSQL in Action [article]

Getting Started with Python Packages

Packt
02 Nov 2016
37 min read
In this article by Luca Massaron and Alberto Boschetti, the authors of the book Python Data Science Essentials - Second Edition, we will cover the steps for installing Python, look at the different installation packages, and have a glance at the essential packages that constitute a complete data science toolbox. (For more resources related to this topic, see here.)

Whether you are an eager learner of data science or a well-grounded data science practitioner, you can take advantage of this essential introduction to Python for data science. You can use it to the fullest if you already have at least some previous experience in basic coding, in writing general-purpose computer programs in Python, or in some other data-analysis-specific language such as MATLAB or R.

Introducing data science and Python

Data science is a relatively new knowledge domain, though its core components have been studied and researched for many years by the computer science community. Its components include linear algebra, statistical modelling, visualization, computational linguistics, graph analysis, machine learning, business intelligence, and data storage and retrieval.

Data science is a new domain and you have to take into consideration that currently its frontiers are still somewhat blurred and dynamic. Since data science is made of various constituent sets of disciplines, please also keep in mind that there are different profiles of data scientists depending on their competencies and areas of expertise.

In such a situation, what can be the best tool of the trade that you can learn and effectively use in your career as a data scientist? We believe that the best tool is Python, and we intend to provide you with all the essential information that you will need for a quick start.

In addition, other tools such as R and MATLAB provide data scientists with specialized tools to solve specific problems in statistical analysis and matrix manipulation in data science. However, only Python really completes your data scientist skill set. This multipurpose language is suitable for both development and production alike; it can handle small- to large-scale data problems and it is easy to learn and grasp no matter what your background or experience is.

Created in 1991 as a general-purpose, interpreted, and object-oriented language, Python has slowly and steadily conquered the scientific community and grown into a mature ecosystem of specialized packages for data processing and analysis. It allows you to perform uncountable and fast experimentations, easy theory development, and prompt deployment of scientific applications.

At present, the core Python characteristics that render it an indispensable data science tool are as follows:

It offers a large, mature system of packages for data analysis and machine learning. It guarantees that you will get all that you may need in the course of a data analysis, and sometimes even more.

Python can easily integrate different tools and offers a truly unifying ground for different languages, data strategies, and learning algorithms that can be fitted together easily and which can concretely help data scientists forge powerful solutions. There are packages that allow you to call code in other languages (in Java, C, FORTRAN, R, or Julia), outsourcing some of the computations to them and improving your script performance.

It is very versatile. No matter what your programming background or style is (object-oriented, procedural, or even functional), you will enjoy programming with Python.
It is cross-platform; your solutions will work perfectly and smoothly on Windows, Linux, and Mac OS systems. You won't have to worry all that much about portability.

Although interpreted, it is undoubtedly fast compared to other mainstream data analysis languages such as R and MATLAB (though it is not comparable to C, Java, and the newly emerged Julia language). Moreover, there are also static compilers such as Cython or just-in-time compilers such as PyPy that can transform Python code into C for higher performance.

It can work with large in-memory data because of its minimal memory footprint and excellent memory management. The memory garbage collector will often save the day when you load, transform, dice, slice, save, or discard data using various iterations and reiterations of data wrangling.

It is very simple to learn and use. After you grasp the basics, there's no better way to learn more than by immediately starting with the coding.

Moreover, the number of data scientists using Python is continuously growing: new packages and improvements are released by the community every day, making the Python ecosystem an increasingly prolific and rich language for data science.

Installing Python

First, let's proceed to introduce all the settings you need in order to create a fully working data science environment to test the examples and experiment with the code that we are going to provide you with.

Python is an open source, object-oriented, and cross-platform programming language. Compared to some of its direct competitors (for instance, C++ or Java), Python is very concise. It allows you to build a working software prototype in a very short time. Yet it has become the most used language in the data scientist's toolbox not just because of that. It is also a general-purpose language, and it is very flexible due to a variety of available packages that solve a wide spectrum of problems and necessities.

Python 2 or Python 3?

There are two main branches of Python: 2.7.x and 3.x. At the time of writing this article, the Python Foundation (www.python.org) is offering downloads for Python versions 2.7.11 and 3.5.1. Although the third version is the newest, the older one is still the most used version in the scientific area, since a few packages (check the website py3readiness.org for a compatibility overview) won't run on it otherwise yet. In addition, there is no immediate backward compatibility between Python 3 and 2. In fact, if you try to run some code developed for Python 2 with a Python 3 interpreter, it may not work. Major changes have been made to the newest version, and that has affected past compatibility. Some data scientists, having built most of their work on Python 2 and its packages, are reluctant to switch to the new version.

We intend to address a larger audience of data scientists, data analysts, and developers, who may not have such a strong legacy with Python 2. Thus, we agreed that it would be better to work with Python 3 rather than the older version. We suggest using a version such as Python 3.4 or above. After all, Python 3 is the present and the future of Python. It is the only version that will be further developed and improved by the Python Foundation, and it will be the default version of the future on many operating systems. Anyway, if you are currently working with version 2 and you prefer to keep on working with it, you can still run the examples.
In fact, for the most part, our code will simply work on Python 2 after having the code itself preceded by these imports:

from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from builtins import *
from future import standard_library
standard_library.install_aliases()

The from __future__ import commands should always occur at the beginning of your scripts, or else you may experience Python reporting an error. As described on the Python-future website (python-future.org), these imports will help convert several Python 3-only constructs to a form compatible with both Python 3 and Python 2 (and in any case, most Python 3 code should just simply work on Python 2 even without the aforementioned imports).

In order to run the preceding imports successfully, if the future package is not already available on your system, you should install it (version >= 0.15.2) using the following command, to be executed from a shell:

$> pip install -U future

If you're interested in understanding the differences between Python 2 and Python 3 further, we recommend reading the wiki page offered by the Python Foundation itself: wiki.python.org/moin/Python2orPython3.

Step-by-step installation

Novice data scientists who have never used Python (and who likely don't have the language readily installed on their machines) need to first download the installer from the main website of the project, www.python.org/downloads/, and then install it on their local machine.

We will now cover the steps which will provide you with full control over what can be installed on your machine. This is very useful when you have to set up single machines to deal with different tasks in data science. Anyway, please be warned that a step-by-step installation really takes time and effort. Instead, installing a ready-made scientific distribution will lessen the burden of the installation procedures, and it may be well suited for first starting and learning, because it saves you time and sometimes even trouble, though it will put a large number of packages (most of which we won't use) on your computer all at once.

This being a multiplatform programming language, you'll find installers for machines that run either Windows or Unix-like operating systems. Please remember that some of the latest versions of most Linux distributions (such as CentOS, Fedora, Red Hat Enterprise, and Ubuntu) have Python 2 packaged in their repositories. In such a case, and in the case that you already have a Python version on your computer (since our examples run on Python 3), you first have to check exactly what version you are running. To do such a check, just follow these instructions:

1. Open a Python shell: type python in the terminal, or click on any Python icon you find on your system.
2. Then, after Python has started, test the installation by running the following code in the Python interactive shell or REPL:

>>> import sys
>>> print (sys.version_info)

3. If you can read that your Python version has the major=2 attribute, it means that you are running a Python 2 instance. Otherwise, if the attribute is valued 3, or if the print statement reports back to you something like v3.x.x (for instance v3.5.1), you are running the right version of Python and you are ready to move forward.

To clarify the operations we have just mentioned, when a command is given in the terminal command line, we prefix the command with $>. Otherwise, if it's for the Python REPL, it's preceded by >>>.
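As a quick complement to the REPL check above, you can also ask the interpreter for its version directly from the terminal, without opening an interactive session (if both branches are installed, the Python 3 interpreter is often exposed as python3):

$> python --version
$> python3 --version

This is only a convenience; the sys.version_info check remains the more detailed option.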
The installation of packages

Python won't come bundled with everything you need, unless you take a specific premade distribution. Therefore, to install the packages you need, you can use either pip or easy_install. Both of these tools run on the command line and make the process of installing, upgrading, and removing Python packages a breeze. To check which tools have been installed on your local machine, run the following command:

$> pip

To install pip, follow the instructions given at pip.pypa.io/en/latest/installing.html. Alternatively, you can also run this command:

$> easy_install

If both of these commands end up with an error, you need to install one of them. We recommend that you use pip because it is thought of as an improvement over easy_install. Moreover, easy_install is going to be dropped in the future, and pip has important advantages over it. It is preferable to install everything using pip because:

It is the preferred package manager for Python 3. Starting with Python 2.7.9 and Python 3.4, it is included by default with the Python binary installers.
It provides an uninstall functionality.
It rolls back and leaves your system clean if, for whatever reason, the package installation fails.

Using easy_install in spite of pip's advantages makes sense if you are working on Windows, because pip won't always install pre-compiled binary packages. Sometimes it will try to build the package's extensions directly from C source, thus requiring a properly configured compiler (and that's not an easy task on Windows). This depends on whether the package is distributed as an egg (pip cannot directly use its binaries, but needs to build from the source code) or as a wheel (in this case, pip can install binaries if available, as explained here: pythonwheels.com/). Instead, easy_install will always install available binaries from eggs and wheels. Therefore, if you are experiencing unexpected difficulties installing a package, easy_install can save your day (at some price anyway, as we just mentioned in the list).

The most recent versions of Python should already have pip installed by default. Therefore, you may already have it on your system. If not, the safest way is to download the get-pip.py script from bootstrap.pypa.io/get-pip.py and then run it using the following:

$> python get-pip.py

The script will also install the setup tool from pypi.python.org/pypi/setuptools, which also contains easy_install.

You're now ready to install the packages you need in order to run the examples provided in this article. To install the generic <package-name> package, you just need to run this command:

$> pip install <package-name>

Alternatively, you can run the following command:

$> easy_install <package-name>

Note that on some systems, pip might be named pip3 and easy_install might be named easy_install-3, to stress the fact that both operate on packages for Python 3. If you're unsure, check the version of Python that pip is operating on with:

$> pip -V

For easy_install, the command is slightly different:

$> easy_install --version

After this, the <package-name> package and all its dependencies will be downloaded and installed. If you're not certain whether a library has been installed or not, just try to import a module from it. If the Python interpreter raises an ImportError, it can be concluded that the package has not been installed.
This is what happens when the NumPy library has been installed:

>>> import numpy

This is what happens if it's not installed:

>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named numpy

In the latter case, you'll need to first install it through pip or easy_install. Take care that you don't confuse packages with modules. With pip, you install a package; in Python, you import a module. Sometimes, the package and the module have the same name, but in many cases, they don't match. For example, the sklearn module is included in the package named Scikit-learn. Finally, to search and browse the available Python packages, look at pypi.python.org.

Package upgrades

More often than not, you will find yourself in a situation where you have to upgrade a package because either the new version is required by a dependency or it has additional features that you would like to use. First, check the version of the library you have installed by glancing at the __version__ attribute, as shown in the following example with numpy:

>>> import numpy
>>> numpy.__version__ # 2 underscores before and after
'1.9.2'

Now, if you want to update it to a newer release, say the 1.11.0 version, you can run the following command from the command line:

$> pip install -U numpy==1.11.0

Alternatively, you can use the following command:

$> easy_install --upgrade numpy==1.11.0

Finally, if you're interested in upgrading it to the latest available version, simply run this command:

$> pip install -U numpy

You can alternatively run the following command:

$> easy_install --upgrade numpy

Scientific distributions

As you've read so far, creating a working environment is a time-consuming operation for a data scientist. You first need to install Python and then, one by one, you can install all the libraries that you will need (sometimes, the installation procedures may not go as smoothly as you'd hoped for earlier).

If you want to save time and effort and want to ensure that you have a fully working Python environment that is ready to use, you can just download, install, and use a scientific Python distribution. Apart from Python, these distributions also include a variety of preinstalled packages, and sometimes they even have additional tools and an IDE. A few of them are very well known among data scientists, and in the following sections, you will find some of the key features of each of them.

We suggest that you promptly download and install a scientific distribution, such as Anaconda (which is the most complete one).

Anaconda (continuum.io/downloads) is a Python distribution offered by Continuum Analytics that includes nearly 200 packages, which comprise NumPy, SciPy, pandas, Jupyter, Matplotlib, Scikit-learn, and NLTK. It's a cross-platform distribution (Windows, Linux, and Mac OS X) that can be installed on machines with other existing Python distributions and versions. Its base version is free, while add-ons that contain advanced features are charged separately. Anaconda introduces conda, a binary package manager, as a command-line tool to manage your package installations. As stated on the website, Anaconda's goal is to provide an enterprise-ready Python distribution for large-scale processing, predictive analytics, and scientific computing.

Leveraging conda to install packages

If you've decided to install an Anaconda distribution, you can take advantage of the conda binary installer we mentioned previously.
Anyway, conda is an open source package management system, and consequently it can be installed separately from an Anaconda distribution. You can test immediately whether conda is available on your system. Open a shell and type:

$> conda -V

If conda is available, its version will be displayed; otherwise, an error will be reported. If conda is not available, you can quickly install it on your system by going to conda.pydata.org/miniconda.html and installing the Miniconda software suitable for your computer. Miniconda is a minimal installation that only includes conda and its dependencies.

conda can help you manage two tasks: installing packages and creating virtual environments. In this section, we will explore how conda can help you easily install most of the packages you may need in your data science projects. Before starting, please check that you have the latest version of conda at hand:

$> conda update conda

Now you can install any package you need. To install the <package-name> generic package, you just need to run the following command:

$> conda install <package-name>

You can also install a particular version of the package just by pointing it out:

$> conda install <package-name>=1.11.0

Similarly, you can install multiple packages at once by listing all their names:

$> conda install <package-name-1> <package-name-2>

If you just need to update a package that you previously installed, you can keep on using conda:

$> conda update <package-name>

You can update all the available packages simply by using the --all argument:

$> conda update --all

Finally, conda can also uninstall packages for you:

$> conda remove <package-name>

If you would like to know more about conda, you can read its documentation at conda.pydata.org/docs/index.html. In summary, as its main advantage, conda handles binaries even better than easy_install (by always providing a successful installation on Windows without any need to compile the packages from source), but without easy_install's problems and limitations. With the use of conda, packages are easy to install (and installation is always successful), update, and even uninstall. On the other hand, conda cannot install directly from a git server (so it cannot access the latest version of many packages under development) and it doesn't cover all the packages available on PyPI, as pip itself does.

Enthought Canopy

Enthought Canopy (enthought.com/products/canopy) is a Python distribution by Enthought Inc. It includes more than 200 preinstalled packages, such as NumPy, SciPy, Matplotlib, Jupyter, and pandas. This distribution is targeted at engineers, data scientists, quantitative and data analysts, and enterprises. Its base version is free (it is named Canopy Express), but if you need advanced features, you have to buy the full version. It's a multiplatform distribution and its command-line install tool is canopy_cli.

PythonXY

PythonXY (python-xy.github.io) is a free, open source Python distribution maintained by the community. It includes a number of packages, such as NumPy, SciPy, NetworkX, Jupyter, and Scikit-learn. It also includes Spyder, an interactive development environment inspired by the MATLAB IDE. The distribution is free. It works only on Microsoft Windows, and its command-line installation tool is pip.

WinPython

WinPython (winpython.sourceforge.net) is also a free, open-source Python distribution maintained by the community. It is designed for scientists, and includes many packages such as NumPy, SciPy, Matplotlib, and Jupyter.
It also includes Spyder as an IDE. It is free and portable. You can put WinPython into any directory, or even onto a USB flash drive, and at the same time maintain multiple copies and versions of it on your system. It works only on Microsoft Windows, and its command-line tool is the WinPython Package Manager (WPPM).

Explaining virtual environments

Whether you have chosen to install a stand-alone Python or a scientific distribution, you may have noticed that you are actually bound on your system to the Python version you have installed. The only exception, for Windows users, is to use a WinPython distribution, since it is a portable installation and you can have as many different installations as you need. A simple solution to break free of such a limitation is to use virtualenv, a tool to create isolated Python environments. That means, by using different Python environments, you can easily achieve these things:

Testing any new package installation or doing experimentation on your Python environment without any fear of breaking anything in an irreparable way. In this case, you need a version of Python that acts as a sandbox.
Having at hand multiple Python versions (both Python 2 and Python 3), each geared with different versions of installed packages. This can help you in dealing with different versions of Python for different purposes (for instance, some of the packages we are going to present only work on Windows using Python 3.4, which is not the latest release).
Taking a replicable snapshot of your Python environment easily and having your data science prototypes work smoothly on any other computer or in production. In this case, your main concern is the immutability and replicability of your working environment.

You can find documentation about virtualenv at virtualenv.readthedocs.io/en/stable, though we are going to provide you with all the directions you need to start using it immediately. In order to take advantage of virtualenv, you first have to install it on your system:

$> pip install virtualenv

After the installation completes, you can start building your virtual environments. Before proceeding, you have to take a few decisions:

If you have more versions of Python installed on your system, you have to decide which version to pick up. Otherwise, virtualenv will take the Python version virtualenv was installed by on your system. In order to set a different Python version, you have to pass the argument -p followed by the version of Python you want, or insert the path of the Python executable to be used (for instance, -p python2.7, or just point to a Python executable such as -p c:\Anaconda2\python.exe).
With virtualenv, when required to install a certain package, it will install it from scratch, even if it is already available at a system level (in the Python directory you created the virtual environment from). This default behavior makes sense because it allows you to create a completely separated empty environment. In order to save disk space and limit the installation time of all the packages, you may instead decide to take advantage of already available packages on your system by using the argument --system-site-packages.
You may want to be able to later move around your virtual environment across Python installations, even among different machines. Therefore you may want to make the functioning of all of the environment's scripts relative to the path it is placed in by using the argument --relocatable (a combined invocation is sketched right after this list).
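As a concrete illustration of how these options combine, the following commands sketch the creation of an environment under a hypothetical name (sandbox), assuming a python3.4 interpreter is available on your PATH; the environment is made relocatable in a second step:

$> virtualenv -p python3.4 --system-site-packages sandbox
$> virtualenv --relocatable sandbox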
After deciding on the Python version, the linking to existing global packages, and the relocatability of the virtual environment, in order to start, you just launch the command from a shell. Declare the name you would like to assign to your new environment:

$> virtualenv clone

virtualenv will just create a new directory using the name you provided, in the path from which you actually launched the command. To start using it, you just enter the directory and run activate (on Linux and Mac OS X, source bin/activate):

$> cd clone
$> activate

At this point, you can start working on your separated Python environment, installing packages and working with code. If you need to install multiple packages at once, you may need a special command from pip, pip freeze, which will list all the packages (and their versions) you have installed on your system. You can record the entire list in a text file with this command:

$> pip freeze > requirements.txt

After saving the list in a text file, just take it into your virtual environment and install all the packages in a breeze with a single command:

$> pip install -r requirements.txt

Each package will be installed according to the order in the list (packages are listed in a case-insensitive sorted order). If a package requires other packages that are later in the list, that's not a big deal because pip automatically manages such situations. So if your package requires NumPy and NumPy is not yet installed, pip will install it first. When you're finished installing packages and using your environment for scripting and experimenting, in order to return to your system defaults, just issue this command:

$> deactivate

If you want to remove the virtual environment completely, after deactivating and getting out of the environment's directory, you just have to get rid of the environment's directory itself by a recursive deletion. For instance, on Windows you just do this:

$> rd /s /q clone

On Linux and Mac, the command will be:

$> rm -rf clone

If you are working extensively with virtual environments, you should consider using virtualenvwrapper, which is a set of wrappers for virtualenv that helps you manage multiple virtual environments easily. It can be found at bitbucket.org/dhellmann/virtualenvwrapper. If you are operating on a Unix system (Linux or OS X), another solution worth mentioning is pyenv (which can be found at https://github.com/yyuu/pyenv). It lets you set your main Python version, allows the installation of multiple versions, and creates virtual environments. Its peculiarity is that it does not depend on Python being installed and it works perfectly at the user level (no need for sudo commands).

conda for managing environments

If you have installed the Anaconda distribution, or you have tried conda using a Miniconda installation, you can also take advantage of the conda command to run virtual environments as an alternative to virtualenv. Let's see in practice how to use conda for that. We can check what environments we have available like this:

$> conda info -e

This command will report what environments you can use on your system based on conda. Most likely, your only environment will be just "root", pointing to your Anaconda distribution's folder. As an example, we can create an environment based on Python version 3.4, having all the necessary Anaconda-packaged libraries installed. That makes sense, for instance, for using the package Theano together with Python 3 on Windows (because of an issue we will explain in a few paragraphs).
In order to create such an environment, just do:

$> conda create -n python34 python=3.4 anaconda

The command asks for a particular Python version (3.4) and requires the installation of all packages available in the Anaconda distribution (the argument anaconda). It names the environment python34 using the argument -n. The complete installation should take a while, given the large number of packages in the Anaconda installation. After the installation has completed, you can activate the environment:

$> activate python34

If you need to install additional packages to your environment, when activated, you just do:

$> conda install -n python34 <package-name1> <package-name2>

That is, you make the list of the required packages follow the name of your environment. Naturally, you can also use pip install, as you would do in a virtualenv environment. You can also use a file instead of listing all the packages by name yourself. You can create a list in an environment using the list argument and piping the output to a file:

$> conda list -e > requirements.txt

Then, in your target environment, you can install the entire list using:

$> conda install --file requirements.txt

You can even create an environment based on a requirements list:

$> conda create -n python34 python=3.4 --file requirements.txt

Finally, after having used the environment, to close the session, you simply do this:

$> deactivate

Contrary to virtualenv, conda has a specialized argument to completely remove an environment from your system:

$> conda remove -n python34 --all

A glance at the essential packages

We mentioned that the two most relevant characteristics of Python are its ability to integrate with other languages and its mature package system, which is well embodied by PyPI (the Python Package Index: pypi.python.org/pypi), a common repository for the majority of Python open source packages that is constantly maintained and updated. The packages that we are now going to introduce are strongly analytical and they will constitute a complete Data Science Toolbox. All the packages are made up of extensively tested and highly optimized functions for both memory usage and performance, ready to carry out any scripting operation successfully. A walkthrough on how to install them is provided next. Partially inspired by similar tools present in R and MATLAB environments, we will explore together how a few selected Python commands can allow you to efficiently handle data and then explore, transform, experiment, and learn from it without having to write too much code or reinvent the wheel.

NumPy

NumPy, which is Travis Oliphant's creation, is the true analytical workhorse of the Python language. It provides the user with multidimensional arrays, along with a large set of functions for performing a multiplicity of mathematical operations on these arrays. Arrays are blocks of data arranged along multiple dimensions, which implement mathematical vectors and matrices. Characterized by optimal memory allocation, arrays are useful not just for storing data, but also for fast matrix operations (vectorization), which are indispensable when you wish to solve ad hoc data science problems:

Website: www.numpy.org
Version at the time of print: 1.11.0
Suggested install command: pip install numpy

As a convention largely adopted by the Python community, when importing NumPy, it is suggested that you alias it as np:

import numpy as np
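To give a first taste of the vectorized style that NumPy arrays enable, here is a minimal sketch with arbitrary numbers; none of the operations require an explicit loop:

import numpy as np

A = np.arange(6).reshape(2, 3)   # the 2x3 matrix [[0 1 2], [3 4 5]]
v = np.array([1.0, 2.0, 3.0])
print(A * 2 + 1)                 # element-wise arithmetic on the whole array
print(A.dot(v))                  # matrix-vector product: [  8.  26.]
print(A.mean(axis=0))            # column means: [ 1.5  2.5  3.5]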
SciPy

An original project by Travis Oliphant, Pearu Peterson, and Eric Jones, SciPy completes NumPy's functionalities, offering a larger variety of scientific algorithms for linear algebra, sparse matrices, signal and image processing, optimization, fast Fourier transformation, and much more:

Website: www.scipy.org
Version at the time of print: 0.17.1
Suggested install command: pip install scipy

pandas

The pandas package deals with everything that NumPy and SciPy cannot do. Thanks to its specific data structures, namely DataFrames and Series, pandas allows you to handle complex tables of data of different types (which is something that NumPy's arrays cannot do) and time series. Thanks to Wes McKinney's creation, you will be able to easily and smoothly load data from a variety of sources. You can then slice, dice, handle missing elements, add, rename, aggregate, reshape, and finally visualize your data at will:

Website: pandas.pydata.org
Version at the time of print: 0.18.1
Suggested install command: pip install pandas

Conventionally, pandas is imported as pd:

import pandas as pd

Scikit-learn

Started as part of the SciKits (SciPy Toolkits), Scikit-learn is the core of data science operations in Python. It offers all that you may need in terms of data preprocessing, supervised and unsupervised learning, model selection, validation, and error metrics. Scikit-learn started in 2007 as a Google Summer of Code project by David Cournapeau. Since 2013, it has been taken over by the researchers at INRIA (the French Institute for Research in Computer Science and Automation):

Website: scikit-learn.org/stable
Version at the time of print: 0.17.1
Suggested install command: pip install scikit-learn

Note that the imported module is named sklearn.

Jupyter

A scientific approach requires the fast experimentation of different hypotheses in a reproducible fashion. Initially named IPython and limited to working only with the Python language, Jupyter was created by Fernando Perez in order to address the need for an interactive Python command shell (based on the shell, web browser, and application interface), with graphical integration, customizable commands, rich history (in the JSON format), and computational parallelism for enhanced performance. Jupyter is our favoured choice; it is used to clearly and effectively illustrate operations with scripts and data, and the consequent results:

Website: jupyter.org
Version at the time of print: 1.0.0 (ipykernel = 4.3.1)
Suggested install command: pip install jupyter

Matplotlib

Originally developed by John Hunter, matplotlib is a library that contains all the building blocks that are required to create quality plots from arrays and to visualize them interactively.
You can find all the MATLAB-like plotting frameworks inside the pylab module:

Website: matplotlib.org
Version at the time of print: 1.5.1
Suggested install command: pip install matplotlib

You can simply import what you need for your visualization purposes with the following command:

import matplotlib.pyplot as plt

Statsmodels

Previously part of SciKits, statsmodels was intended as a complement to SciPy's statistical functions. It features generalized linear models, discrete choice models, time series analysis, and a series of descriptive statistics as well as parametric and nonparametric tests:

Website: statsmodels.sourceforge.net
Version at the time of print: 0.6.1
Suggested install command: pip install statsmodels

Beautiful Soup

Beautiful Soup, a creation of Leonard Richardson, is a great tool to scrape data out of HTML and XML files retrieved from the Internet. It works incredibly well, even in the case of tag soups (hence the name), which are collections of malformed, contradictory, and incorrect tags. After choosing your parser (the HTML parser included in Python's standard library works fine), thanks to Beautiful Soup, you can navigate through the objects in the page and extract text, tables, and any other information that you may find useful:

Website: www.crummy.com/software/BeautifulSoup
Version at the time of print: 4.4.1
Suggested install command: pip install beautifulsoup4

Note that the imported module is named bs4.

NetworkX

Developed by the Los Alamos National Laboratory, NetworkX is a package specialized in the creation, manipulation, analysis, and graphical representation of real-life network data (it can easily operate with graphs made up of a million nodes and edges). Besides specialized data structures for graphs and fine visualization methods (2D and 3D), it provides the user with many standard graph measures and algorithms, such as the shortest path, centrality, components, communities, clustering, and PageRank:

Website: networkx.github.io
Version at the time of print: 1.11
Suggested install command: pip install networkx

Conventionally, NetworkX is imported as nx:

import networkx as nx

NLTK

The Natural Language Toolkit (NLTK) provides access to corpora and lexical resources and to a complete suite of functions for statistical Natural Language Processing (NLP), ranging from tokenizers to part-of-speech taggers and from tree models to named-entity recognition. Initially, Steven Bird and Edward Loper created the package as an NLP teaching infrastructure for their course at the University of Pennsylvania. Now, it is a fantastic tool that you can use to prototype and build NLP systems:

Website: www.nltk.org
Version at the time of print: 3.2.1
Suggested install command: pip install nltk

Gensim

Gensim, programmed by Radim Rehurek, is an open source package that is suitable for the analysis of large textual collections with the help of parallel distributable online algorithms. Among its advanced functionalities, it implements Latent Semantic Analysis (LSA), topic modelling by Latent Dirichlet Allocation (LDA), and Google's word2vec, a powerful algorithm that transforms text into vector features that can be used in supervised and unsupervised machine learning:

Website: radimrehurek.com/gensim
Version at the time of print: 0.12.4
Suggested install command: pip install gensim
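As a taste of how little code a topic model requires, here is a minimal Gensim sketch on a toy corpus; the documents and the number of topics are arbitrary choices made purely for illustration:

from gensim import corpora, models

documents = [["human", "machine", "interface"],
             ["graph", "minors", "survey"],
             ["graph", "trees", "interface"]]
dictionary = corpora.Dictionary(documents)                 # map each token to an integer id
corpus = [dictionary.doc2bow(doc) for doc in documents]    # bag-of-words representation
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)
print(lda.show_topics())                                   # the word mixtures of the two topics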
PyPy

PyPy is not a package; it is an alternative implementation of Python 2.7.8 that supports most of the commonly used Python standard packages (unfortunately, NumPy is currently not fully supported). As an advantage, it offers enhanced speed and memory handling. Thus, it is very useful for heavy-duty operations on large chunks of data and it should be part of your big data handling strategies:

Website: pypy.org
Version at the time of print: 5.1
Download page: pypy.org/download.html

XGBoost

XGBoost is a scalable, portable, and distributed gradient boosting library (a tree ensemble machine learning algorithm). Initially created by Tianqi Chen from the University of Washington, it has been enriched by a Python wrapper by Bing Xu and an R interface by Tong He (you can read the story behind XGBoost directly from its principal creator at homes.cs.washington.edu/~tqchen/2016/03/10/story-and-lessons-behind-the-evolution-of-xgboost.html). XGBoost is available for Python, R, Java, Scala, Julia, and C++, and it can work on a single machine (leveraging multithreading) as well as in Hadoop and Spark clusters:

Website: xgboost.readthedocs.io/en/latest
Version at the time of print: 0.4
Download page: github.com/dmlc/xgboost

Detailed instructions for installing XGBoost on your system can be found at this page: github.com/dmlc/xgboost/blob/master/doc/build.md

The installation of XGBoost on both Linux and Mac OS is quite straightforward, whereas it is a little bit trickier for Windows users. For this reason, we provide specific installation steps to get XGBoost working on Windows:

First download and install Git for Windows (git-for-windows.github.io).
Then you need a MinGW compiler present on your system. You can download it from www.mingw.org according to the characteristics of your system.
From the command line, execute:

$> git clone --recursive https://github.com/dmlc/xgboost
$> cd xgboost
$> git submodule init
$> git submodule update

Then, always from the command line, copy the configuration for 64-bit systems to be the default one:

$> copy make\mingw64.mk config.mk

Alternatively, you can copy the plain 32-bit version:

$> copy make\mingw.mk config.mk

After copying the configuration file, you can run the compiler, setting it to use four threads in order to speed up the compiling procedure:

$> mingw32-make -j4

In MinGW, the make command comes with the name mingw32-make. If you are using a different compiler, the previous command may not work; then you can simply try:

$> make -j4

Finally, if the compiler completes its work without errors, you can install the package in your Python with this:

$> cd python-package
$> python setup.py install

After following all the preceding instructions, if you try to import XGBoost in Python and it doesn't load and results in an error, it may well be that Python cannot find MinGW's g++ runtime libraries. You just need to find the location of MinGW's binaries on your computer (in our case, it was in C:\mingw-w64\mingw64\bin; just modify the next code to use yours) and place the following code snippet before importing XGBoost:

import os
# Use a raw string so the backslashes in the Windows path are not treated as escape sequences
mingw_path = r'C:\mingw-w64\mingw64\bin'
os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
import xgboost as xgb

Depending on the state of the XGBoost project, similarly to many other projects under continuous development, the preceding installation commands may or may not temporarily work at the time you try them. Usually, waiting for an update of the project or opening an issue with the authors of the package may solve the problem.
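Once the import succeeds, a quick smoke test such as the following confirms that training works end to end; the data is synthetic and the parameters are arbitrary, so treat it as a minimal sketch rather than a meaningful model:

import numpy as np
import xgboost as xgb

X = np.random.rand(50, 3)             # 50 samples with 3 random features
y = (X[:, 0] > 0.5).astype(int)       # a toy binary target
dtrain = xgb.DMatrix(X, label=y)
params = {'objective': 'binary:logistic', 'max_depth': 2}
booster = xgb.train(params, dtrain, num_boost_round=10)
print(booster.predict(dtrain)[:5])    # predicted probabilities for the first five samples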
Theano

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Basically, it provides you with all the building blocks you need to create deep neural networks. Created by academics (an entire development team; you can read their names in their most recent paper at arxiv.org/pdf/1605.02688.pdf), Theano has been used for large-scale and intensive computations since 2007:

Website: deeplearning.net/software/theano
Version at the time of print: 0.8.2

In spite of the many installation problems experienced by users in the past (especially Windows users), the installation of Theano should now be straightforward, since the package is available on PyPI:

$> pip install Theano

If you want the most updated version of the package, you can get it by cloning the GitHub repository:

$> git clone git://github.com/Theano/Theano.git

Then you can proceed with a direct Python installation:

$> cd Theano
$> python setup.py install

To test your installation, you can run the following from the shell/CMD and verify the reports:

$> pip install nose
$> pip install nose-parameterized
$> nosetests theano

If you are working on a Windows OS and the previous instructions don't work, you can try these steps using the conda command provided by the Anaconda distribution:

Install TDM GCC x64 (this can be found at tdm-gcc.tdragon.net).
Open an Anaconda prompt interface and execute:

$> conda update conda
$> conda update --all
$> conda install mingw libpython
$> pip install git+git://github.com/Theano/Theano.git

Theano needs libpython, which isn't yet compatible with version 3.5. So if your Windows installation is not working, this could be the likely cause. Anyway, Theano installs perfectly on Python version 3.4. Our suggestion in this case is to create a virtual Python environment based on version 3.4, and install and use Theano only on that specific version. Directions on how to create virtual environments are provided in the paragraphs about virtualenv and conda create. In addition, Theano's website provides some information for Windows users; it could support you when everything else fails: deeplearning.net/software/theano/install_windows.html

An important requirement for Theano to scale out on GPUs is to install the Nvidia CUDA drivers and SDK for code generation and execution on the GPU. If you do not know much about the CUDA Toolkit, you can start from this web page in order to understand more about the technology being used: developer.nvidia.com/cuda-toolkit

Therefore, if your computer has an Nvidia GPU, you can find all the necessary instructions to install CUDA using this tutorial page from Nvidia itself: docs.nvidia.com/cuda/cuda-quick-start-guide/index.html

Keras

Keras is a minimalist and highly modular neural networks library, written in Python and capable of running on top of either Theano or TensorFlow (the open source software library for numerical computation released by Google). Keras was created by François Chollet, a machine learning researcher working at Google:

Website: keras.io
Version at the time of print: 1.0.3
Suggested installation from PyPI:

$> pip install keras

As an alternative, you can install the latest available version (which is advisable since the package is in continuous development) using the command:

$> pip install git+git://github.com/fchollet/keras.git

Summary

In this article, we performed a lot of installations, from Python packages to examples. They were installed either directly or by using a scientific distribution. We also introduced Jupyter notebooks and demonstrated how you can have access to the data used in the tutorials.
Resources for Article:

Further resources on this subject:

Python for Driving Hardware [Article]
Mining Twitter with Python – Influence and Engagement [Article]
Python Data Structures [Article]

article-image-learning-basic-nature-f-code
Packt
02 Nov 2016
6 min read
Save for later

Learning the Basic Nature of F# Code

Packt
02 Nov 2016
6 min read
In this article by Eriawan Kusumawardhono, author of the book F# High Performance, we look at why F# has been a first-class citizen, a built-in part of the programming language support in Visual Studio, starting from Visual Studio 2010. F# has its own unique trait: it is a functional programming language, but at the same time it has OOP support. From the start, F# has run on .NET, although it can also run cross-platform, for example on Android (using Mono). (For more resources related to this topic, see here.)

Although F# mostly runs faster than C# or VB when doing computations, its own performance characteristics and some not-so-obvious bad practices and subtleties may lead to performance bottlenecks. The bottlenecked code may or may not be faster than its C#/VB counterparts, although some of the bottlenecks share the same performance characteristics, such as the use of .NET APIs. The main goal of this book is to identify performance problems in F#, measuring and optimizing F# code to run more efficiently while also maintaining the functional programming style as appropriately as possible. A basic knowledge of F# (including the functional programming concept and basic OOP) is required as a prerequisite to start understanding the performance problems and the optimization of F#. There are many ways to define F# performance characteristics and to measure them, but understanding the mechanics of running F# code, especially on top of .NET, is crucial, and it is also a part of the performance characteristics themselves. This includes other aspects of approaches to identify concurrency problems and language constructs.

Understanding the nature of F# code

Understanding the nature of F# code is crucial, and it is a definitive prerequisite before we begin to measure how long it runs and how effective it is. We can measure running F# code by its running time, but to fully understand why it may run slow or fast, there are some basic concepts we have to consider first. Before we dive more into this, we must meet the basic requirements and setup. After the requirements have been set, we need to put in place the environment settings of Visual Studio 2015. We have to set this because we need to maintain the consistency of the default settings of Visual Studio. The settings should be set to General. These are the steps:

Select the Tools menu from Visual Studio's main menu.
Select Import and Export Settings... and the Import and Export Settings Wizard screen is displayed.
Select Reset all Settings and then Next to proceed.
Select No, just reset my settings, overwriting my current settings, and then Next to proceed.
Select General and then Next to proceed.

After setting it up, we will have a consistent layout to be used throughout this book, including the menu locations and the look and feel of Visual Studio. Now we are going to scratch the surface of the F# runtime with an introductory overview of its common characteristics, which will give us some insights into F# performance.

F# runtime characteristics

The release of Visual Studio 2015 occurred at the same time as the release of .NET 4.6 and the rest of the tools, including the F# compiler. The compiler version of F# in Visual Studio 2015 is F# 4.0. F# 4.0 has no large differences or notable new features compared to the previous version, F# 3.0 in Visual Studio 2013. Its runtime characteristics are essentially the same as those of F# 3.0, although there are some subtle performance improvements and bug fixes.
For more information on what's new in F# 4.0 (described in the release notes), visit: https://github.com/Microsoft/visualfsharp/blob/fsharp4/CHANGELOG.md. At the time of writing this book, the online and offline MSDN Library of F# in Visual Studio does not have F# 4.0 release notes documentation, but you can always go to the GitHub repository of F# to check the latest updates.

These are the common characteristics of F# as a managed programming language:

F# must conform to the .NET CLR. This includes the compatibilities, the IL emitted after compilation, and support for the .NET BCL (the base class library). Therefore, F# functions and libraries can be used by other CLR-compliant languages such as C#, VB, and managed C++.
The debug symbols (PDB) have the same format and semantics as those of other CLR-compliant languages. This is important, because F# code must be able to be debugged from other CLR-compliant languages as well.

From the managed languages perspective, measuring the performance of F# is similar when measured by tools such as the CLR profiler. But from F#'s unique perspective, these are F#-only characteristics:

By default, all types in F# are immutable. Therefore, it's safe to assume they are intrinsically thread safe.
F# has a distinctive collection library, and it is immutable by default. It is also safe to assume it is intrinsically thread safe.
F# has a strong type inference model, and when a generic type is inferred without any concrete type, it automatically performs generalization.
By default, functions in F# are implemented internally by creating an internal class derived from F#'s FastFunc. This FastFunc is essentially a delegate that is used by F# to apply functional language constructs such as currying and partial application.
With tail call recursive optimization in the IL, the F# compiler may emit .tail IL, and then the CLR will recognize this and perform the optimization at runtime.
F# has inline functions as an option.
F# has a computation workflow that is used to compose functions.
F# async computation doesn't need Task<T> to implement it.

Although F# async doesn't need the Task<T> object, it can operate well with the async-await model in C# and VB. The async-await model in C# and VB is inspired by F# async, but behaves semantically differently based on more things than just the usage of Task<T>. All of these characteristics are not only unique, but they can also have performance implications when used to interoperate with C# and VB.

Summary

This article gave a basic introduction to the F# setup in the Visual Studio IDE, along with the runtime characteristics of F#.

Resources for Article:

Further resources on this subject:

Creating an F# Project [article]
Unit Testing [article]
Working with Windows Phone Controls [article]