How-To Tutorials

Configuring the ESP8266

Packt
14 Jun 2017
10 min read
In this article by Marco Schwartz, the author of the book ESP8266 Internet of Things Cookbook, we will learn the following recipes:

Setting up the Arduino development environment for the ESP8266
Choosing an ESP8266
Required additional components

(For more resources related to this topic, see here.)

Setting up the Arduino development environment for the ESP8266

To start us off, we will look at how to set up the Arduino IDE development environment so that we can use it to program the ESP8266. This will involve installing the Arduino IDE and getting the board definitions for our ESP8266 module.

Getting ready

The first thing you should do is download the Arduino IDE if you do not already have it installed on your computer. You can do that from this link: https://www.arduino.cc/en/Main/Software. The webpage will appear as shown. It features the latest version of the Arduino IDE. Select your operating system and download the latest version that is available when you access the link (it was 1.6.13 when this article was being written):

When the download is complete, install the Arduino IDE and run it on your computer. Now that the installation is complete, it is time to get the ESP8266 board definitions. Open the preferences window in the Arduino IDE from File | Preferences or by pressing Ctrl + Comma. Copy this URL: http://arduino.esp8266.com/stable/package_esp8266com_index.json. Paste it in the field labelled Additional Board Manager URLs, as shown in the figure. If you are adding other URLs too, use a comma to separate them:

Open the board manager from the Tools | Board menu and install the ESP8266 platform. The board manager will download the board definition files from the link provided in the preferences window and install them. When the installation is complete, the ESP8266 board definitions should appear as shown in the screenshot. Now you can select your ESP8266 board from the Tools | Board menu:

How it works…

The Arduino IDE is an open source development environment used for programming Arduino boards and Arduino-based boards. It is also used to upload sketches to other open source boards, such as the ESP8266. This makes it an important accessory when creating Internet of Things projects.

Choosing an ESP8266 board

The ESP8266 module is a self-contained System On Chip (SOC) that features an integrated TCP/IP protocol stack, which allows you to add Wi-Fi capability to your projects. The module is usually mounted on circuit boards that break out the pins of the ESP8266 chip, making it easy for you to program the chip and to interface with input and output devices. ESP8266 boards come in different forms depending on the company that manufactures them. All the boards use Espressif's ESP8266 chip as the main controller, but they have different additional components and different pin configurations, giving each board unique additional features. Therefore, before embarking on your IoT project, take some time to compare and contrast the different types of ESP8266 boards that are available. This way, you will be able to select the board whose features are best suited to your project.

Available options

The simple ESP8266-01 module is the most basic ESP8266 board available in the market. It has 8 pins, which include 4 General Purpose Input/Output (GPIO) pins, serial communication TX and RX pins, an enable pin, and power pins VCC and GND. Since it only has 4 GPIO pins, you can only connect three inputs or outputs to it. The 8-pin header on the ESP8266-01 module has a 2.0mm spacing, which is not compatible with breadboards.
Therefore, you have to look for another way to connect the ESP8266-01 module to your setup when prototyping. You can use female to male jumper wires to do that:

The ESP8266-07 is an improved version of the ESP8266-01 module. It has 16 pins, which comprise 9 GPIO pins, serial communication TX and RX pins, a reset pin, an enable pin, and power pins VCC and GND. One of the GPIO pins can be used as an analog input pin. The board also comes with a U.FL connector that you can use to plug in an external antenna in case you need to boost the Wi-Fi signal. Since the ESP8266-07 has more GPIO pins, you can have more inputs and outputs in your project. Moreover, it supports both SPI and I2C interfaces, which can come in handy if you want to use sensors or actuators that communicate using either of those protocols. Programming the board requires the use of an external FTDI breakout board based on USB to serial converters such as the FT232RL chip. The pads/pinholes of the ESP8266-07 have a 2.0mm spacing, which is not breadboard friendly. To solve this, you have to acquire a plate holder that breaks out the ESP8266-07 pins to a breadboard compatible pin configuration, with 2.54mm spacing between the pins. This will make prototyping easier. This board has to be powered from a 3.3V source, which is the operating voltage for the ESP8266 chip:

The Olimex ESP8266 module is a breadboard compatible board that features the ESP8266 chip. Just like the ESP8266-07 board, it has SPI, I2C, serial UART, and GPIO interface pins. In addition to that, it also comes with a Secure Digital Input/Output (SDIO) interface, which is ideal for communication with an SD card. This adds 6 extra pins to the configuration, bringing the total to 22 pins. Since the board does not have an on-board USB to serial converter, you have to program it using an FTDI breakout board or a similar USB to serial board/cable. Moreover, it has to be powered from a 3.3V source, which is the recommended voltage for the ESP8266 chip:

The Sparkfun ESP8266 Thing is a development board for the ESP8266 Wi-Fi SOC. It has 20 pins that are breadboard friendly, which makes prototyping easy. It features SPI, I2C, serial UART, and GPIO interface pins, enabling it to be interfaced with many input and output devices. There are 8 GPIO pins, including the I2C interface pins. The board has a 3.3V voltage regulator, which allows it to be powered from sources that provide more than 3.3V. It can be powered using a micro USB cable or a Li-Po battery. The USB cable also charges the attached Li-Po battery, thanks to the Li-Po battery charging circuit on the board. Programming has to be done via an external FTDI board:

The Adafruit Feather Huzzah ESP8266 is a fully stand-alone ESP8266 board. It has a built-in USB to serial interface that eliminates the need for an external FTDI breakout board to program it. Moreover, it has an integrated battery charging circuit that charges any connected Li-Po battery when the USB cable is connected. There is also a 3.3V voltage regulator on the board that allows the board to be powered with more than 3.3V. Though there are 28 breadboard friendly pins on the board, only 22 are usable. 10 of those pins are GPIO pins and can also be used for SPI as well as I2C interfacing. One of the GPIO pins is an analog pin:

What to choose?

All the ESP8266 boards will add Wi-Fi connectivity to your project. However, some of them lack important features and are difficult to work with. So, the best option would be to use the module that has the most features and is easy to work with.
The Adafruit ESP8266 fits the bill. It is completely stand-alone and easy to power, program, and configure due to its on-board features. Moreover, it offers many input/output pins that will enable you to add more features to your projects. It is affordable and small enough to fit in projects with limited space.

There's more…

Wi-Fi isn't the only technology that we can use to connect our projects to the internet. There are other options, such as Ethernet and 3G/LTE. There are shields and breakout boards that can be used to add these features to open source projects. You can explore these other options and see which works for you.

Required additional components

To demonstrate how the ESP8266 works, we will use some additional components. These components will help us learn how to read sensor inputs and control actuators using the GPIO pins. Through this, you can post sensor data to the internet and control actuators from internet resources such as websites.

Required components

The components we will use include:

Sensors: DHT11, photocell, soil humidity sensor
Actuators: relay, power switch tail kit, water pump
Breadboard
Jumper wires
Micro USB cable

Sensors

Let us discuss the three sensors we will be using.

DHT11

The DHT11 is a digital temperature and humidity sensor. It uses a thermistor and a capacitive humidity sensor to monitor the humidity and temperature of the surrounding air and produces a digital signal on the data pin. A digital pin on the ESP8266 can be used to read the data from the sensor data pin:

Photocell

A photocell is a light sensor that changes its resistance depending on the amount of incident light it is exposed to. It can be used in a voltage divider setup to detect the amount of light in the surroundings. In a setup where the photocell is used on the Vcc side of the voltage divider, the output of the voltage divider goes high when the light is bright and low when the light is dim. The output of the voltage divider is connected to an analog input pin, where the voltage readings can be read:

Soil humidity sensor

The soil humidity sensor is used for measuring the amount of moisture in soil and other similar materials. It has two large exposed pads that act as a variable resistor. If there is more moisture in the soil, the resistance between the pads reduces, leading to a higher output signal. The output signal is connected to an analog pin, from where its value is read:

Actuators

Let's discuss the actuators.

Relays

A relay is a switch that is operated electrically. It uses electromagnetism to switch large loads using small voltages. It comprises three parts: a coil, a spring, and contacts. When the coil is energized by a HIGH signal from a digital pin of the ESP8266, it attracts the contacts, forcing them closed. This completes the circuit and turns on the connected load. When the signal on the digital pin goes LOW, the coil is no longer energized and the spring pulls the contacts apart. This opens the circuit and turns off the connected load:

Power switch tail kit

A power switch tail kit is a device that is used to control standard wall outlet devices with microcontrollers. It is already packaged to prevent you from having to mess around with high voltage wiring. Using it, you can control appliances in your home using the ESP8266:

Water pump

A water pump is used to increase the pressure of fluids in a pipe. It uses a DC motor to rotate a fan and create a vacuum that sucks up the fluid.
The sucked fluid is then forced to move by the fan, creating a vacuum again that sucks up the fluid behind it. This in effect moves the fluid from one place to another:

Breadboard

A breadboard is used to temporarily connect components without soldering. This makes it an ideal prototyping accessory that comes in handy when building circuits:

Jumper wires

Jumper wires are flexible wires that are used to connect different parts of a circuit on a breadboard:

Micro USB cable

A micro USB cable will be used to connect the Adafruit ESP8266 board to the computer.

Summary

In this article we have learned how to set up the Arduino development environment for the ESP8266, how to choose an ESP8266 board, and which additional components are required.

Resources for Article:

Further resources on this subject:
Internet of Things with BeagleBone [article]
Internet of Things Technologies [article]
BLE and the Internet of Things [article]

Docker Swarm

Packt
14 Jun 2017
8 min read
In this article by Russ McKendrick, the author of the book Docker Bootcamp, we will cover the following topics:

Creating a Swarm manually
Launching a service

(For more resources related to this topic, see here.)

Creating a Swarm manually

To start off with, we need to launch the hosts. To do this, run the following commands, remembering to replace the Digital Ocean API access token with your own:

docker-machine create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm01
docker-machine create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm02
docker-machine create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm03

Once launched, running docker-machine ls should show you a list of your machines. Also, this should be reflected in your Digital Ocean control panel.

Now we have our Docker hosts, and we need to assign a role to each of the nodes within the cluster. Docker Swarm has two node roles:

Manager: A manager is a node which dispatches tasks to the workers; all your interaction with the Swarm cluster will be targeted against a manager node. You can have more than one manager node, however in this example we will be using just one.
Worker: Worker nodes accept the tasks dispatched by the manager node; these are where all your services are launched. We will go into services in more detail once we have our cluster configured.

In our cluster, swarm01 will be the manager node, with swarm02 and swarm03 being our two worker nodes. We are going to use the docker-machine ssh command to execute commands directly on our three nodes, starting with configuring our manager node. The commands in this walkthrough will only work on Mac and Linux; the commands to run on Windows will be covered at the end of this section.

Before we initialize the manager node, we need to capture the IP address of swarm01 as a command-line variable:

managerIP=$(docker-machine ip swarm01)

Now that we have the IP address, run the following command to check that it is correct:

echo $managerIP

And then, to configure the manager node, run:

docker-machine ssh swarm01 docker swarm init --advertise-addr $managerIP

You will then receive confirmation that swarm01 is now a manager, along with instructions on what to run to add a worker to the cluster. You don't have to make a note of the instructions, as we will be running the command in a slightly different way. To add our two workers, we need to capture the join token in a similar way to how we captured the IP address of our manager node using the $managerIP variable. To do this, run:

joinToken=$(docker-machine ssh swarm01 docker swarm join-token -q worker)

Again, echo the variable out to check that it is valid:

echo $joinToken

Now it's time to add our two worker nodes into the cluster by running:

docker-machine ssh swarm02 docker swarm join --token $joinToken $managerIP:2377
docker-machine ssh swarm03 docker swarm join --token $joinToken $managerIP:2377

You should see something similar to the following terminal output. Connect your local Docker client to the manager node using:

eval $(docker-machine env swarm01)

and then run docker-machine ls again. As you can see from the list of hosts, swarm01 is now active, but there is nothing in the SWARM column. Why is that?
Confusingly, there are two different types of Docker Swarm cluster: there is the legacy Docker Swarm, which was managed by Docker Machine, and then there is the new Docker Swarm mode, which is managed by the Docker engine itself. We have launched a Docker Swarm mode cluster; this is now the preferred way of launching Swarm, and the legacy Docker Swarm is slowly being retired.

To get a list of the nodes within our Swarm cluster, we need to run the following command:

docker node ls

For information on each node, you can run the following command (the --pretty flag renders the JSON output from the Docker API):

docker node inspect swarm01 --pretty

You are given a wealth of information about the host, including the fact that it is a manager and that it has been launched in Digital Ocean. Running the same command for a worker node shows similar information:

docker node inspect swarm02 --pretty

However, as the node is not a manager, that section is missing.

Before we look at launching services into our cluster, we should look at how to launch our cluster using Docker Machine on Windows, as there are a few differences in the commands used due to differences between PowerShell and bash. First, we need to launch the three hosts:

docker-machine.exe create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm01
docker-machine.exe create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm02
docker-machine.exe create --driver digitalocean --digitalocean-access-token 57e4aeaff8d7d1a8a8e46132969c2149117081536d50741191c79d8bc083ae73 swarm03

Once the three hosts are up and running, you can create the manager node by running:

$managerIP = $(docker-machine.exe ip swarm01)
echo $managerIP
docker-machine.exe ssh swarm01 docker swarm init --advertise-addr $managerIP

Once you have your manager, you can add the two worker nodes:

$joinIP = "$(docker-machine.exe ip swarm01):2377"
echo $joinIP
$joinToken = $(docker-machine.exe ssh swarm01 docker swarm join-token -q worker)
echo $joinToken
docker-machine.exe ssh swarm02 docker swarm join --token $joinToken $joinIP
docker-machine.exe ssh swarm03 docker swarm join --token $joinToken $joinIP

and then configure your local Docker client to use your manager node and check the cluster status:

docker-machine.exe env --shell powershell swarm01 | Invoke-Expression
docker-machine.exe ls
docker node ls

At this stage, no matter which operating system you are using, you should have a three node Docker Swarm cluster in Digital Ocean. We can now look at launching a service into our cluster.

Launching a service

Rather than launching containers using the docker container run command, you need to create a service. A service defines a task, which the manager then passes to one of the workers, and then a container is launched:

docker service create --name cluster -p:80:80/tcp russmckendrick/cluster

That's it; we should now have a single container running on one of our three nodes. To check that the service is running and get a little more information about the service, run the following commands:

docker service ls
docker service inspect cluster --pretty

Now that we have confirmed that our service is running, you will be able to open your browser and enter the IP address of one of your three nodes (which you can get by running docker-machine ls). One of the features of Docker Swarm is its routing mesh. A routing mesh?
When we exposed the port using the -p:80:80/tcp flag, we did a little more than map port 80 on the host to port 80 on the container; we actually created a Swarm load balancer on port 80 across all of the hosts within the cluster. The Swarm load balancer then directs requests to containers within our cluster.

Running the following commands should show you which tasks are running on which nodes; remember, tasks are containers which have been launched by the service:

docker node ps swarm01
docker node ps swarm02
docker node ps swarm03

Like me, you probably have your single task running on swarm01. We can make things more interesting by scaling our service to add more tasks. To do this, simply run the following commands to scale and check our service:

docker service scale cluster=6
docker service ls
docker service inspect cluster --pretty

As you should see, we now have 6 tasks running within our cluster service. Checking the nodes should show that the tasks are evenly distributed between our three nodes:

docker node ps swarm01
docker node ps swarm02
docker node ps swarm03

Hitting refresh in your browser should also update the hostname shown under the Docker image. Another way of seeing this on Mac and Linux is to run the following command:

curl -s http://$(docker-machine ip swarm01)/ | grep class=

As you can see from the following terminal output, our requests are being load balanced between the running tasks.

Before we terminate our Docker Swarm cluster, let's look at another way we can launch services. Before we do, we need to remove the currently running service; to do this, simply run:

docker service rm cluster

Summary

In this article we have learned how to create a Swarm manually, and how to launch a service.

Resources for Article:

Further resources on this subject:
Orchestration with Docker Swarm [article]
Hands On with Docker Swarm [article]
Introduction to Docker [article]

IoT Analytics for the Cloud

Packt
14 Jun 2017
19 min read
In this article by Andrew Minteer, author of the book Analytics for the Internet of Things (IoT), now that you understand how your data is transmitted back to the corporate servers, you feel you have more of a handle on it. You also have a reference frame in your head on how it is operating out in the real world.

(For more resources related to this topic, see here.)

Your boss stops by again. "Is that rolling average job done running yet?", he asks impatiently. It used to run fine and finished in an hour three months ago. It has steadily taken longer and longer, and now sometimes does not even finish. Today, it has been going on six hours and you are crossing your fingers. Yesterday it crashed twice with what looked like out of memory errors.

You have talked to your IT group and finance group about getting a faster server with more memory. The cost would be significant, and it would likely take months to complete the process of going through purchasing, putting it on order, and having it installed. Your friend in finance is hesitant to approve it. The money was not budgeted for this fiscal year. You feel bad, especially since this is the only analytic job causing you problems. It just runs once a month, but it produces key data. Not knowing what else to say, you give your boss a hopeful, strained smile and show him your crossed fingers. "It's still running...that's good, right?"

This article is about the advantages of cloud based infrastructure for handling and analyzing IoT data. We will discuss cloud services including Amazon Web Services (AWS), Microsoft Azure, and ThingWorx. You will learn how to implement analytics elastically to enable a wide variety of capabilities. This article will cover:

Building elastic analytics
Designing for scale
Cloud security and analytics
Key cloud providers: Amazon AWS, Microsoft Azure, PTC ThingWorx

Building elastic analytics

IoT data volumes increase quickly. Analytics for IoT is particularly compute intensive at times that are difficult to predict. Business value is uncertain and requires a lot of experimentation to find the right implementation. Combine all that together and you need something that scales quickly, is dynamic and responsive to resource needs, and has virtually unlimited capacity at just the right time. And all of that needs to be implemented quickly, with low cost and low maintenance needs. Enter the cloud. IoT analytics and cloud infrastructure fit together like a hand in a glove.

What is cloud infrastructure? The National Institute of Standards and Technology defines five essential characteristics:

On-demand self-service: You can provision things like servers and storage as needed and without interacting with someone.
Broad network access: Your cloud resources are accessible over the internet (if enabled) by various methods such as a web browser or mobile phone.
Resource pooling: Cloud providers pool their servers and storage capacity across many customers using a multi-tenant model. Resources, both physical and virtual, are dynamically assigned and reassigned as needed. The specific location of resources is unknown and generally unimportant.
Rapid elasticity: Your resources can be elastically created and destroyed. This can happen automatically as needed to meet demand. You can scale outward rapidly. You can also contract rapidly. The supply of resources is effectively unlimited from your viewpoint.
Measured service: Resource usage is monitored, controlled, and reported by the cloud provider.
You have access to the same information, providing transparency to your utilization. Cloud systems continuously optimize resources automatically.

There is a notion of private clouds that exist on premises or are custom built by a third party for a specific organization. For our concerns, we will be discussing public clouds only. By and large, most analytics will be done on public clouds, so we will concentrate our efforts there.

The capacity available at your fingertips on public clouds is staggering. AWS, as of June 2016, has an estimated 1.3 million servers online. These servers are thought to be three times more efficient than enterprise systems. Cloud providers own the hardware and maintain the network and systems required for the available services. You just have to provision what you need to use, typically through a web application.

Providers offer different levels of abstraction. They offer lower level servers and storage where you have fine grained control. They also offer managed services that handle the provisioning of servers, networking, and storage for you. These are used in conjunction with each other without much distinction between the two. Hardware failures are handled automatically. Resources are transferred to new hardware and brought back online. The physical components become unimportant when you design for the cloud; they are abstracted away and you can focus on resource needs.

The advantages to using the cloud:

Speed: You can bring cloud resources online in minutes.
Agility: The ability to quickly create and destroy resources leads to ease of experimentation. This increases the agility of analytics organizations.
Variety of services: Cloud providers have many services available to support analytics workflows that can be deployed in minutes. These services manage hardware and storage needs for you.
Global reach: You can extend the reach of analytics to the other side of the world with a few clicks.
Cost control: You only pay for the resources you need at the time you need them. You can do more for less.

To get an idea of the power that is at your fingertips, here is an architectural diagram of something NASA built on AWS as part of an outreach program for school children. Source: Amazon Web Services; https://aws.amazon.com/lex/

By speaking voice commands, it will communicate with a Mars Rover replica to retrieve IoT data such as temperature readings. The process includes voice recognition, natural speech generation from text, data storage and processing, interaction with the IoT device, networking, security, and the ability to send text messages. This was not a year's worth of development effort; it was built by tying together cloud based services already in place. And it is not just for big, funded government agencies like NASA. All of these services and many more are available to you today if your analytics runs in the cloud.

Elastic analytics concepts

What do we mean by Elastic Analytics? Let's define it as designing your analytics processes so scale is not a concern. You want your focus to be on the analytics and not on the underlying technology. You want to avoid constraining your analytics capability so it will fit within some set hardware limitations. Focus instead on potential value versus costs. Trade hardware constraints for cost constraints.

You also want your analytics to be able to scale. It should go from supporting 100 IoT devices to 1 million IoT devices without requiring any fundamental changes. All that should happen is that the costs increase.
This reduces complexity and increases maintainability. That translates into lower costs, which enables you to do more analytics. More analytics increases the probability of finding value. Finding more value enables even more analytics. Virtuous circle!

Some core Elastic Analytics concepts:

Separate compute from storage: We are used to thinking about resources like laptop specifications. You buy one device that has 16GB memory and a 500GB hard drive because you think that will meet 90% of your needs and it is the top of your budget. Cloud infrastructure abstracts that away. Doing analytics in the cloud is like renting a magic laptop where you can change 4GB memory into 16GB by snapping your fingers. Your rental bill increases for only the time you have it at 16GB. You snap your fingers again and drop it back down to 4GB to save some money. Your hard drive can grow and shrink independently of the memory specification. You are not stuck having to choose a good balance between them. You can match compute needs with requirements.
Build for scale from the start: Use software, services, and programming code that can scale from 1 to 1 million without changes. Each analytic process you put in production has continuing maintenance effort that will build up over time as you add more and more. Make it easy on yourself later on. You do not want to have to stop what you are doing to re-architect a process you built a year ago because it hit the limits of scale.
Make your bottleneck wetware not hardware: By wetware, we mean brain power. "My laptop doesn't have enough memory to run the job" should never be the problem. It should always be "I haven't figured it out yet, but I have several possibilities in test as we speak."
Manage to a spend budget not to available hardware: Use as many cloud resources as you need, as long as it fits within your spend budget. There is no need to limit analytics to fit within a set number of servers when you run analytics in the cloud. Traditional enterprise architecture purchases hardware ahead of time, which incurs a capital expense. Your finance guy does not (usually) like capital expense. You should not like it either, as it means a ceiling has just been set on what you can do (at least in the near term). Managing to spend means keeping an eye on costs, not on resource limitations. Expand when needed and make sure to contract quickly to keep costs down.
Experiment, experiment, experiment: Create resources, try things out, kill them off if it does not work. Then try something else. Iterate to the right answer. Scale out resources to run experiments. Stretch when you need to. Bring it back down when you are done.

If Elastic Analytics is done correctly, you will find your biggest limitations are time and wetware, not hardware and capital.

Design with the endgame in mind

Consider how the analytics you develop in the cloud would end up if successful. Would it turn into a regularly updated dashboard? Would it be something deployed to run under certain conditions to predict customer behavior? Would it periodically run against a new set of data and send an alert if an anomaly is detected? When you list out the likely outcomes, think about how easy it would be to transition from the analytics in development to the production version that will be embedded in your standard processes. Choose tools and analytics that make that transition quick and easy.

Designing for scale

Following some key concepts will help keep changes to your analytics processes to a minimum as your needs scale.
Decouple key components

Decoupling means separating functional groups into components so they are not dependent upon each other to operate. This allows functionality to change, or new functionality to be added, with minimal impact on other components.

Encapsulate analytics

Encapsulate means grouping together similar functions and activity into distinct units. It is a core principle of object oriented programming, and you should employ it in analytics as well. The goal is to reduce complexity and simplify future changes.

As your analytics develop, you will have a list of actions that are either transforming the data, running it through a model or algorithm, or reacting to the result. It can get complicated quickly. By encapsulating the analytics, it is easier to know where to make changes when needed down the road. You will also be able to reconfigure parts of the process without affecting the other components.

The encapsulation process is carried out in the following steps:

Make a list of the steps.
Organize them into groups.
Think about which groups are likely to change together.
Separate the groups that are independent into their own process.

It is a good idea to have the data transformation steps separate from the analytical steps if possible. Sometimes the analysis is tightly tied to the data transformation and it does not make sense to separate them, but in most cases they can be separated. The action steps based on the analysis results should almost always be separate. Each group of steps will also have its own resource needs. By encapsulating them and separating the processes, you can assign resources independently and scale more efficiently where you need it. You can do more with less.

Decouple with message queues

Decoupling encapsulated analytics processes with message queues has several advantages. It allows for change in any process without requiring the other ones to adjust, because there is no direct link between them. It also builds in some robustness in case one process has a failure. The queue can continue to expand without losing data while the down process restarts, and nothing will be lost after things get going again.

What is a message queue?

Simple diagram of a message queue

New data comes into a queue as a message, it goes into line for delivery, and then it is delivered to the end server when it gets its turn. The process adding a message is called the publisher and the process receiving the message is called the subscriber. The message queue exists regardless of whether the publisher or subscriber is connected and online. This makes it robust against intermittent connections (intentional or unintentional). The subscriber does not have to wait until the publisher is willing to chat, and vice versa.

The size of the queue can also grow and shrink as needed. If the subscriber gets behind, the queue just grows to compensate until it can catch up. This can be useful if there is a sudden burst in messages by the publisher. The queue will act as a buffer and expand to capture the messages while the subscriber is working through the sudden influx. There is a limit, of course. If the queue reaches some set threshold, it will reject (and you will most likely lose) any incoming messages until the queue gets back under control.

A contrived but real world example of how this can happen:

Joe Cut-rate (the developer): Hey, when do you want this doo-hickey device to wake up and report?
Jim Unawares (the engineer): Every 4 hours.
Joe Cut-rate: No sweat.
I'll program it to start at 12am UTC, then every 4 hours after. How many of these you gonna sell again?
Jim Unawares: About 20 million.
Joe Cut-rate: Um….friggin awesome! I better hardcode that 12am UTC then, huh?

4 months later

Jim Unawares: We're only getting data from 10% of the devices. And it is never the same 10%. What the heck?
Angela the analyst: Every device in the world reports at exactly the same time, first thing I checked. The message queues are filling up since our subscribers can't process that fast, so new messages are dropped. If you hardcoded the report time, we're going to have to get the checkbook out to buy a ton of bandwidth for the queues. And we need to do it NOW, since we are losing 90% of the data every 4 hours. You guys didn't do that, did you?

Although queues in practice typically operate with little lag, make sure the origination time of the data is tracked and not just the time the data was pulled off the queue. It can be tempting to just capture the time the message was processed to save space, but that can cause problems for your analytics.

Why is this important for analytics? If you only have the date and time the message was received by the subscribing server, it may not be as close as you think to the time the message was generated at the originating device. If there are recurring problems with message queues, the spread in time difference would ebb and flow without you being aware of it. You will be using time values extensively in predictive modeling. If the time values are sometimes accurate and sometimes off, the models will have a harder time finding predictive value in your data. Your potential revenue from repurposing the data can also be affected. Customers are unlikely to pay for a service tracking event times for them if it is not always accurate.

There is a simple solution. Make sure the time the device sends the data is tracked along with the time the data is received. You can monitor delivery times to diagnose issues and keep a close eye on information lag times. For example, if you notice the delivery time steadily increases just before you get a data loss, it is probably the message queue filling up. If there is no change in delivery time before a loss, it is unlikely to be the queue. Another benefit to using the cloud is (virtually) unlimited queue sizes when you use a managed queue service. This makes the situation described much less likely to occur.

Distributed computing

Also called cluster computing, distributed computing refers to spreading processes across multiple servers using frameworks that abstract the coordination of each individual server. The frameworks make it appear as if you are using one unified system. Under the covers, it could be a few servers (called nodes) or thousands. The framework handles that orchestration for you.

Avoid containing analytics to one server

The advantage of this for IoT analytics is in scale. You can add resources by adding nodes to the cluster; no change to the analytics code is required. Try to avoid containing analytics to one server (with a few exceptions). This puts a ceiling on scale.

When to use distributed and when to use one server

There is a complexity cost to distributed computing, though. It is not as simple as single server analytics. Even though the frameworks handle a lot of the complexity for you, you still have to think and design your analytics to work across multiple nodes.
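To make the idea of framework-abstracted coordination concrete, here is a minimal sketch using Apache Spark's Python API, one example of such a framework (the article does not prescribe a specific one). The bucket paths and column names are hypothetical placeholders; the point is that the same few lines run unchanged whether the cluster behind them has one node or a hundred.

# Minimal PySpark sketch: the framework decides how the work is spread across nodes.
# The storage paths and column names are made-up placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iot-monthly-rollup").getOrCreate()

# Read raw device readings; Spark partitions the input across the cluster for us.
readings = spark.read.json("s3://example-bucket/device-readings/")

# Aggregate per device and month; the shuffle and parallelism are handled by the framework.
monthly = readings.groupBy("device_id", "month").avg("temperature")

# Write the results back to storage; adding or removing nodes changes nothing in this code.
monthly.write.mode("overwrite").parquet("s3://example-bucket/monthly-averages/")

spark.stop()

The same script scales from a laptop running in local mode to a large cluster simply by changing where it is submitted, which is exactly the property you want when designing for scale.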
Some guidelines on when to keep it simple on one server:

There is not much need for scale: Your analytics needs little change even if the number of IoT devices and the data explodes. For example, the analytics runs a forecast on data already summarized by month. The volume of devices makes little difference in that case.
Small data instead of big data: The analytics runs on a small subset of data without much impact from data size. Analytics on random samples is an example.
Resource needs are minimal: Even at orders of magnitude more data, you are unlikely to need more than what is available with a standard server. In that case, keep it simple.

Assuming change is constant

The world of IoT analytics moves quickly. The analytics you create today will change many times over as you get feedback on results and adapt to changing business conditions. Your analytics processes will need to change. Assume this will happen continuously and design for change. That brings us to the concept of continuous delivery.

Continuous delivery is a concept from software development. It automates the release of code into production. The idea is to make change a regular process. Bring this concept into your analytics by keeping a set of simultaneous copies that you use to progress through three stages:

Development: Keep a copy of your analytics for improving and trying out new things.
Test: When ready, merge your improvements into this copy, where the functionality stays the same but it is repeatedly tested. The testing ensures it is working as intended. Keeping a separate copy for test allows development to continue on other functionality.
Master: This is the copy that goes into production. When you merge things from test into the Master copy, it is the same as putting it into live use.

Cloud providers often have a continuous delivery service that can make this process simpler. For any software developer readers out there, this is a simplification of the Git Flow method, which is a little outside the scope of this article. If the author can drop a suggestion, it is worth some additional research to learn Git Flow and apply it to your analytics development in the cloud.

Leverage managed services

Cloud infrastructure providers, like AWS and Microsoft Azure, offer services for things like message queues, big data storage, and machine learning processing. The services handle the underlying resource needs, like server and storage provisioning, and also network requirements. You do not have to worry about how this happens under the hood, and it scales as big as you need it. They also manage global distribution of services to ensure low latency. The following image shows the AWS regional data center locations combined with the underwater internet cabling.

AWS Regional Data Center Locations and Underwater Internet Cables. Source: http://turnkeylinux.github.io/aws-datacenters/

This reduces the number of things you have to worry about for analytics. It allows you to focus more on the business application and less on the technology. That is a good thing and you should take advantage of it. An example of a managed service is Amazon Simple Queue Service (SQS). SQS is a message queue where the underlying server, storage, and compute needs are managed automatically by AWS systems. You only need to set it up and configure it, which takes just a few minutes.

Summary

In this article, we reviewed what is meant by elastic analytics and the advantages of using cloud infrastructure for IoT analytics.
Designing for scale was discussed, along with distributed computing. The two main cloud providers were introduced: Amazon Web Services and Microsoft Azure. We also reviewed a purpose-built software platform, ThingWorx, made for IoT devices, communications, and analysis.

Resources for Article:

Further resources on this subject:
Building Voice Technology on IoT Projects [article]
IoT and Decision Science [article]
Introducing IoT with Particle's Photon and Electron [article]

Backpropagation Algorithm

Packt
08 Jun 2017
11 min read
In this article by Gianmario Spacagna, Daniel Slater, Phuong Vo.T.H, and Valentino Zocca, the authors of the book Python Deep Learning, we will learn the Backpropagation algorithm, as it is one of the most important topics for multi-layer feed-forward neural networks.

(For more resources related to this topic, see here.)

The Backpropagation algorithm

We have seen how neural networks can map inputs onto determined outputs, depending on fixed weights. Once the architecture of the neural network has been defined (feed-forward, number of hidden layers, number of neurons per layer), and once the activity function for each neuron has been chosen, we will need to set the weights that in turn will define the internal states for each neuron in the network. We will see how to do that for a 1-layer network and then how to extend it to a deep feed-forward network. For a deep neural network the algorithm to set the weights is called the Backpropagation algorithm, and we will discuss and explain this algorithm for most of this section, as it is one of the most important topics for multi-layer feed-forward neural networks. First, however, we will quickly discuss this for 1-layer neural networks.

The general concept we need to understand is the following: every neural network is an approximation of a function, therefore each neural network will not be equal to the desired function; instead it will differ by some value. This value is called the error, and the aim is to minimize this error. Since the error is a function of the weights in the neural network, we want to minimize the error with respect to the weights. The error function is a function of many weights; it is therefore a function of many variables. Mathematically, the set of points where this function is zero therefore represents a hypersurface, and to find a minimum on this surface we want to pick a point and then follow a curve in the direction of the minimum.

Linear regression

To simplify things we are going to introduce matrix notation. Let x be the input; we can think of x as a vector. In the case of linear regression we are going to consider a single output neuron y; the set of weights w is therefore a vector of the same dimension as the dimension of x. The activation value is then defined as the inner product <x, w>. Let's say that for each input value x we want to output a target value t, while for each x the neural network will output a value y defined by the activity function chosen; in this case the absolute value of the difference (y-t) represents the difference between the predicted value and the actual value for the specific input example x. If we have m input values xi, each of them will have a target value ti. In this case we calculate the error using the mean squared error J(w) = (1/m) Σi (yi - ti)^2, where each yi is a function of w. The error is therefore a function of w and it is usually denoted with J(w). As mentioned above, this represents a hypersurface of dimension equal to the dimension of w (we are implicitly also considering the bias), and for each wj we need to find a curve that will lead towards the minimum of the surface.
The direction in which a curve increases in a certain direction is given by its derivative with respect to that direction, in this case by the partial derivative ∂J/∂wj, and in order to move towards the minimum we need to move in the opposite direction to ∂J/∂wj for each wj. Let's calculate:

∂J/∂wj = (2/m) Σi (yi - ti) ∂yi/∂wj

If yi = <xi, w>, then ∂yi/∂wj = xij (the jth coordinate of the example xi) and therefore:

∂J/∂wj = (2/m) Σi (yi - ti) xij

The notation can sometimes be confusing, especially the first time one sees it. The input is given by vectors xi, where the superscript indicates the ith example. Since x and w are vectors, the subscript indicates the jth coordinate of the vector. yi then represents the output of the neural network given the input xi, while ti represents the target, that is, the desired value corresponding to the input xi.

In order to move towards the minimum, we need to move each weight in the direction of its derivative by a small amount λ, called the learning rate, typically much smaller than 1 (say 0.1 or smaller). We can therefore drop the 2 in the derivative and incorporate it in the learning rate, to get the update rule given by:

wj → wj - λ (1/m) Σi (yi - ti) xij

or, more in general, we can write the update rule in matrix form as:

w → w - λ∇J(w)

where ∇ represents the vector of partial derivatives. This process is what is often called gradient descent. One last note: the update can be done after having calculated all the input vectors; however, in some cases, the weights could be updated after each example or after a defined preset number of examples.

Logistic regression

In logistic regression, the output is not continuous; rather, it is defined as a set of classes. In this case, the activation function is not going to be the identity function like before; rather, we are going to use the logistic sigmoid function. The logistic sigmoid function, as we have seen before, outputs a real value in (0,1), and therefore it can be interpreted as a probability function, and that is why it can work so well in a 2-class classification problem. In this case, the target can be one of two classes, and the output represents the probability that it be one of those two classes (say t=1). Let's denote with σ(a), with a the activation value, the logistic sigmoid function; therefore, for each example x, the probability that the output be the class y, given the weights w, is:

P(t=1|x,w) = σ(a),  P(t=0|x,w) = 1 - σ(a)

We can write that equation more succinctly as:

P(t|x,w) = σ(a)^t (1 - σ(a))^(1-t)

and, since for each sample xi the probabilities are independent, we have that the global probability is:

P(t|x,w) = Πi σ(ai)^(ti) (1 - σ(ai))^(1-ti)

where ai is the activation value for the example xi. If we take the natural log of the above equation (to turn products into sums), we get:

log(P(t|x,w)) = Σi [ti log(σ(ai)) + (1 - ti) log(1 - σ(ai))]

The objective is now to maximize this log to obtain the highest probability of predicting the correct results. Usually, this is obtained, as in the previous case, by using gradient descent to minimize the cost function defined by J(w) = -log(P(t|x,w)). As before, we calculate the derivative of the cost function with respect to the weights wj to obtain:

∂J/∂wj = Σi (σ(ai) - ti) xij

In general, in the case of a multi-class output t, with t a vector (t1, …, tn), we can generalize this equation using J(w) = -log(P(y|x,w)) = -Σi,j tij log(σ(aij)), which brings us to the update equation for the weights. This is similar to the update rule we have seen for linear regression.

Backpropagation

In the case of 1-layer networks, weight adjustment was easy, as we could use linear or logistic regression and adjust the weights simultaneously to get a smaller error (minimizing the cost function).
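To make the 1-layer case concrete before moving on, here is a small NumPy sketch of the gradient descent update derived above. It is a minimal illustration, not the book's code; the toy data and learning rate are made up.

import numpy as np

# Toy data: m = 4 examples with 2 input features each (made-up values).
X = np.array([[0.5, 1.2], [1.5, 0.3], [0.2, 0.8], [1.0, 1.0]])
t = np.array([1.0, 0.8, 0.4, 1.1])   # target values
w = np.zeros(X.shape[1])             # one weight per input coordinate
lr = 0.1                             # learning rate (lambda in the text)

for epoch in range(100):
    y = X.dot(w)                     # activation: inner product <x, w> for each example
    grad = X.T.dot(y - t) / len(t)   # dJ/dw for the mean squared error
    w -= lr * grad                   # update rule: w -> w - lambda * grad J(w)
                                     # (the factor of 2 is absorbed into the learning rate)

print(w)

Running the loop for more epochs, or updating after each example instead of after the whole batch, changes only how quickly the weights converge, not the update rule itself.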
For multi-layer neural networks we can use a similar argument for the weights used to connect the last hidden layer to the output layer, as we know what we would like the output layer to be, but we cannot do the same for the hidden layers, as, a priori, we do not know what the values for the neurons in the hidden layers ought to be. What we do, instead, is calculate the error in the last hidden layer and estimate what it would be in the previous layer, propagating the error back from the last to the first layer, hence the name Backpropagation. Backpropagation is one of the most difficult algorithms to understand at first, but all that is needed is some knowledge of basic differential calculus and the chain rule.

Let's introduce some notation first. We denote with J the cost (error), and with y the activity function that is defined on the activation value a (for example, y could be the logistic sigmoid), which is a function of the weights w and the input x. Let's also define wi,j as the weight between the ith input value and the jth output. Here we define input and output more generically than for a 1-layer network: if wi,j connects a pair of successive layers in a feed-forward network, we denote as input the neurons on the first of the two successive layers, and as output the neurons on the second of the two successive layers. In order not to make the notation too heavy, and have to denote on which layer each neuron is, we assume that the ith input yi is always in the layer preceding the layer containing the jth output yj. The letter y is used to denote both an input and the activity function, and we can easily infer which one we mean from the context. We also use subscripts i and j, where we always have the element with subscript i belonging to the layer preceding the layer containing the element with subscript j.

In this example, layer 1 represents the input, and layer 2 the output

Using this notation, and the chain rule for derivatives, for the last layer of our neural network we can write:

∂J/∂wi,j = (∂J/∂yj)(∂yj/∂aj)(∂aj/∂wi,j)

Since we know that ∂aj/∂wi,j = yi, we have:

∂J/∂wi,j = (∂J/∂yj)(∂yj/∂aj) yi

If y is the logistic sigmoid defined above, we get the same result we have already calculated at the end of the previous section, since we know the cost function and we can calculate all derivatives. For the previous layers the same formula holds:

∂J/∂wi,j = (∂J/∂yj)(∂yj/∂aj) yi

Since we know that ∂aj/∂wi,j = yi, and we know that ∂yj/∂aj is the derivative of the activity function that we can calculate, all we need to calculate is the derivative ∂J/∂yj. Let's notice that this is the derivative of the error with respect to the activity function in the second layer, and, if we can calculate this derivative for the last layer, and have a formula that allows us to calculate the derivative for one layer assuming we can calculate the derivative for the next, we can calculate all the derivatives starting from the last layer and move backwards.

Let us notice that, as we defined the yj, they are the activation values for the neurons in the second layer, but they are also the activity functions, therefore functions of the activation values in the first layer. Therefore, applying the chain rule, we have:

∂J/∂yi = Σj (∂J/∂yj)(∂yj/∂aj)(∂aj/∂yi) = Σj (∂J/∂yj)(∂yj/∂aj) wi,j

and once again we can calculate both ∂yj/∂aj and ∂aj/∂yi = wi,j, so once we know ∂J/∂yj we can calculate ∂J/∂yi, and since we can calculate ∂J/∂yj for the last layer, we can move backward and calculate ∂J/∂yi for any layer and therefore ∂J/∂wi,j for any layer.
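To see the backward recursion in code before summarizing it, here is a small NumPy sketch of one backpropagation step for a network with a single hidden layer and logistic sigmoid activity functions. It is a minimal illustration with made-up sizes and a squared-error cost, not the book's implementation.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Made-up sizes: 3 inputs, 4 hidden neurons, 1 output; one example x with target t.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
t = np.array([1.0])
W1 = rng.normal(size=(3, 4))   # weights w(i,j): input layer -> hidden layer
W2 = rng.normal(size=(4, 1))   # weights w(j,k): hidden layer -> output layer
lr = 0.1

# Forward pass: activation values a and activity functions y for each layer.
a1 = x.dot(W1);  y1 = sigmoid(a1)
a2 = y1.dot(W2); y2 = sigmoid(a2)

# Backward pass, assuming the cost J = 0.5 * (y2 - t)^2.
# delta = dJ/da, computed for the last layer first, then propagated backwards.
delta2 = (y2 - t) * y2 * (1 - y2)           # error at the output neurons
delta1 = delta2.dot(W2.T) * y1 * (1 - y1)   # error at the hidden neurons via the recursion

# Weight updates: w(i,j) -> w(i,j) - lr * delta_j * y_i
W2 -= lr * np.outer(y1, delta2)
W1 -= lr * np.outer(x, delta1)

The two delta lines are exactly the backward recursion described above: the output-layer error is computed directly from the cost, and each earlier layer's error is obtained from the following layer's error, the connecting weights, and the derivative of the activity function.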
Summarizing, if we have a sequence of layers where yi feeds into yj, which in turn feeds into yk, we then have these two fundamental equations, where the summation in the second equation should read as the sum over all the outgoing connections from yj to any neuron yk in the successive layer:

∂J/∂wi,j = (∂J/∂yj)(∂yj/∂aj) yi

∂J/∂yj = Σk (∂J/∂yk)(∂yk/∂ak) wj,k

By using these two equations we can calculate the derivatives for the cost with respect to each layer. If we set δj = (∂J/∂yj)(∂yj/∂aj), δj represents the variation of the cost with respect to the activation value, and we can think of δj as the error at the neuron yj. We can then rewrite the first equation as:

∂J/∂wi,j = δj yi

which implies that δj = ∂J/∂aj. These two equations give an alternate way of seeing Backpropagation, as the variation of the cost with respect to the activation value, and provide a formula to calculate this variation for any layer once we know the variation for the following layer:

δj = (∂yj/∂aj) Σk δk wj,k

We can also combine these equations and show that:

∂J/∂wi,j = yi (∂yj/∂aj) Σk δk wj,k

The Backpropagation algorithm for updating the weights is then given on each layer by:

wi,j → wi,j - λ δj yi

In the last section we will provide a code example that will help understand and apply these concepts and formulas.

Summary

At the end of this article we learnt what comes after defining a neural network's architecture: the use of the Backpropagation algorithm. We saw how we can stack many layers to create and use deep feed-forward neural networks, how a neural network can have many layers, and why inner (hidden) layers are important.

Resources for Article:

Further resources on this subject:
Basics of Jupyter Notebook and Python [article]
Jupyter and Python Scripting [article]
Getting Started with Python Packages [article]

Ionic Components

Packt
08 Jun 2017
16 min read
In this article by Gaurav Saini, the author of the book Hybrid Mobile Development with Ionic, we will learn the following topics:

Building vPlanet Commerce
Ionic 2 components

(For more resources related to this topic, see here.)

Building vPlanet Commerce

The vPlanet Commerce app is an e-commerce app which will demonstrate various Ionic components integrated inside the application, and also some third party components built by the community. Let's start by creating the application from scratch using the sidemenu template:

You now have the basic application ready based on the sidemenu template. The next immediate step is to take reference from the ionic-conference-app for building the initial components of the application, such as the walkthrough. Let's create a walkthrough component via the CLI generate command:

$ ionic g page walkthrough

As we get started with the walkthrough component, we need to add logic to show the walkthrough component only the first time the user installs the application:

// src/app/app.component.ts // Check if the user has already seen the walkthrough this.storage.get('hasSeenWalkThrough').then((hasSeenWalkThrough) => { if (hasSeenWalkThrough) { this.rootPage = HomePage; } else { this.rootPage = WalkThroughPage; } this.platformReady(); })

So, we store a boolean value while checking whether the user has seen the walkthrough for the first time or not. Another important thing we did was create Events for login and logout, so that when the user logs into the application we can update the menu items accordingly, or do any other data manipulation that needs to be done:

// src/app/app.component.ts export interface PageInterface { title: string; component: any; icon: string; logsOut?: boolean; index?: number; tabComponent?: any; } export class vPlanetApp { loggedInPages: PageInterface[] = [ { title: 'account', component: AccountPage, icon: 'person' }, { title: 'logout', component: HomePage, icon: 'log-out', logsOut: true } ]; loggedOutPages: PageInterface[] = [ { title: 'login', component: LoginPage, icon: 'log-in' }, { title: 'signup', component: SignupPage, icon: 'person-add' } ]; listenToLoginEvents() { this.events.subscribe('user:login', () => { this.enableMenu(true); }); this.events.subscribe('user:logout', () => { this.enableMenu(false); }); } enableMenu(loggedIn: boolean) { this.menu.enable(loggedIn, 'loggedInMenu'); this.menu.enable(!loggedIn, 'loggedOutMenu'); } // For changing color of Active Menu isActive(page: PageInterface) { if (this.nav.getActive() && this.nav.getActive().component === page.component) { return 'primary'; } return; } }

Next, inside our app.html we have multiple <ion-menu> items, depending upon whether the user is logged in or logged out:

// src/app/app.html <!-- logged out menu --> <ion-menu id="loggedOutMenu" [content]="content"> <ion-header> <ion-toolbar> <ion-title>{{'menu' | translate}}</ion-title> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <ion-list> <ion-list-header> {{'navigate' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of appPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> <ion-list> <ion-list-header> {{'account' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of loggedOutPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> <button ion-item menuClose *ngFor="let p of otherPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> </ion-content> </ion-menu> <!-- logged in menu --> <ion-menu id="loggedInMenu" [content]="content"> <ion-header> <ion-toolbar> <ion-title>Menu</ion-title> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <ion-list> <ion-list-header> {{'navigate' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of appPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> <ion-list> <ion-list-header> {{'account' | translate}} </ion-list-header> <button ion-item menuClose *ngFor="let p of loggedInPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> <button ion-item menuClose *ngFor="let p of otherPages" (click)="openPage(p)"> <ion-icon item-left [name]="p.icon" [color]="isActive(p)"></ion-icon> {{ p.title | translate }} </button> </ion-list> </ion-content> </ion-menu>

As our app starts from app.html, we declare the rootPage here:

<!-- main navigation --> <ion-nav [root]="rootPage" #content swipeBackEnabled="false"></ion-nav>

Let's now look into which pages, services, and filters we will have inside our app. Rather than mentioning them as a bullet list, the best way to see this is to go through the app.module.ts file, which has all the declarations, imports, entryComponents and providers.

// src/app/app.modules.ts import { NgModule, ErrorHandler } from '@angular/core'; import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular'; import { TranslateModule, TranslateLoader, TranslateStaticLoader } from 'ng2-translate/ng2-translate'; import { Http } from '@angular/http'; import { CloudSettings, CloudModule } from '@ionic/cloud-angular'; import { Storage } from '@ionic/storage'; import { vPlanetApp } from './app.component'; import { AboutPage } from '../pages/about/about'; import { PopoverPage } from '../pages/popover/popover'; import { AccountPage } from '../pages/account/account'; import { LoginPage } from '../pages/login/login'; import { SignupPage } from '../pages/signup/signup'; import { WalkThroughPage } from '../pages/walkthrough/walkthrough'; import { HomePage } from '../pages/home/home'; import { CategoriesPage } from '../pages/categories/categories'; import { ProductsPage } from '../pages/products/products'; import { ProductDetailPage } from '../pages/product-detail/product-detail'; import { WishlistPage } from '../pages/wishlist/wishlist'; import { ShowcartPage } from '../pages/showcart/showcart'; import { CheckoutPage } from '../pages/checkout/checkout'; import { ProductsFilterPage } from '../pages/products-filter/products-filter'; import { SupportPage } from '../pages/support/support'; import { SettingsPage } from '../pages/settings/settings'; import { SearchPage } from '../pages/search/search'; import { UserService } from '../providers/user-service'; import { DataService } from '../providers/data-service'; import { OrdinalPipe } from '../filters/ordinal'; // 3rd party modules import { Ionic2RatingModule } from 'ionic2-rating'; export function createTranslateLoader(http: Http) { return new TranslateStaticLoader(http, './assets/i18n', '.json'); } // Configure database priority export function provideStorage() { return new Storage(['sqlite', 'indexeddb', 'localstorage'], { name: 'vplanet' }) } const cloudSettings: CloudSettings = { 'core': { 'app_id': 'f8fec798' } }; @NgModule({ declarations:
[ vPlanetApp, AboutPage, AccountPage, LoginPage, PopoverPage, SignupPage, WalkThroughPage, HomePage, CategoriesPage, ProductsPage, ProductsFilterPage, ProductDetailPage, SearchPage, WishlistPage, ShowcartPage, CheckoutPage, SettingsPage, SupportPage, OrdinalPipe, ], imports: [ IonicModule.forRoot(vPlanetApp), Ionic2RatingModule, TranslateModule.forRoot({ provide: TranslateLoader, useFactory: createTranslateLoader, deps: [Http] }), CloudModule.forRoot(cloudSettings) ], bootstrap: [IonicApp], entryComponents: [ vPlanetApp, AboutPage, AccountPage, LoginPage, PopoverPage, SignupPage, WalkThroughPage, HomePage, CategoriesPage, ProductsPage, ProductsFilterPage, ProductDetailPage, SearchPage, WishlistPage, ShowcartPage, CheckoutPage, SettingsPage, SupportPage ], providers: [ {provide: ErrorHandler, useClass: IonicErrorHandler}, { provide: Storage, useFactory: provideStorage }, UserService, DataService ] }) export class AppModule {} Ionic components There are many Ionic JavaScript components which we can effectively use while building our application. What's best is to look around for features we will be needing in our application. Let’s get started with Home page of our e-commerce application which will be having a image slider having banners on it. Slides Slides component is multi-section container which can be used in multiple scenarios same astutorial view or banner slider. <ion-slides> component have multiple <ion-slide> elements which can be dragged or swipped left/right. Slides have multiple configuration options available which can be passed in the ion-slides such as autoplay, pager, direction: vertical/horizontal, initialSlide and speed. Using slides is really simple as we just have to include it inside our home.html, no dependency is required for this to be included in the home.ts file: <ion-slides pager #adSlider (ionSlideDidChange)="logLenth()" style="height: 250px"> <ion-slide *ngFor="let banner of banners"> <img [src]="banner"> </ion-slide> </ion-slides> // Defining banners image path export class HomePage { products: any; banners: String[]; constructor() { this.banners = [ 'assets/img/banner-1.webp', 'assets/img/banner-2.webp', 'assets/img/banner-3.webp' ] } } Lists Lists are one of the most used components in many applications. Inside lists we can display rows of information. We will be using lists multiple times inside our application such ason categories page where we are showing multiple sub-categories: // src/pages/categories/categories.html <ion-content class="categories"> <ion-list-header *ngIf="!categoryList">Fetching Categories ....</ion-list-header> <ion-list *ngFor="let cat of categoryList"> <ion-list-header>{{cat.name}}</ion-list-header> <ion-item *ngFor="let subCat of cat.child"> <ion-avatar item-left> <img [src]="subCat.image"> </ion-avatar> <h2>{{subCat.name}}</h2> <p>{{subCat.description}}</p> <button ion-button clear item-right (click)="goToProducts(subCat.id)">View</button> </ion-item> </ion-list> </ion-content> Loading and toast Loading component can be used to indicate some activity while blocking any user interactions. One of the most common cases of using loading component is HTTP/ calls to the server, as we know  it takes time to fetch data from server, till then for good user experience we can show some content showing Loading .. or Login wait .. for login pages. Toast is a small pop-up which provides feedback, usually used when some action  is performed by the user. 
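Before the fuller login example that follows, here is a minimal standalone sketch of a loader and a toast working together. The page, the fetchProducts() helper and the messages are hypothetical and exist only for illustration; the LoadingController and ToastController calls are the same ones used in the login page below.

import { Component } from '@angular/core';
import { LoadingController, ToastController } from 'ionic-angular';

@Component({
  selector: 'page-demo',
  template: '<ion-content padding><button ion-button (click)="refresh()">Refresh</button></ion-content>'
})
export class DemoPage {
  constructor(public loadingCtrl: LoadingController,
              public toastCtrl: ToastController) { }

  refresh() {
    // Block the UI while the (hypothetical) work is running
    let loading = this.loadingCtrl.create({
      content: 'Loading products...'
    });
    loading.present();

    // fetchProducts() stands in for any async call, e.g. an HTTP request
    this.fetchProducts()
      .then(() => {
        loading.dismiss();
        this.showToast('Products refreshed');
      })
      .catch(() => {
        loading.dismiss();
        this.showToast('Something went wrong, please try again');
      });
  }

  showToast(message: string) {
    // A short feedback message that disappears on its own
    let toast = this.toastCtrl.create({
      message: message,
      duration: 1500,
      position: 'bottom'
    });
    toast.present();
  }

  // Placeholder async operation used only for this sketch
  private fetchProducts(): Promise<any> {
    return new Promise(resolve => setTimeout(resolve, 1000));
  }
}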
Ionic 2 now provides toast component as part of its library, previously we have to use native Cordova plugin for toasts which in either case can now be used also. Loading and toast component both have a method create. We have to provide options  while creating these components: // src/pages/login/login.ts import { Component } from '@angular/core'; import { NgForm } from '@angular/forms'; import { NavController, LoadingController, ToastController, Events } from 'ionic-angular'; import { SignupPage } from '../signup/signup'; import { HomePage } from '../home/home'; import { Auth, IDetailedError } from '@ionic/cloud-angular'; import { UserService } from '../../providers/user-service'; @Component({ selector: 'page-user', templateUrl: 'login.html' }) export class LoginPage { login: {email?: string, password?: string} = {}; submitted = false; constructor(public navCtrl: NavController, public loadingCtrl: LoadingController, public auth: Auth, public userService: UserService, public toastCtrl: ToastController, public events: Events) { } onLogin(form: NgForm) { this.submitted = true; if (form.valid) { // start Loader let loading = this.loadingCtrl.create({ content: "Login wait...", duration: 20 }); loading.present(); this.auth.login('basic', this.login).then((result) => { // user is now registered this.navCtrl.setRoot(HomePage); this.events.publish('user:login'); loading.dismiss(); this.showToast(undefined); }, (err: IDetailedError<string[]>) => { console.log(err); loading.dismiss(); this.showToast(err) }); } } showToast(response_message:any) { let toast = this.toastCtrl.create({ message: (response_message ? response_message : "Log In Successfully"), duration: 1500 }); toast.present(); } onSignup() { this.navCtrl.push(SignupPage); } } As, you can see from the previouscode creating a loader and toast is almost similar at code level. The options provided while creating are also similar, we have used loader here while login and toast after that to show the desired message. Setting duration option is good to use, as in case loader is dismissed or not handled properly in code then we will block the user for any further interactions on app. In HTTP calls to server we might get connection issues or failure cases, in that scenario it may end up blocking users. Tabs versussegments Tabs are easiest way to switch between views and organise content at higher level. On the other hand segment is a group of button and can be treated as a local  switch tabs inside a particular component mainly used as a filter. With tabs we can build quick access bar in the footer where we can place Menu options such as Home, Favorites, and Cart. This way we can have one click access to these pages or components. On the other hand we can use segments inside the Account component and divide the data displayed in three segments profile, orders and wallet: // src/pages/account/account.html <ion-header> <ion-navbar> <button menuToggle> <ion-icon name="menu"></ion-icon> </button> <ion-title>Account</ion-title> </ion-navbar> <ion-toolbar [color]="isAndroid ? 'primary' : 'light'" no-border-top> <ion-segment [(ngModel)]="account" [color]="isAndroid ? 
'light' : 'primary'"> <ion-segment-button value="profile"> Profile </ion-segment-button> <ion-segment-button value="orders"> Orders </ion-segment-button> <ion-segment-button value="wallet"> Wallet </ion-segment-button> </ion-segment> </ion-toolbar> </ion-header> <ion-content class="outer-content"> <div [ngSwitch]="account"> <div padding-top text-center *ngSwitchCase="'profile'" > <img src="http://www.gravatar.com/avatar?d=mm&s=140"> <h2>{{username}}</h2> <ion-list inset> <button ion-item (click)="updatePicture()">Update Picture</button> <button ion-item (click)="changePassword()">Change Password</button> <button ion-item (click)="logout()">Logout</button> </ion-list> </div> <div padding-top text-center *ngSwitchCase="'orders'" > // Order List data to be shown here </div> <div padding-top text-center *ngSwitchCase="'wallet'"> // Wallet statement and transaction here. </div> </div> </ion-content> This is how we define a segment in Ionic, we don’t need to define anything inside the typescript file for this component. On the other hand with tabs we have to assign a component for  each tab and also can access its methods via Tab instance. Just to mention,  we haven’t used tabs inside our e-commerce application as we are using side menu. One good example will be to look in ionic-conference-app (https://github.com/driftyco/ionic-conference-app) you will find sidemenu and tabs both in single application: / // We currently don’t have Tabs component inside our e-commerce application // Below is sample code about how we can integrate it. <ion-tabs #showTabs tabsPlacement="top" tabsLayout="icon-top" color="primary"> <ion-tab [root]="Home"></ion-tab> <ion-tab [root]="Wishlist"></ion-tab> <ion-tab [root]="Cart"></ion-tab> </ion-tabs> import { HomePage } from '../pages/home/home'; import { WishlistPage } from '../pages/wishlist/wishlist'; import { ShowcartPage } from '../pages/showcart/showcart'; export class TabsPage { @ViewChild('showTabs') tabRef: Tabs; // this tells the tabs component which Pages // should be each tab's root Page Home = HomePage; Wishlist = WishlistPage; Cart = ShowcartPage; constructor() { } // We can access multiple methods via Tabs instance // select(TabOrIndex), previousTab(trimHistory), getByIndex(index) // Here we will console the currently selected Tab. ionViewDidEnter() { console.log(this.tabRef.getSelected()); } } Properties can be checked in the documentation (https://ionicframework.com/docs/v2/api/components/tabs/Tabs/) as, there are many properties available for tabs, like mode, color, tabsPlacement and tabsLayout. Similarly we can configure some tabs properties at Config level also, you will find here what all properties you can configure globally or for specific platform. (https://ionicframework.com/docs/v2/api/config/Config/). Alerts Alerts are the components provided in Ionic for showing trigger alert, confirm, prompts or some specific actions. AlertController can be imported from ionic-angular which allow us to programmatically create and show alerts inside the application. One thing to note here is these are JavaScript pop-up not the native platform pop-up. There is a Cordova plugin cordova-plugin-dialogs (https://ionicframework.com/docs/v2/native/dialogs/) which you can use if native dialog UI elements are required. 
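Before looking at the radio alert used for sorting products below, here is a minimal sketch of a basic confirmation alert built with AlertController. The page, the removeFromCart() handler and the wording are hypothetical and only illustrate the pattern.

import { Component } from '@angular/core';
import { AlertController } from 'ionic-angular';

@Component({
  selector: 'page-cart-demo',
  template: '<ion-content padding><button ion-button (click)="confirmRemove()">Remove item</button></ion-content>'
})
export class CartDemoPage {
  constructor(public alertCtrl: AlertController) { }

  confirmRemove() {
    let alert = this.alertCtrl.create({
      title: 'Remove item',
      message: 'Do you want to remove this item from your cart?',
      buttons: [
        {
          text: 'Cancel',
          role: 'cancel'   // dismisses the alert without running a handler
        },
        {
          text: 'Remove',
          handler: () => {
            // Hypothetical handler: update the cart here
            this.removeFromCart();
          }
        }
      ]
    });
    alert.present();
  }

  private removeFromCart() {
    console.log('Item removed (sketch only)');
  }
}

The role: 'cancel' button simply dismisses the alert, which is usually what you want for confirmations of destructive actions.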
Currently five types of alerts we can show in Ionic app basic alert, prompt alert, confirmation alert, radio and checkbox alerts: // A radio alert inside src/pages/products/products.html for sorting products <ion-buttons> <button ion-button full clear (click)="sortBy()"> <ion-icon name="menu"></ion-icon>Sort </button> </ion-buttons> // onClick we call sortBy method // src/pages/products/products.ts import { NavController, PopoverController, ModalController, AlertController } from 'ionic-angular'; export class ProductsPage { constructor( public alertCtrl: AlertController ) { sortBy() { let alert = this.alertCtrl.create(); alert.setTitle('Sort Options'); alert.addInput({ type: 'radio', label: 'Relevance', value: 'relevance', checked: true }); alert.addInput({ type: 'radio', label: 'Popularity', value: 'popular' }); alert.addInput({ type: 'radio', label: 'Low to High', value: 'lth' }); alert.addInput({ type: 'radio', label: 'High to Low', value: 'htl' }); alert.addInput({ type: 'radio', label: 'Newest First', value: 'newest' }); alert.addButton('Cancel'); alert.addButton({ text: 'OK', handler: data => { console.log(data); // Here we can call server APIs with sorted data // using the data which user applied. } }); alert.present().then(() => { // Here we place any function that // need to be called as the alert in opened. }); } } Cancel and OK buttons. We have used this here for sorting the products according to relevance price or other sorting values. We can prepare custom alerts also, where we can mention multiple options. Same as in previous example we have five radio options, similarly we can even add a text input box for taking some inputs and submit it. Other than this, while creating alerts remember that there are alert, input and button options properties for all the alerts present in the AlertController component.(https://ionicframework.com/docs/v2/api/components/alert/AlertController/). Some alert options: title:// string: Title of the alert. subTitle:// string(optional): Sub-title of the popup. Message:// string: Message for the alert cssClass:// string: Custom CSS class name inputs:// array: Set of inputs for alert. Buttons:// array(optional): Array of buttons Cards and badges Cards are one of the important component used more often in mobile and web applications. The reason behind cards are so popular because its a great way to organize information and get the users access to quantity of information on smaller screens also. Cards are really flexible and responsive due to all these reasons they are adopted very quickly by developers and companies. We will also be using cards inside our application on home page itself for showing popular products. Let’s see what all different types of cards Ionic provides in its library: Basic cards Cards with header and Footer Cards lists Cards images Background cards Social and map cards Social and map cards are advanced cards, which is build with custom CSS. We can develop similar advance card also. // src/pages/home/home.html <ion-card> <img [src]="prdt.imageUrl"/> <ion-card-content> <ion-card-title no-padding> {{prdt.productName}} </ion-card-title> <ion-row no-padding class="center"> <ion-col> <b>{{prdt.price | currency }} &nbsp; </b><span class="dis count">{{prdt.listPrice | currency}}</span> </ion-col> </ion-row> </ion-card-content> </ion-card> We have used here image card with a image on top and below we have favorite and view button icons. Similarly, we can use different types of cards where ever its required. 
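The card template above binds to a products array on the home page. The real data comes from the backend, so the following is only an assumed sketch of the shape each product object might take for the bindings used in the template (productName, price, listPrice, imageUrl); the field values are made up.

// src/pages/home/home.ts (sketch) -- hypothetical product data backing the cards
export interface Product {
  productName: string;
  price: number;      // current selling price, shown in bold
  listPrice: number;  // original price, shown as the discounted-from value
  imageUrl: string;
}

export const sampleProducts: Product[] = [
  {
    productName: 'Wireless Headphones',
    price: 59.99,
    listPrice: 79.99,
    imageUrl: 'assets/img/products/headphones.webp'
  },
  {
    productName: 'Smart Watch',
    price: 129.00,
    listPrice: 149.00,
    imageUrl: 'assets/img/products/watch.webp'
  }
];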
We can also customize our cards and mix two types of card by using their specific CSS classes and elements.
Badges are small components used to show a small piece of information, for example the number of items in the cart above the cart icon. We have used them in our e-commerce application for showing product ratings.
<ion-badge width="25">4.1</ion-badge>
Summary
In this article, we have learned about building the vPlanet Commerce app and the Ionic components used inside it.
Resources for Article:
Further resources on this subject: Lync 2013 Hybrid and Lync Online [article] Optimizing JavaScript for iOS Hybrid Apps [article] Creating Mobile Dashboards [article]

Erasure coding for cold storage
Packt
07 Jun 2017
20 min read
In this article by Nick Frisk, author of the book Mastering Ceph, we will get acquainted with erasure coding. Ceph's default replication level provides excellent protection against data loss by storing three copies of your data on different OSDs. The chance of losing all three disks that contain the same objects within the period it takes Ceph to rebuild from a failed disk is extremely small. However, storing three copies of data vastly increases both the purchase cost of the hardware and the associated operational costs, such as power and cooling. Furthermore, storing copies also means that for every client write, the backend storage must write three times the amount of data. In some scenarios, either of these drawbacks may mean that Ceph is not a viable option. Erasure codes are designed to offer a solution. Much like RAID 5 and 6 offer increased usable storage capacity over RAID 1, erasure coding allows Ceph to provide more usable storage from the same raw capacity. However, also like the parity-based RAID levels, erasure coding brings its own set of disadvantages. (For more resources related to this topic, see here.)
In this article you will learn:
What erasure coding is and how it works
Details around Ceph's implementation of erasure coding
How to create and tune an erasure coded RADOS pool
A look into the future features of erasure coding with the Ceph Kraken release
What is erasure coding
Erasure coding allows Ceph to achieve either greater usable storage capacity or increased resilience to disk failure for the same number of disks, versus the standard replica method. Erasure coding achieves this by splitting up the object into a number of parts, calculating a type of cyclic redundancy check, the erasure code, and then storing the results in one or more extra parts. Each part is then stored on a separate OSD. These parts are referred to as k and m chunks, where k refers to the number of data shards and m refers to the number of erasure code shards. As in RAID, these can often be expressed in the form k+m, for example 4+2. In the event of a failure of an OSD that contains an object's erasure code shard, data is read from the remaining OSDs that store the data shards with no impact. However, in the event of a failure of an OSD that contains an object's data shard, Ceph can use the erasure codes to mathematically recreate the data from a combination of the remaining data and erasure code shards.
k+m
The more erasure code shards you have, the more OSD failures you can tolerate and still successfully read data. Likewise, the ratio of k to m shards each object is split into has a direct effect on the percentage of raw storage that is required for each object. A 3+1 configuration will give you 75% usable capacity but only allows for a single OSD failure, and so would not be recommended. In comparison, a three-way replica pool only gives you 33% usable capacity. A 4+2 configuration gives you 66% usable capacity and allows for two OSD failures; this is probably a good configuration for most people to use. At the other end of the scale, 18+2 would give you 90% usable capacity and still allows for two OSD failures. On the surface this sounds like an ideal option, but the greater total number of shards comes at a cost: a higher total number of shards has a negative impact on performance and also increases CPU demand.
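To make the percentages quoted above concrete, the usable fraction of raw capacity for a k+m profile is simply k divided by k+m, and every client write results in (k+m)/k as much raw data being written. A small, purely illustrative sketch (TypeScript, not part of Ceph):

// Usable capacity for a k+m erasure profile is k / (k + m);
// each client write results in (k + m) / k of raw data being written.
function usablePercent(k: number, m: number): number {
  return Math.floor((k / (k + m)) * 100);
}

console.log(usablePercent(3, 1));   // 75 -> tolerates 1 OSD failure
console.log(usablePercent(4, 2));   // 66 -> tolerates 2 OSD failures
console.log(usablePercent(18, 2));  // 90 -> tolerates 2 OSD failures
// For comparison, a 3-way replica pool: 1 / 3, i.e. 33% usable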
The same 4MB object that would be stored as a whole single object in a replicated pool, is now split into 20 x 200KB chunks, which have to be tracked and written to 20 different OSD's. Spinning disks will exhibit faster bandwidth, measured in MB/s with larger IO sizes, but bandwidth drastically tails off at smaller IO sizes. These smaller shards will generate a large amount of small IO and cause additional load on some clusters. Also its important not to forget that these shards need to be spread across different hosts according to the CRUSH map rules, no shard belonging to the same object can be stored on the same host as another shard from the same object. Some clusters may not have a sufficient number hosts to satisfy this requirement. Reading back from these high chunk pools is also a problem. Unlike in a replica pool where Ceph can read just the requested data from any offset in an object, in an Erasure pool, all shards from all OSD's have to be read before the read request can be satisfied. In the 18+2 example this can massively amplify the amount of required disk read ops and average latency will increase as a result. This behavior is a side effect which tends to only cause a performance impact with pools that use large number of shards. A 4+2 configuration in some instances will get a performance gain compared to a replica pool, from the result of splitting an object into shards.As the data is effectively striped over a number of OSD's, each OSD is having to write less data and there is no secondary and tertiary replica's to write. How does erasure coding work in Ceph As with Replication, Ceph has a concept of a primary OSD, which also exists when using erasure coded pools. The primary OSD has the responsibility of communicating with the client, calculating the erasure shards and sending them out to the remaining OSD's in the Placement Group (PG) set. This is illustrated in the diagram below: If an OSD in the set is down, the primary OSD, can use the remaining data and erasure shards to reconstruct the data, before sending it back to the client. During read operations the primary OSD requests all OSD's in the PG set to send their shards. The primary OSD uses data from the data shards to construct the requested data, the erasure shards are discarded. There is a fast read option that can be enabled on erasure pools, which allows the primary OSD to reconstruct the data from erasure shards if they return quicker than data shards. This can help to lower average latency at the cost of slightly higher CPU usage. The diagram below shows how Ceph reads from an erasure coded pool: The next diagram shows how Ceph reads from an erasure pool, when one of the data shards is unavailable. Data is reconstructed by reversing the erasure algorithm using the remaining data and erasure shards. Algorithms and profiles There are a number of different Erasure plugins you can use to create your erasure coded pool. Jerasure The default erasure plugin in Ceph is the Jerasure plugin, which is a highly optimized open source erasure coding library. The library has a number of different techniques that can be used to calculate the erasure codes. The default is Reed Solomon and provides good performance on modern processors which can accelerate the instructions that the technique uses. Cauchy is another technique in the library, it is a good alternative to Reed Solomon and tends to perform slightly better. 
As always benchmarks should be conducted before storing any production data on an erasure coded pool to identify which technique best suits your workload. There are also a number of other techniques that can be used, which all have a fixed number of m shards. If you are intending on only having 2 m shards, then they can be a good candidate, as there fixed size means that optimization's are possible lending to increased performance. In general the jerasure profile should be prefer in most cases unless another profile has a major advantage, as it offers well balanced performance and is well tested. ISA The ISA library is designed to work with Intel processors and offers enhanced performance. It too supports both Reed Solomon and Cauchy techniques. LRC One of the disadvantages of using erasure coding in a distributed storage system is that recovery can be very intensive on networking between hosts. As each shard is stored on a separate host, recovery operations require multiple hosts to participate in the process. When the crush topology spans multiple racks, this can put pressure on the inter rack networking links. The LRC erasure plugin, which stands for Local Recovery Codes, adds an additional parity shard which is local to each OSD node. This allows recovery operations to remain local to the node where a OSD has failed and remove the need for nodes to receive data from all other remaining shard holding nodes. However the addition of these local recovery codes does impact the amount of usable storage for a given number of disks. In the event of multiple disk failures, the LRC plugin has to resort to using global recovery as would happen with the jerasure plugin. SHingled Erasure Coding The SHingled Erasure Coding (SHEC) profile is designed with similar goals to the LRC plugin, in that it reduces the networking requirements during recovery. However instead of creating extra parity shards on each node, SHEC shingles the shards across OSD's in an overlapping fashion. The shingle part of the plugin name represents the way the data distribution resembles shingled tiles on a roof of a house. By overlapping the parity shards across OSD's, the SHEC plugin reduces recovery resource requirements for both single and multiple disk failures. Where can I use erasure coding Since the Firefly release of Ceph in 2014, there has been the ability to create a RADOS pool using erasure coding. There is one major thing that you should be aware of, the erasure coding support in RADOS does not allow an object to be partially updated. You can write to an object in an erasure pool, read it back and even overwrite it whole, but you cannot update a partial section of it. This means that erasure coded pools can't be used for RBD and CephFS workloads and is limited to providing pure object storage either via the Rados Gateway or applications written to use librados. The solution at the time was to use the cache tiering ability which was released around the same time, to act as a layer above an erasure coded pools that RBD could be used. In theory this was a great idea, in practice, performance was extremely poor. Every time an object was required to be written to, the whole object first had to be promoted into the cache tier. This act of promotion probably also meant that another object somewhere in the cache pool was evicted. Finally the object now in the cache tier could be written to. 
This whole process of constantly reading and writing data between the two pools meant that performance was unacceptable unless a very high percentage of the data was idle. During the development cycle of the Kraken release, an initial implementation of support for direct overwrites on an erasure coded pool was introduced. As of the final Kraken release, support is marked as experimental and is expected to be marked as stable in the following release. Testing of this feature will be covered later in this article.
Creating an erasure coded pool
Let's bring our test cluster up again and switch into SU mode in Linux, so we don't have to keep prepending sudo to the front of our commands. Erasure coded pools are controlled by the use of erasure profiles; these control how many shards each object is broken up into, including the split between data and erasure shards. The profiles also include configuration to determine what erasure code plugin is used to calculate the hashes. The following plugins are available to use: <list of plugins>
To see a list of the erasure profiles, run:
# ceph osd erasure-code-profile ls
You can see there is a default profile in a fresh installation of Ceph. Let's see what configuration options it contains:
# ceph osd erasure-code-profile get default
The default specifies that it will use the jerasure plugin with the Reed-Solomon error correcting codes and will split objects into 2 data shards and 1 erasure shard. This is almost perfect for our test cluster; however, for the purpose of this exercise we will create a new profile:
# ceph osd erasure-code-profile set example_profile k=2 m=1 plugin=jerasure technique=reed_sol_van
# ceph osd erasure-code-profile ls
You can see our new example_profile has been created. Now let's create our erasure coded pool with this profile:
# ceph osd pool create ecpool 128 128 erasure example_profile
The above command instructs Ceph to create a new pool called ecpool with 128 PGs. It should be an erasure coded pool and should use the example_profile we previously created. Let's create an object with a small text string inside it and then prove the data has been stored by reading it back:
# echo "I am test data for a test object" | rados --pool ecpool put Test1 -
# rados --pool ecpool get Test1 -
That proves that the erasure coded pool is working, but it's hardly the most exciting of discoveries. Let's have a look to see if we can see what's happening at a lower level. First, find out what PG is holding the object we just created:
# ceph osd map ecpool Test1
The result of the above command tells us that the object is stored in PG 1.40 on OSDs 1, 2 and 0. In this example Ceph cluster that's pretty obvious, as we only have 3 OSDs, but in larger clusters that is a very useful piece of information. We can now look at the folder structure of the OSDs and see how the object has been split. The PGs will likely be different on your test cluster, so make sure the PG folder structure matches the output of the ceph osd map command above:
# ls -l /var/lib/ceph/osd/ceph-2/current/1.40s0_head/
# ls -l /var/lib/ceph/osd/ceph-1/current/1.40s1_head/
# ls -l /var/lib/ceph/osd/ceph-0/current/1.40s2_head/
total 4
Notice how the PG directory names have been appended with the shard number; replicated pools just have the PG number as their directory name. If you examine the contents of the object files, you will see the text string that we entered into the object when we created it.
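Before reading on about the padding you will see in those shard files, here is a toy sketch (TypeScript, not anything Ceph ships) that splits a small object into k=2 data shards, pads the second with null bytes and derives a single m=1 parity shard with XOR. XOR is only a stand-in for the Reed-Solomon maths the jerasure plugin actually performs, but it gives the same intuition.

// Toy illustration only: real Ceph uses the jerasure plugin (Reed-Solomon by
// default), not XOR, and works on much larger stripe units. This sketch just
// shows the idea of splitting an object into k data shards, padding the last
// shard with null bytes, and deriving one extra "erasure" shard.
function splitIntoShards(data: Buffer, k: number): Buffer[] {
  const shardSize = Math.ceil(data.length / k);
  const shards: Buffer[] = [];
  for (let i = 0; i < k; i++) {
    const shard = Buffer.alloc(shardSize);           // zero (null) padded
    const start = Math.min(i * shardSize, data.length);
    const end = Math.min((i + 1) * shardSize, data.length);
    data.copy(shard, 0, start, end);
    shards.push(shard);
  }
  return shards;
}

function xorParity(shards: Buffer[]): Buffer {
  const parity = Buffer.alloc(shards[0].length);
  for (const shard of shards) {
    for (let i = 0; i < shard.length; i++) {
      parity[i] ^= shard[i];
    }
  }
  return parity;
}

const object = Buffer.from('I am test data for a test object\n');
const dataShards = splitIntoShards(object, 2);       // k = 2
const erasureShard = xorParity(dataShards);          // m = 1
console.log(dataShards.map(s => s.toString()));      // second shard ends in null padding
console.log(erasureShard);

// Losing either data shard can be recovered by XOR-ing the surviving data
// shard with the erasure shard -- the same principle Ceph applies, with
// Reed-Solomon codes, across the k+m OSDs in the PG set.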
However due to the small size of the text string, Ceph has padded out the 2nd shard with null characters and the erasure shard hence will contain the same as the first. You can repeat this example with a new object containing larger amounts of text to see how Ceph splits the text into the shards and calculates the erasure code. Overwrites on erasure code pools with Kraken Introduced for the first time in the Kraken release of Cephas an experimental feature, was the ability to allow partial overwrites on erasure coded pools. Partial overwrite support allows RBD volumes to be created on erasure coded pools, making better use of raw capacity of the Ceph cluster. In parity RAID, where a write request doesn't span the entire stripe, a read modify write operation is required. This is needed as the modified data chunks will mean the parity chunk is now incorrect. The RAID controller has to read all the current chunks in the stripe, modify them in memory, calculate the new parity chunk and finally write this back out to the disk. Ceph is also required to perform this read modify write operation, however the distributed model of Ceph increases the complexity of this operation.When the primary OSD for a PG receives a write request that will partially overwrite an existing object, it first works out which shards will be not be fully modified by the request and contacts the relevant OSD's to request a copy of these shards. The primary OSD then combines these received shards with the new data and calculates the erasure shards. Finally the modified shards are sent out to the respective OSD's to be committed. This entire operation needs to conform the other consistency requirements Ceph enforces, this entails the use of temporary objects on the OSD, should a condition arise that Ceph needs to roll back a write operation. This partial overwrite operation, as can be expected, has a performance impact. In general the smaller the write IO's, the greater the apparent impact. The performance impact is a result of the IO path now being longer, requiring more disk IO's and extra network hops. However, it should be noted that due to the striping effect of erasure coded pools, in the scenario where full stripe writes occur, performance will normally exceed that of a replication based pool. This is simply down to there being less write amplification due to the effect of striping. If performance of an Erasure pool is not suitable, consider placing it behind a cache tier made up of a replicated pool. Despite partial overwrite support coming to erasure coded pools in Ceph, not every operation is supported. In order to store RBD data on an erasure coded pool, a replicated pool is still required to hold key metadata about the RBD. This configuration is enabled by using the –data-pool option with the rbd utility. Partial overwrite is also not recommended to be used with Filestore. Filestore lacks several features that partial overwrites on erasure coded pools uses, without these features extremely poor performance is experienced. Demonstration This feature requires the Kraken release or newer of Ceph. If you have deployed your test cluster with the Ansible and the configuration provided, you will be running Ceph Jewel release. The following steps show how to use Ansible to perform a rolling upgrade of your cluster to the Kraken release. We will also enable options to enable experimental options such as bluestore and support for partial overwrites on erasure coded pools. 
Edit your group_vars/ceph variable file and change the release version from Jewel to Kraken. Also add: ceph_conf_overrides: global: enable_experimental_unrecoverable_data_corrupting_features: "debug_white_box_testing_ec_overwrites bluestore" And to correct a small bug when using Ansible to deploy Ceph Kraken, add: debian_ceph_packages: - ceph - ceph-common - ceph-fuse To the bottom of the file run the following Ansible playbook: ansible-playbook -K infrastructure-playbooks/rolling_update.yml Ansible will prompt you to make sure that you want to carry out the upgrade, once you confirm by entering yes the upgrade process will begin. Once Ansible has finished, all the stages should be successful as shown below: Your cluster has now been upgraded to Kraken and can be confirmed by running ceph -v on one of yours VM's running Ceph. As a result of enabling the experimental options in the configuration file, every time you now run a Ceph command, you will be presented with the following warning. This is designed as a safety warning to stop you running these options in a live environment, as they may cause irreversible data loss. As we are doing this on a test cluster, that is fine to ignore, but should be a stark warning not to run this anywhere near live data. The next command that is required to be run is to enable the experimental flag which allows partial overwrites on erasure coded pools. DO NOT RUN THIS ON PRODUCTION CLUSTERS cephosd pool get ecpooldebug_white_box_testing_ec_overwrites true Double check you still have your erasure pool called ecpool and the default RBD pool # cephosdlspools 0 rbd,1ecpool, And now create the rbd. Notice that the actual RBD header object still has to live on a replica pool, but by providing an additional parameter we can tell Ceph to store data for this RBD on an erasure coded pool. rbd create Test_On_EC --data-pool=ecpool --size=1G The command should return without error and you now have an erasure coded backed RBD image. You should now be able to use this image with any librbd application. Note: Partial overwrites on Erasure pools require Bluestore to operate efficiently. Whilst Filestore will work, performance will be extremely poor. Troubleshooting the 2147483647 error An example of this error is shown below when running the ceph health detail command. If you see 2147483647 listed as one of the OSD's for an erasure coded pool, this normally means that CRUSH was unable to find a sufficient number of OSD's to complete the PG peering process. This is normally due to the number of k+m shards being larger than the number of hosts in the CRUSH topology. However, in some cases this error can still occur even when the number of hosts is equal or greater to the number of shards. In this scenario it's important to understand how CRUSH picks OSD's as candidates for data placement. When CRUSH is used to find a candidate OSD for a PG, it applies the crushmap to find an appropriate location in the crush topology. If the result comes back as the same as a previous selected OSD, Ceph will retry to generate another mapping by passing slightly different values into the crush algorithm. In some cases if there is a similar number of hosts to the number of erasure shards, CRUSH may run out of attempts before it can suitably find correct OSD mappings for all the shards. Newer versions of Ceph has mostly fixed these problems by increasing the CRUSH tunable choose_total_tries. 
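To see why the retry budget matters, the following is a deliberately simplified toy model and not the real CRUSH algorithm: it picks a random host for each shard, rejects hosts it has already used and gives up after a fixed number of tries. When the number of hosts is only just equal to k+m, the retry budget is exhausted surprisingly often, which is roughly the failure mode described above.

// Toy model only: real CRUSH is deterministic and hierarchy-aware.
// This just illustrates why a low retry budget can fail to find
// k+m distinct hosts when there are barely enough hosts available.
function placeShards(numHosts: number, shards: number, maxTries: number): number[] | null {
  const chosen: number[] = [];
  let tries = 0;
  while (chosen.length < shards && tries < maxTries) {
    tries++;
    const candidate = Math.floor(Math.random() * numHosts);
    if (!chosen.includes(candidate)) {
      chosen.push(candidate);   // accept a host we have not used yet
    }
    // a duplicate candidate simply costs one of our tries
  }
  return chosen.length === shards ? chosen : null;   // null ~ the "2147483647" case
}

// With 20 hosts and 6 shards, 50 tries virtually always succeeds.
// With 4 hosts and 4 shards, a small retry budget fails a noticeable
// fraction of the time.
let failures = 0;
for (let i = 0; i < 10000; i++) {
  if (placeShards(4, 4, 10) === null) {
    failures++;
  }
}
console.log(`failed placements: ${failures} / 10000`);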
Reproducing the problem
In order to aid understanding of the problem in more detail, the following steps will demonstrate how to create an erasure coded profile that requires more shards than our 3-node cluster can support. Firstly, as earlier in the article, create a new erasure profile, but modify the k/m parameters to be k=3 m=1:
$ ceph osd erasure-code-profile set broken_profile k=3 m=1 plugin=jerasure technique=reed_sol_van
And now create a pool with it:
$ ceph osd pool create broken_ecpool 128 128 erasure broken_profile
If we look at the output from ceph -s, we will see that the PGs for this new pool are stuck in the creating state. The output of ceph health detail shows the reason why, and we see the 2147483647 error. If you encounter this error and it is a result of your erasure profile being larger than the number of hosts or racks (depending on how you have designed your CRUSH map), then the only real solution is to either drop the number of shards or increase the number of hosts.
Summary
In this article you have learnt what erasure coding is and how it is implemented in Ceph. You should also have an understanding of the different configuration options possible when creating erasure coded pools and their suitability for different types of scenarios and workloads.
Resources for Article:
Further resources on this subject: Ceph Instant Deployment [article] Working with Ceph Block Device [article] GNU Octave: Data Analysis Examples [article]
Introduction to SDN - Transformation from legacy to SDN
Packt
07 Jun 2017
23 min read
In this article, by Reza Toghraee, the author of the book, Learning OpenDayLight, we will: What is and what is not SDN Components of a SDN Difference between SDN and overlay The SDN controllers (For more resources related to this topic, see here.) You might have heard about Software-Defined Networking (SDN). If you are in networking industry this is a topic which probably you have studied initially when first time you heard about the SDN.To understand the importance of SDN and SDN controller, let's look at Google. Google silently built its own networking switches and controller called Jupiter. A home grown project which is mostly software driven and supports such massive scale of Google. The SDN base is There is a controller who knows the whole network. OpenDaylight (ODL), is a SDN controller. In other words, it's the central brain of the network. Why we are going towards SDN Everyone who is hearing about SDN, should ask this question that why are we talking about SDN. What problem is it trying to solve? If we look at traditional networking (layer 2, layer 3 with routing protocols such as BGP, OSPF) we are completely dominated by what is so called protocols. These protocols in fact have been very helpful to the industry. They are mostly standard. Different vendor and products can communicate using standard protocols with each other. A Cisco router can establish a BGP session with a Huawei switch or an open source Quagga router can exchange OSPF routes with a Juniper firewall. Routing protocol is a constant standard with solid bases. If you need to override something in your network routing, you have to find a trick to use protocols, even by using a static route. SDN can help us to come out of routing protocol cage, look at different ways to forward traffic. SDN can directly program each switch or even override a route which is installed by routing protocol. There are high-level benefits of using SDN which we explain few of them as follows: An integrated network: We used to have a standalone concept in traditional network. Each switch was managed separately, each switch was running its own routing protocol instance and was processing routing information messages from other neighbors. In SDN, we are migrating to a centralized model, where the SDN controller becomes the single point of configuration of the network, where you will apply the policies and configuration. Scalable layer 2 across layer 3:Having a layer 2 network across multiple layer 3 network is something which all network architects are interested and till date we have been using proprietary methods such as OTV or by using a service provider VPLS service. With SDN, we can create layer 2 networks across multiple switches or layer 3 domains (using VXLAN) and expand the layer 2 networks. In many cloud environment, where the virtual machines are distributed across different hosts in different datacenters, this is a major requirement. Third-party application programmability: This is a very generic term, isn't it? But what I'm referring to is to let other applications communicate with your network. For example,In many new distributed IP storage systems, the IP storage controller has ability to talk to network to provide the best, shortest path to the storage node. With SDN we are letting other applications to control the network. Of course this control has limitation and SDN doesn't allow an application to scrap the whole network. Flexible application based network:In SDN, everything is an application. 
L2/L3, BGP, VMware Integration, and so on all are applications running in SDN controller. Service chaining:On the fly you add a firewall in the path or a load balancer. This is service insertion. Unified wired and wireless: This is an ideal benefit, to have a controller which supports both wired and wireless network. OpenDaylight is the only controller which supports CAPWAP protocols which allows integration with wireless access points. Components of a SDN A software defined network infrastructure has two main key components: The SDN Controller (only one, could be deployed in a highly available cluster) The SDN enabled switches(multiple switches, mostly in a Clos topology in a datacenter):   SDN controller is the single brain of the SDN domain. In fact, an SDN domain is very similar to a chassis based switch. You can imagine supervisor or management module of a chassis based switch as a SDN controller and rest of the line card and I/O cards as SDN switches. The main difference between a SDN network and a chassis based switch is that you can scale out the SDN with multiple switches, where in a chassis based switch you are limited to the number of slots in that chassis: Controlling the fabric It is very important that you understand the main technologies involved in SDN. These methods are used by SDN controllers to manage and control the SDN network. In general, there are two methods available for controlling the fabric: Direct fabric programming: In this method, SDN controller directly communicates with SDN enabled switches via southbound protocols such as OpenFlow, NETCONF and OVSDB. SDN controller programs each switch member with related information about fabric, and how to forward the traffic. Direct fabric programming is the method used by OpenDaylight. Overlay:In overlay method, SDN controller doesn't rely on programing the network switches and routers. Instead it builds an virtual overlay network on top of existing underlay network. The underlay network can be a L2 or L3 network with traditional network switches and router, just providing IP connectivity. SDN controller uses this platform to build the overlay using encapsulation protocols such as VXLAN and NVGRE. VMware NSX uses overlay technology to build and control the virtual fabric. SDN controllers One of the key fundamentals of SDN is disaggregation. Disaggregation of software and hardware in a network and also disaggregation of control and forwarding planes. SDN controller is the main brain and controller of an SDN environment, it's a strategic control point within the network and responsible for communicating information to: Routers and switches and other network devices behind them. SDN controllers uses APIs or protocols (such as OpenFlow or NETCONF) to communicate with these devices. This communication is known as southbound Upstream switches, routers or applications and the aforementioned business logic (via APIs or protocols). This communication is known as northbound. An example for a northbound communication is a BGP session between a legacy router and SDN controller. If you are familiar with chassis based switches like Cisco Catalyst 6500 or Nexus 7k chassis, you can imagine a SDN network as a chassis, with switches and routers as its I/O line cards and SDN controller as its supervisor or management module. Infact SDN is similar to a very scalable chassis where you don't have any limitation on number of physical slots. 
SDN controller is similar to role of management module of a chassis based switch and it controls all switches via its southbound protocols and APIs. The following table compares the SDN controller and a chassis based switch:  SDN Controller Chassis based switch Supports any switch hardware Supports only specific switch line cards Can scale out, unlimited number of switches Limited to number of physical slots in the chassis Supports high redundancy by multiple controllers in a cluster Supports dual management redundancy, active standby Communicates with switches via southbound protocols such as OpenFlow, NETCONF, BGP PCEP Use proprietary protocols between management module and line cards Communicates with routers, switches and applications outside of SDN via northbound protocols such as BGP, OSPF and direct API Communicates with other routers and switches outside of chassis via standard protocols such as BGP, OSPF or APIs. The first protocol that popularized the concept behind SDN was OpenFlow. When conceptualized by networking researchers at Stanford back in 2008, it was meant to manipulate the data plane to optimize traffic flows and make adjustments, so the network could quickly adapt to changing requirements. Version 1.0 of the OpenFlow specification was released in December of 2009; it continues to be enhanced under the management of the Open Networking Foundation, which is a user-led organization focused on advancing the development of open standards and the adoption of SDN technologies. OpenFlow protocol was the first protocol that helped in popularizing SDN. OpenFlow is a protocol designed to update the flow tables in a switch. Allowing the SDN controller to access the forwarding table of each member switch or in other words to connect control plane and data plane in SDN world. Back in 2008, OpenFlow conceptualized by networking researchers at Stanford University, the initial use of OpenFlow was to alter the switch forwarding tables to optimize traffic flows and make adjustments, so the network could quickly adapt to changing requirements. After introduction of OpenFlow, NOX introduced as original OpenFlow controller (still there wasn't concept of SDN controller). NOX was providing a high-level API capable of managing and also developing network control applications. Separate applications were required to run on top of NOX to manage the network.NOX was initially developed by Nicira networks (which acquired by VMware, and finally became part of VMware NSX). NOX introduced along with OpenFlow in 2009. NOX was a closed source product but ultimately it was donated to SDN community which led to multiple forks and sub projects out of original NOX. For example, POX is a sub project of NOX which provides Python support. Both NOX and POX were early controllers. NOX appears an inactive development, however POX is still in use by the research community as it is a Python based project and can be easily deployed. POX is hosted at http://github.com/noxrepo/pox NOX apart from being the first OpenFlow or SDN controller also established a programing model which inherited by other subsequent controllers. The model was based on processing of OpenFlow messages, with each incoming OpenFlow message trigger an event that had to be processed individually. This model was simple to implement but not efficient and robust and couldn't scale. Nicira along with NTT and Google started developing ONIX, which was meant to be a more abstract and scalable for large deployments. 
ONIX became the base for Nicira (the core of VMware NSX or network virtualization platform) also there are rumors that it is also the base for Google WAN controller. ONIX was planned to become open source and donated to community but for some reasons the main contributors decided to not to do it which forced the SDN community to focus on developing other platforms. Started in 2010, a new controller introduced,the Beacon controller and it became one of the most popular controllers. It born with contribution of developers from Stanford University. Beacon is a Java-based open source OpenFlow controller created in 2010. It has been widely used for teaching, research, and as the basis of Floodlight. Beacon had the first built-in web user-interface which was a huge step forward in the market of SDN controllers. Also it provided a easier method to deploy and run compared to NOX. Beacon was an influence for design of later controllers after it, however it was only supporting star topologies which was one of the limitations on this controller. Floodlight was a successful SDN controller which was built as a fork of Beacon. BigSwitch networks is developing Floodlight along with other developers. In 2013, Beacon popularity started to shrink down and Floodlight started to gain popularity. Floodlight had fixed many issues of Beacon and added lots of additional features which made it one of the most feature rich controllers available. It also had a web interface, a Java-based GUI and also could get integrated with OpenStack using quantum plugin. Integration with OpenStack was a big step forward as it could be used to provide networking to a large pool of virtual machines, compute and storage. Floodlight adoption increased by evolution of OpenStack and OpenStack adopters. This gave Floodlight greater popularity and applicability than other controllers that came before. Most of controllers came after Floodlight also supported OpenStack integration. Floodlight is still supported and developed by community and BigSwitch networks, and is a base for BigCloud Fabric (the BigSwitch's commercial SDN controller). There are other open source SDN controllers which introduced such as Trema (ruby-based from NEC), Ryu (supported by NTT), FlowER, LOOM and the recent OpenMUL. The following table shows the current open source SDN controllers:  Active open source SDNcontroller Non-active open source SDN controllers Floodlight Beacon OpenContrail FlowER OpenDaylight NOX LOOM NodeFlow OpenMUL   ONOS   POX   Ryu   Trema     OpenDaylight OpenDaylight started in early 2013, and was originally led by IBM and Cisco. It was a new collaborative open source project. OpenDaylight hosted under Linux Foundation and draw support and interest from many developers and adopters. OpenDaylight is a platform to provide common foundations and a robust array of services for SDN environments. OpenDaylight uses a controller model which supports OpenFlow as well as other southbound protocols. It is the first open source controller capable of employing non-OpenFlow proprietary control protocols which eventually lets OpenDaylight to integrate with modern and multi-vendor networks. The first release of OpenDaylight in February 2014 with code name of Hydrogen, followed by Helium in September 2014. The Helium release was significant because it marked a change in direction for the platform that has influenced the way subsequent controllers have been architected. 
The main change was in the service abstraction layer, which is the part of the controller platform that resides just above the southbound protocols, such as OpenFlow, isolating them from the northbound side and where the applications reside. Hydrogen used an API-driven Service Abstraction Layer (AD-SAL), which had limitations specifically, it meant the controller needed to know about every type of device in the network AND have an inventory of drivers to support them. Helium introduced a Model-driven service abstraction layer (MD-SAL), which meant the controller didn't have to account for all the types of equipment installed in the network, allowing it to manage a wide range of hardware and southbound protocols. Helium release made the framework much more agile and adaptable to changes in the applications; an application could now request changes to the model, which would be received by the abstraction layer and forwarded to the network devices. The OpenDaylight platform built on this advancement in its third release, Lithium, which was introduced in June of 2015. This release focused on broadening the programmability of the network, enabling organizations to create their own service architectures to deliver dynamic network services in a cloud environment and craft intent-based policies. Lithium release was worked on by more than 400 individuals, and contributions from Big Switch Networks, Cisco, Ericsson, HP, NEC, and so on, making it one of the fastest growing open source projects ever. The fourth release, Beryllium come out in February of 2016 and the most recent fifth release, Boron released in September 2016. Many vendors have built and developed commercial SDN controller solutions based on OpenDaylight. Each product has enhanced or added features to OpenDaylight to have some differentiating factor. The use of OpenDaylight in different vendor products are: A base, but sell a commercial version with additional proprietary functionality—for example: Brocade, Ericsson, Ciena, and so on. Part of their infrastructure in their Network as a Service (or XaaS) offerings—for example: Telstra, IBM, and so on. Elements for use in their solution—for example: ConteXtream (now part of HP) Open Networking Operating System (ONOS), which was open sourced in December 2014 is focused on serving the needs of service providers. It is not as widely adopted as OpenDaylight, ONOS has been finding success and gaining momentum around WAN use cases. ONOS is backed by numerous organizations including AT&T, Cisco, Fujitsu, Ericsson, Ciena, Huawei, NTT, SK Telecom, NEC, and Intel, many of whom are also participants in and supporters of OpenDaylight. Apart from open source SDN controllers, there are many commercial, proprietary controllers available in the market. Products such as VMware NSX, Cisco APIC, BigSwitch Big Cloud Fabric, HP VAN and NEC ProgrammableFlow are example commercial and proprietary products. The following table lists the commercially available controllers and their relationship to OpenDaylight:  ODL-based ODL-friendly Non-ODL based Avaya Cyan (acquired by Ciena) BigSwitch Brocade HP Juniper Ciena NEC Cisco ConteXtream (HP) Nuage Plexxi Coriant   PLUMgrid Ericsson Pluribus Extreme Sonus Huawei (also ships non-ODL controller) VMware NSX Core features of SDN Regardless of an open source or a proprietary SDN platform, there are core features and capabilities which requires the SDN platform to support. 
These capabilities include:

Fabric programmability: Providing the ability to redirect traffic, apply filters to packets dynamically, and leverage templates to streamline the creation of custom applications. Northbound APIs must make the control information centralized in the controller available to SDN applications so they can change it. This ensures the controller can dynamically adjust the underlying network to optimize traffic flows to use the least expensive path, take varying bandwidth constraints into consideration, and meet quality of service (QoS) requirements.

Southbound protocol support: Enabling the controller to communicate with switches and routers and to manipulate and optimize how they manage the flow of traffic. OpenFlow is currently the most standardized protocol across networking vendors, although other southbound protocols can be used. An SDN platform should support different versions of OpenFlow in order to provide compatibility with different switching equipment.

External API support: Ensuring the controller can be used within varied orchestration and cloud environments, such as VMware vSphere, OpenStack, and so on. Using APIs, the orchestration platform can communicate with the SDN platform in order to publish network policies. For example, VMware vSphere can talk to the SDN platform to extend the virtual distributed switches (VDS) from the virtual environment to the physical underlay network, without a network engineer having to configure the network.

Centralized monitoring and visualization: Since the SDN controller has full visibility over the network, it can offer end-to-end visibility and centralized management to improve overall performance, simplify the identification of issues, and accelerate troubleshooting. The SDN controller can discover and present a logical abstraction of all the physical links in the network, as well as a map of connected devices (MAC addresses) related to the virtual or physical devices attached to the network. The SDN controller supports monitoring protocols, such as syslog and SNMP, and APIs, in order to integrate with third-party management and monitoring systems.

Performance: Performance in an SDN environment mainly depends on how fast the SDN controller fills the flow tables of SDN-enabled switches. Most SDN controllers pre-populate the flow tables on switches to minimize delay. When an SDN-enabled switch receives a packet that doesn't match any entry in its flow table, it sends the packet to the SDN controller to find out where the packet needs to be forwarded. A robust SDN solution should ensure that the number of such requests from switches is kept to a minimum and that the SDN controller doesn't become a bottleneck in the network.

High availability and scalability: Controllers must support high-availability clusters to ensure reliability and service continuity if a controller fails. Clustering in an SDN controller extends to scalability: a modern SDN platform should allow more controller nodes to be added, with load balancing, in order to increase performance and availability. Modern SDN controllers support clustering across multiple geographical locations.

Security: Since all switches communicate with the SDN controller, the communication channel needs to be secured to ensure unauthorized devices don't compromise the network.
The SDN controller should secure the southbound channels, using encrypted messaging and mutual authentication to provide access control. Beyond that, the SDN controller must implement preventive mechanisms against denial-of-service attacks. Deploying authorization levels and access controls for multi-tenant SDN platforms is also a key requirement.

Apart from the aforementioned features, SDN controllers are likely to expand their functions in the future. They may become a network operating system and change the way we used to build networks with hardware, switches, SFPs, and gigabits of bandwidth. The future will look more software defined, as the silicon and hardware industry has already delivered on its promises of high-performance 40G and 100G networking chips. The industry needs more time to digest the new hardware and silicon and to refresh its equipment with new gear supporting ten times the current performance.

Current SDN controllers

In this section, I summarize the different SDN controllers to help you understand the current market players in SDN and how OpenDaylight relates to them. For each product, I note whether it is based on OpenDaylight and whether it is commercial or open source:

Brocade SDN Controller (based on OpenDaylight; commercial): A commercial version of OpenDaylight, fully supported and with extra reliability modules.

Cisco APIC (not based on OpenDaylight; commercial): The Cisco Application Policy Infrastructure Controller (APIC) is the unifying automation and management point for the Application Centric Infrastructure (ACI) data center fabric. Cisco uses the APIC controller and Nexus 9000 switches to build the fabric, with OpFlex as the main southbound protocol.

Ericsson SDN controller (based on OpenDaylight; commercial): Ericsson's SDN controller is a commercial (hardened) version of the OpenDaylight SDN controller. Domain-specific control applications that use the SDN controller as a platform form the basis of the three commercial products in Ericsson's SDN controller portfolio.

Juniper OpenContrail/Contrail (not based on OpenDaylight; both commercial and open source): OpenContrail is open source, while Contrail itself is a commercial product. Juniper Contrail Networking is an open SDN solution that consists of the Contrail controller, the Contrail vRouter, an analytics engine, and published northbound APIs for cloud and NFV. OpenContrail is also available for free from Juniper. Contrail promotes and uses MPLS in the data center.

NEC ProgrammableFlow (not based on OpenDaylight; commercial): NEC provides its own SDN controller and switches. The NEC SDN platform is a popular choice for enterprises, with a lot of traction and several case studies.

Avaya SDN Fx controller (based on OpenDaylight; commercial): Based on OpenDaylight, bundled as a solution package.

Big Cloud Fabric (not based on OpenDaylight; commercial): Big Switch Networks' solution is based on the Floodlight open source project. Big Cloud Fabric is a robust, clean SDN controller that works with bare-metal whitebox switches. It includes Switch Light OS, a switch operating system that can be loaded on whitebox switches with Broadcom Trident 2 or Tomahawk silicon. The benefit of Big Cloud Fabric is that you are not bound to any hardware and can use bare-metal switches from any vendor.

Ciena's Agility (based on OpenDaylight; commercial): Ciena's Agility multilayer WAN controller is built atop the open source baseline of the OpenDaylight Project, an open, modular framework created by a vendor-neutral ecosystem (rather than a vendor-centric ego-system) that enables network operators to source network services and applications from both Ciena's Agility and others.
HP VAN (Virtual Application Network) SDN controller (not based on OpenDaylight; commercial): The building block of the HP open SDN ecosystem, the controller allows third-party developers to deliver innovative SDN solutions.

Huawei Agile Controller (based on OpenDaylight in some editions; commercial): Huawei's SDN controller, which integrates as a solution with Huawei enterprise switches.

Nuage (not based on OpenDaylight; commercial): Nuage Networks VSP provides SDN capabilities for clouds of all sizes. It is implemented as a non-disruptive overlay for all existing virtualized and non-virtualized server and network resources.

Pluribus Netvisor (not based on OpenDaylight; commercial): Netvisor Premium and Open Netvisor Linux are distributed network operating systems. Open Netvisor integrates a traditional, interoperable networking stack (L2/L3/VXLAN) with a distributed SDN controller that runs in every switch of the fabric.

VMware NSX (not based on OpenDaylight; commercial): VMware NSX is an overlay type of SDN that currently works with VMware vSphere; the plan is to support OpenStack in the future. VMware NSX also has a built-in firewall, router, and L4 load balancers, allowing micro-segmentation.

OpenDaylight as an SDN controller

Previously, we went through the role of the SDN controller and a brief history of ODL. ODL is a modular, open SDN platform that allows developers to build any network or business application on top of it to drive the network in the way they want. OpenDaylight has currently reached its fifth release (Boron, the fifth element in the periodic table); ODL releases are named after the elements of the periodic table, starting from the first release, Hydrogen. ODL has a six-month release cycle, and with many developers working on expanding it, two releases per year are expected from the community. For technical readers, the following diagram will help make this clearer:

The ODL platform has a broad set of use cases for multi-vendor, brownfield, and greenfield deployments, for both service providers and enterprises, and it is a foundation for the networks of the future. Service providers are using ODL to migrate their services to a software-enabled level with automatic service delivery, moving away from a circuit-based mindset of service delivery. They also work on providing virtualized CPE with NFV support in order to provide flexible offerings. Enterprises use ODL for many use cases, from data center networking, cloud and NFV, network automation and resource optimization, and visibility and control, to deploying a fully SDN campus network. ODL uses MD-SAL, which makes it very scalable and lets it incorporate new applications and protocols faster. ODL supports multiple standard and proprietary southbound protocols; for example, with full support for OpenFlow and OVSDB, ODL can communicate with any standard hardware, or even virtual switches such as Open vSwitch (OVS), that supports these protocols. With such support, ODL can be deployed in multi-vendor environments and control hardware from different vendors from a single console, no matter what vendor or device it is, as long as standard southbound protocols are supported. ODL uses a microservice architecture model that allows users to control applications, protocols, and plugins while deploying SDN applications. ODL is also able to manage the connection between external consumers and providers.
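To give a concrete feel for this northbound side, ODL exposes a RESTCONF API that external applications and scripts can call. The following is only an illustrative sketch: it assumes a default ODL installation with the OpenFlow plugin loaded, the RESTCONF service on its usual port 8181, the stock admin/admin credentials, and a switch that has registered as openflow:1; exact paths can differ between ODL releases.

# Query the operational inventory of OpenFlow nodes known to the controller
$ curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes

# Read the flows configured in table 0 of the assumed switch openflow:1
$ curl -u admin:admin http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0

The same API accepts PUT and DELETE requests against the configuration datastore, which is how an SDN application can push or remove flow entries without speaking OpenFlow to the switches directly.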
The following diagram explains the ODL footprint and the different components and projects within ODL:

Microservices architecture

ODL stores its YANG data structures in a common data store and uses a messaging infrastructure between the different components to enable a model-driven approach to describing the network and its functions. In ODL's MD-SAL, any SDN application can be integrated as a service and then loaded into the SDN controller. These services (apps) can be chained together in any number of ways to match the application's needs. This concept allows users to install and enable only the protocols and services they need, which makes the system light and efficient. Services and applications created by users can also be shared with others in the ecosystem, since SDN application deployment for ODL follows a modular design. ODL supports multiple southbound protocols: OpenFlow and OpenFlow extensions such as Table Type Patterns (TTP), as well as other protocols including NETCONF, BGP/PCEP, CAPWAP, and OVSDB. ODL also supports the Cisco OpFlex protocol.

The ODL platform provides a framework for authentication, authorization, and accounting (AAA), as well as automatic discovery and securing of network devices and controllers. Another key area of security is the use of encrypted and authenticated communication through the southbound protocols with the switches and routers within the SDN domain. Most southbound protocols support encryption mechanisms.

Summary

In this article, we learned about SDN and why it is important. We reviewed SDN controller products and the history of ODL, as well as the core features of SDN controllers and the market-leading controllers. We also dived into some of the details of SDN.

Resources for Article:

Further resources on this subject:

Managing Network Devices [article]

Setting Up a Network Backup Server with Bacula [article]

Point-to-Point Networks [article]

The Microsoft Azure Stack Architecture

Packt
07 Jun 2017
13 min read
In this article by Markus Klein and Susan Roesner, authors of the book Azure Stack for Datacenters, we will help you to plan, build, run, and develop your own Azure-based datacenter running Azure Stack technology. The goal is that the technology in your datacenter will be 100 percent consistent with Azure, which provides flexibility and elasticity to your IT infrastructure. We will learn about:

Cloud basics
The Microsoft Azure Stack
Core management services
Using Azure Stack
Migrating services to Azure Stack

(For more resources related to this topic, see here.)

Cloud as the new IT infrastructure

Regarding the technical requirements of today's IT, the cloud is always a part of the general IT strategy. This does not depend on the region in which the company operates, nor on the sector of the economy: 99.9 percent of all companies already have cloud technology in their environment. The key question for a CIO is a general one: "To what extent do we allow cloud services, and what does that mean for our infrastructure?" So it is a matter of compliance, allowance, and willingness. The top 10 most important questions for a CIO to prepare for the cloud are as follows:

Are we allowed to save our data in the cloud?
What classification of data can be saved in the cloud?
How flexible are we regarding the cloud?
Do we have the knowledge to work with cloud technology?
How does our current IT setup and infrastructure fit into the cloud's requirements?
Is our current infrastructure already prepared for the cloud?
Are we already working with a cloud-ready infrastructure?
Is our Internet bandwidth good enough?
What does the cloud mean to my employees?
Which technology should we choose?

Cloud terminology

The definition of the term "cloud" is not simple, but we need to differentiate between the following:

Private cloud: A highly dynamic IT infrastructure based on virtualization technology that is flexible and scalable. The resources are kept in a privately owned datacenter, either in your company or at a service provider of your choice.

Public cloud: A shared offering of IT infrastructure services that are provided via the Internet.

Hybrid cloud: A mixture of a private and a public cloud. Services that are allowed to run in a public datacenter, subject to compliance or other security regulations, are deployed there, while services that need to stay inside the company run in the private datacenter. The goal is to run these services on the same technology to provide the agility, flexibility, and scalability to move services between public and private datacenters.

In general, there are some big players in the cloud market (for example, Amazon Web Services, Google, Azure, and even Alibaba). If a company is quite Microsoft-minded from the infrastructure point of view, it should have a look at the Microsoft Azure datacenters. Microsoft started in 2008 with its first datacenter, and today it invests a billion dollars every month in Azure. As of today, there are about 34 official datacenters around the world that form Microsoft Azure, besides some that Microsoft does not talk about (for example, US Government Azure). There are some dedicated datacenters, such as the German Azure cloud, that do not have connectivity to Azure worldwide. Due to compliance requirements, these boundaries need to exist, but the technology of each Azure datacenter is the same, although the services offered may vary.
The following map gives an overview of the locations (so-called regions) in Azure as of today and provides an idea of which ones will be coming soon:

The Microsoft cloud story

When Microsoft started their public cloud, they decided that there must be a private cloud stack too, especially to prepare their infrastructure to run in Azure sometime in the future. The first private cloud solution was the System Center suite, with System Center Orchestrator, Service Provider Foundation (SPF), and Service Manager as the self-service portal solution. Later on, Microsoft launched Windows Azure Pack for Windows Server. Today, Windows Azure Pack is available as a product focused on the private cloud; it provides a self-service portal (the well-known old Azure Portal, code name Red Dog frontend) and uses the System Center suite as its underlying technology:

Microsoft Azure Stack

In May 2015, Microsoft formally announced a new solution that brings Azure to your datacenter. This solution was named Microsoft Azure Stack. To put it in one sentence: Azure Stack is the same technology, with the same APIs and portal as public Azure, but you can run it in your datacenter or in that of your service provider. With Azure Stack, System Center is completely gone, because everything works the way it does in Azure now, and in Azure there is no System Center at all. This is the primary focus of this article. The following diagram gives a current overview of the technical design of Azure Stack compared with Azure:

The one and only difference between Azure Stack and Azure is the cloud infrastructure. In Azure, there are thousands of servers that are part of the solution; with Azure Stack, the number is slightly smaller. That's why the cloud-inspired infrastructure, based on Windows Server, Hyper-V, and Azure technologies, forms the underlying technology stack. There is no System Center product in this stack anymore. This does not mean that it cannot be present (for example, SCOM for on-premises monitoring), but Azure Stack itself provides all the functionality within the solution. For stability and functionality, Microsoft decided to provide Azure Stack as a so-called integrated system, so it will come to your door with the hardware stack included; the customer buys Azure Stack as a complete technology stack. At general availability (GA), the hardware OEMs are HPE, Dell EMC, and Lenovo. In addition, there will be a one-host PoC deployment available for download that can be run as a proof-of-concept solution on any type of hardware, as long as it meets the hardware requirements.

Technical design

Looking at the technical design a bit more in depth, there are some components that we need to dive deeper into. The general basis of Azure Stack is Windows Server 2016 technology, which builds the cloud-inspired infrastructure:

Storage Spaces Direct (S2D)
VXLAN
Nano Server
Azure Resource Manager (ARM)

Storage Spaces Direct (S2D)

Storage Spaces and Scale-Out File Server were technologies that came with Windows Server 2012. The lack of stability in the initial versions and the issues with the underlying hardware made for a difficult start.
The general concept was a shared storage setup using JBODs, controlled by Windows Server 2012 Storage Spaces servers, and a Scale-Out File Server cluster that acted as the single point of contact for storage:

With Windows Server 2016, the design is quite different and the concept relies on a shared-nothing model, even with locally attached storage:

This is the storage design that Azure Stack builds on as one of its main pillars.

VXLAN networking technology

With Windows Server 2012, Microsoft introduced Software-Defined Networking (SDN) and the NVGRE technology. Hyper-V Network Virtualization supports Network Virtualization using Generic Routing Encapsulation (NVGRE) as the mechanism to virtualize IP addresses. In NVGRE, the virtual machine's packet is encapsulated inside another packet:

VXLAN is the new SDNv2 protocol; it is RFC compliant and supported by most network hardware vendors by default. The Virtual eXtensible Local Area Network (VXLAN) protocol, RFC 7348, has been widely adopted in the marketplace, with support from vendors such as Cisco, Brocade, Arista, Dell, and HP. The VXLAN protocol uses UDP as the transport:

Nano Server

Nano Server offers a minimal-footprint, headless version of Windows Server 2016. It completely excludes the graphical user interface, which means that it is quite small and easy to handle regarding updates and security fixes, but it doesn't provide the GUI expected by customers of Windows Server.

Azure Resource Manager (ARM)

The "magical" Azure Resource Manager is a one-to-one bit share with ARM from Azure, so it has the same update frequency and features that are available in Azure. ARM is a consistent management layer that saves resources, dependencies, inputs, and outputs as an idempotent deployment in a JSON file called an ARM template. This template defines the shape of a deployment, whether it is VMs, databases, websites, or anything else. The goal is that once a template is designed, it can be run on any Azure-based cloud platform, including Azure Stack. ARM provides cloud consistency at the finest granularity, and the only difference between the clouds is the region the template is deployed to and the corresponding REST endpoints. ARM not only provides a template for a logical combination of resources within Azure, it also manages subscriptions and role-based access control (RBAC) and defines the gallery, metric, and usage data. This means, quite simply, that everything that needs to be done with Azure resources should be done with ARM. Azure Resource Manager does not just describe a single virtual machine; it is responsible for setting up anything from one resource to a whole group of resources that fit together for a specific service. ARM templates can even be nested, which means they can depend on each other.
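To make the template workflow concrete, the following sketch writes the bare-minimum valid ARM template skeleton and deploys it with the cross-platform Azure CLI. This is only an illustration: the resource group name, location, and file name are placeholders, the Azure CLI is just one of several possible clients (the article itself works with Visual Studio and PowerShell), and against Azure Stack the CLI must first be pointed at the Azure Stack ARM endpoint rather than public Azure.

$ cat > azuredeploy.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
EOF

# Create a resource group and deploy the (empty) template into it; names are examples
$ az group create --name demo-rg --location westeurope
$ az group deployment create --resource-group demo-rg --template-file azuredeploy.json

Adding entries to the resources array of this skeleton is what turns it into a real deployment; the deployment command stays the same no matter how large the template grows.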
When working with ARM, you should know the following vocabulary:

Resource: A resource is a manageable item available in Azure.

Resource group: A resource group is the container for resources that fit together within a service.

Resource provider: A resource provider is a service that can be consumed within Azure.

Resource manager template: A resource manager template is the definition of a specific service.

Declarative syntax: Declarative syntax means that the template does not define the steps to set up a resource; it just defines the desired result, and the resource itself sets up and configures itself to fulfill that definition.

To create your own ARM templates, you need to fulfill the following minimum requirements:

A text editor of your choice
Visual Studio Community Edition
Azure SDK

Visual Studio Community Edition is available for free from the Internet. After setting these things up, you can start it and define your own templates:

Setting up a simple blank template looks like this:

There are different ways to get a template so that you can work on it and modify it to fit your needs:

Visual Studio templates
Quick-start templates on GitHub
Azure ARM templates

You can export the ARM template directly from the Azure Portal if the resource has been deployed:

After clicking on View template, the following opens up:

For further reading on ARM basics, the Getting started with Azure Resource Manager document is a good place to begin: http://aka.ms/GettingStartedWithARM.

PowerShell Desired State Configuration

We talked about ARM and ARM templates that define resources, but they are unable to define what a VM looks like inside, which software needs to be installed, and how that deployment should be done. This is why we need to have a look at VM extensions. VM extensions define what should be done after the ARM deployment has finished. In general, an extension can be anything that is a script. The best practice is to use PowerShell and its add-on called Desired State Configuration (DSC). DSC defines, quite similarly to ARM, how the software needs to be installed and configured. The great thing about this concept is that it also monitors whether the desired state of a virtual machine is changing (for example, because an administrator uninstalls or reconfigures something on the machine). If it does change, DSC ensures within minutes that the original state is restored, rolling the actions back to the desired state:

Migrating services to Azure Stack

If you are running virtual machines today, you're already using a cloud-based technology, although we may not call it cloud today. Basically, this is the idea of a private cloud. If you are running Azure Pack today, you are quite near Azure Stack from the process point of view, but not from the technology side. There is a solution called connectors for Azure Pack that lets you have one portal UI for both cloud solutions. This means that the customer can manage everything from the Azure Stack Portal, although services run in Azure Pack as a legacy solution. Basically, there is no real migration path within Azure Stack. But this is quite easy to solve, because you can use every tool that you would use to migrate services to Azure.

Azure website migration assistant

The Azure website migration assistant provides a high-level readiness assessment for existing websites. The report outlines sites that are ready to move and elements that may need changes, and it highlights unsupported features.
If everything is prepared properly, the tool creates each website and its associated database automatically and synchronizes the content. You can learn more about it at https://azure.microsoft.com/en-us/downloads/migration-assistant/.

For virtual machines, there are two tools available:

Virtual Machines Readiness Assessment
Virtual Machines Optimization Assessment

Virtual Machines Readiness Assessment

The Virtual Machines Readiness Assessment tool will automatically inspect your environment and provide you with a checklist and a detailed report on the steps for migrating the environment to the cloud. The download location is https://azure.microsoft.com/en-us/downloads/vm-readiness-assessment/. If you run the tool, you will get an output like this:

Virtual Machines Optimization Assessment

The Virtual Machine Optimization Assessment tool starts with a questionnaire and asks several questions about your deployment. Then, it performs an automated data collection and analysis of your Azure VMs. It generates a custom report with ten prioritized recommendations across six focus areas, including security and compliance, performance and scalability, and availability and business continuity. The download location is https://azure.microsoft.com/en-us/downloads/vm-optimization-assessment/.

Summary

Azure Stack provides a real Azure experience in your datacenter. The UI, administrative tools, and even third-party solutions should work properly. The design of Azure Stack is a very small instance of Azure with some technical design modifications, especially regarding the compute, storage, and network resource providers. These modifications give you a means to start small, think big, and deploy large, migrating services directly to public Azure sometime in the future if needed. The most important tool for planning, describing, defining, and deploying Azure Stack services is Azure Resource Manager, just like in Azure. This gives you a way to create your services just once but deploy them many times. From the business perspective, this means better TCO and lower administrative costs.

Resources for Article:

Further resources on this subject:

Deploying and Synchronizing Azure Active Directory [article]

What is Azure API Management? [article]

Installing and Configuring Windows Azure Pack [article]

Web Application Information Gathering

Packt
05 Jun 2017
4 min read
In this article by Ishan Girdhar, author of the book Kali Linux Intrusion and Exploitation Cookbook, we will cover the following recipes:

Setup API keys for the recon-ng framework
Use recon-ng for reconnaissance

(For more resources related to this topic, see here.)

Setting up API keys for recon-ng framework

In this recipe, we will see how to set up API keys before we start using recon-ng. Recon-ng is one of the most powerful information gathering tools; if used appropriately, it can help pentesters locate a good amount of information from public sources. With the latest version available, recon-ng provides the flexibility to set it up as your own app/client on various social networking websites.

Getting ready

For this recipe, you require an Internet connection and a web browser.

How to do it...

To set up recon-ng API keys, open the terminal, launch recon-ng, and type the commands shown in the following screenshot: Next, type keys list as shown in the following screenshot: Let's start by adding twitter_api and twitter_secret. Log in to Twitter, go to https://apps.twitter.com/, and create a new application as shown in the following screenshot: Click on Create Application. Once the application is created, navigate to the Keys & Access Tokens tab and copy the secret key and API key as shown in the following screenshot: Copy the API key, reopen the terminal window, and run the following command to add the key: keys add twitter_api <your-copied-api-key> Now, enter the following command to add the twitter_secret to recon-ng: keys add twitter_secret <your_twitter_secret> Once you have added the keys, you can see them in the recon-ng tool by entering the following command: keys list

How it works...

In this recipe, you learned how to add API keys to the recon-ng tool. To demonstrate this, we created a Twitter application, took its API key and secret, and added them to the recon-ng tool. The result is as shown in the following screenshot: Similarly, you will need to add all the API keys here in recon-ng if you want to gather information from those sources. In the next recipe, you will learn how to use recon-ng for information gathering.

Use recon-ng for reconnaissance

In this recipe, you will learn to use recon-ng for reconnaissance. Recon-ng is a full-featured web reconnaissance framework written in Python. Complete with independent modules, database interaction, built-in convenience functions, interactive help, and command completion, recon-ng provides a powerful environment in which open source, web-based reconnaissance can be conducted quickly and thoroughly.

Getting ready

To install Kali Linux, you will require an Internet connection.

How to do it...

Open a terminal and start the recon-ng framework, as shown in the following screenshot: Recon-ng has a look and feel similar to that of Metasploit. To see all the available modules, enter the following command: show modules Recon-ng will list all available modules, as shown in the following screenshot: Let's go ahead and use our first module for information gathering. Enter the following command: use recon/domains-vulnerabilities/punkspider Now, enter the commands shown in the following screenshot: As you can see, some vulnerabilities have been discovered and are publicly available. Let's use another module that fetches any known and reported vulnerabilities from xssed.com. The XSSed project was created in early February 2007 by KF and DP.
It provides information on all things related to cross-site scripting vulnerabilities and is the largest online archive of XSS-vulnerable websites. It's a good repository of XSS issues to gather information from. To begin with, enter the following commands:

show modules
use recon/domains-vulnerabilities/xssed
show options
set SOURCE microsoft.com
show options
run

You will see the following output: As you can see, recon-ng has aggregated the publicly available vulnerabilities from XSSed, as shown in the following screenshot: Similarly, you can keep using the different modules until you get the required information regarding your target.

Summary

In this article, you learned how to add API keys to the recon-ng tool. To demonstrate this, we created a Twitter application, used its API key and secret, and added them to the recon-ng tool. You also learned how to use recon-ng for reconnaissance.

Resources for Article:

Further resources on this subject:

Getting Started with Metasploitable2 and Kali Linux [article]

Wireless Attacks in Kali Linux [article]

What is Kali Linux [article]

Booting Up Android System Using PXE/NFS

Packt
29 May 2017
31 min read
In this article by Roger Ye, the author of the book Android System Programming, we introduce two challenges present in most embedded Linux system programming that you need to resolve before you can boot up your system. These two challenges are:

How to load your kernel and ramdisk image?
Where do you store your filesystem?

(For more resources related to this topic, see here.)

This is true for Android systems as well. After you get a development board, you have to build the bootloader first and flash it to the storage on the board before you can move to the next step. After that, you have to build the kernel, ramdisk, and filesystem. You have to repeat this tedious build, flash, and test process again and again, and you need special tools to flash the various images to the development board. Many embedded system developers want to get rid of the image-flashing process so that they can concentrate on the development work itself. Usually, they use two techniques: PXE boot and an NFS filesystem. If you search "Android NFS" on the Internet, you can find many articles or discussions about this topic. I don't have a development board on hand, so I will use VirtualBox as a virtual hardware board to demonstrate how to boot a system using a PXE bootloader and NFS as the filesystem. To repeat the same process in this article, you need the following hardware and software environment:

A computer running Ubuntu 14.04 as the host environment
VirtualBox version 5.1.2 or above
A virtual machine running Android x86vbox
A virtual machine running Ubuntu 14.04 as a PXE server (optional)

Android x86vbox is a ROM that I developed in the book Android System Programming. You can download the ROM image at the following URL:

https://sourceforge.net/projects/android-system-programming/files/android-7/ch14/ch14.zip/download

After you download the preceding ZIP file, you will find the following files:

initrd.img: The modified ramdisk image from the open source android-x86 project
kernel: An NFS-enabled Android kernel for the x86vbox device
ramdisk.img: The ramdisk for the Android boot
ramdisk-recovery.img: The ramdisk for the recovery boot
update-android-7.1.1_r4_x86vbox_ch14_r1.zip: An OTA update image of x86vbox; you can install this image using recovery

Setting up a PXE boot environment

What is PXE? PXE means Preboot eXecution Environment. Before we can boot a Linux environment, we need to find a way to load the kernel and ramdisk into system memory. This is one of the major tasks performed by most Linux bootloaders. The bootloader usually fetches the kernel and ramdisk from a storage device, such as flash storage, a hard disk, or USB, but they can also be retrieved over a network connection. PXE is a method of booting a device that has a LAN connection and a PXE-capable network interface controller (NIC). As shown in the following diagram, PXE uses the DHCP and TFTP protocols to complete the boot process. In the simplest environment, a PXE server is set up as both the DHCP and the TFTP server. The client NIC obtains an IP address from the DHCP server and uses the TFTP protocol to get the kernel and ramdisk images to start the boot process. I will demonstrate how to prepare a PXE-capable ROM for the VirtualBox virtio network adapter so that we can use this ROM to boot the system via PXE. You will also learn how to set up a PXE server, which is the key element in the PXE boot setup. VirtualBox also includes a built-in PXE server; we will explore this option as well.
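As a quick illustration of how little is needed for the DHCP-plus-TFTP pair described above, the sketch below runs dnsmasq on the development host as a combined DHCP and TFTP server for the host-only network. This is an alternative to the isc-dhcp-server and tftpd-hpa setup used later in this article, and the interface name vboxnet0 and the address range are assumptions that match a default VirtualBox host-only network.

# Serve DHCP and TFTP for PXE clients on the host-only interface (run in the foreground for testing)
$ sudo dnsmasq --no-daemon --interface=vboxnet0 --dhcp-range=192.168.56.10,192.168.56.99,12h --dhcp-boot=pxelinux.0 --enable-tftp --tftp-root=/var/lib/tftpboot

Whichever server you use, the essentials are the same: the DHCP reply must name the boot file (pxelinux.0) and the TFTP root must contain it together with the kernel and ramdisk images.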
Preparing PXE Boot ROM

Even though PXE boot is supported by VirtualBox, the setup is not consistent across different host platforms. You may get an error message like PXE-E3C - TFTP Error - Access Violation during the boot. This is because PXE boot depends on the LAN boot ROM, and when you choose different network adapters, you may get different test results. To get a consistent result, you can use the LAN boot ROM from the Etherboot/gPXE project. gPXE is an open source (GPL) network bootloader. It provides a direct replacement for proprietary PXE ROMs, with many extra features such as DNS, HTTP, iSCSI, and so on. There is a page on the gPXE project website about how to set up the LAN boot ROM for VirtualBox: http://www.etherboot.org/wiki/romburning/vbox

The network adapters supported by VirtualBox are listed below (adapter: PCI vendor ID, PCI device ID, manufacturer, device name):

Am79C970A: 1022h, 2000h, AMD, PCnet-PCI II (AM79C970A)
Am79C973: 1022h, 2000h, AMD, PCnet-PCI III (AM79C973)
82540EM: 8086h, 100Eh, Intel, Intel PRO/1000 MT Desktop (82540EM)
82543GC: 8086h, 1004h, Intel, Intel PRO/1000 T Server (82543GC)
82545EM: 8086h, 100Fh, Intel, Intel PRO/1000 MT Server (82545EM)
virtio: 1AF4h, 1000h, Paravirtualized Network (virtio-net)

Since the paravirtualized network has better performance in most situations, we will explore how to support PXE boot using the virtio-net network adapter.

Downloading and building the LAN boot ROM

There may be LAN boot ROM binary images available on the Internet, but none is provided by the gPXE project, so we have to build from source code according to the instructions on the gPXE project website. Let's download and build the source code using the following commands:

$ git clone git://git.etherboot.org/scm/gpxe.git
$ cd gpxe/src
$ make bin/1af41000.rom # for virtio 1af4:1000

Fixing up the ROM image

Before the ROM image can be used, it has to be adjusted, because VirtualBox has the following requirements on the ROM image size:

Size must be 4K aligned (that is, a multiple of 4096)
Size must not be greater than 64K

Let's check the image size first and make sure it is not larger than 65536 bytes (64K):

$ ls -l bin/1af41000.rom | awk '{print $5}'
62464

We can see that it is less than 64K. Now, we have to pad the image file to a 4K-aligned size (here, up to 64K). We can do this using the following commands:

$ python
>>> 65536 - 62464 # Calculate padding size
3072
>>> f = open('bin/1af41000.rom', 'a')
>>> f.write('\0' * 3072) # Pad with zeroes

We can check the image size again:

$ ls -l bin/1af41000.rom | awk '{print $5}'
65536

As we can see, the size is 64K now.

Configuring the virtual machine to use the LAN boot ROM

To use this LAN boot ROM, we can use the VBoxManage command to update the VirtualBox settings. We use the following command to set the LanBootRom path:

$ VBoxManage setextradata $VM_NAME VBoxInternal/Devices/pcbios/0/Config/LanBootRom /path/to/1af41000.rom

Replace $VM_NAME with your VM's name. If you use global as $VM_NAME, then all VMs will use the gPXE LAN boot ROM. To remove the above configuration, you just have to reset the path value as follows:
$ VBoxManage setextradata $VM_NAME VBoxInternal/Devices/pcbios/0/Config/LanBootRom

You can also check the current configuration using the following command:

$ VBoxManage getextradata $VM_NAME VBoxInternal/Devices/pcbios/0/Config/LanBootRom
Value: /path/to/1af41000.rom

If you don't want to build the LAN boot ROM yourself, you can use the one that I posted at:

https://sourceforge.net/projects/android-system-programming/files/android-7/ch14/1af41000.rom/download

Setting up PXE boot environment

With a proper PXE ROM installed, we can now set up PXE on the host. Before we set up a PXE server, we need to think about the network connections. There are three ways a virtual machine in VirtualBox can connect to the network:

Bridged network: Connects to the same physical network as the host. It looks as if the virtual machine is connected to the same LAN as the host.

Host-only network: Connects to a virtual network that is only visible to the virtual machine and the host. In this configuration, the virtual machine cannot connect to an outside network, such as the Internet.

NAT network: Connects to the host network through NAT. This is the most common choice. In this configuration, the virtual machine can access the external network, but the external network cannot connect to the virtual machine directly. For example, if you set up an FTP service on the virtual machine, the computers on the host's LAN cannot access this FTP service. If you want to publish such a service, you have to use port forwarding.

With these concepts in mind, if you want to use a dedicated machine as the PXE server, you can use a bridged network in your environment. However, you must be very careful with this kind of setup. It is usually done by the IT group in your organization, since you cannot set up a DHCP server on the LAN without affecting others. We won't use this option here. The host-only network is actually a good choice for this case, because it is an isolated network configuration; the network connection only exists between the host and the virtual machine. The problem is that we cannot access the outside network. We will therefore configure two network interfaces for our virtual machine instance: one host-only network for the PXE boot and one NAT network to access the Internet. We will see this configuration later. VirtualBox also has a built-in PXE server on the NAT network. With this option, we don't need to set up a PXE server ourselves. We will explain how to set up our own PXE boot environment first and then explain how to use the built-in PXE server of VirtualBox. As we can see in the following figure, we have two virtual machines, pxeAndroid and PXE Server, in our setup. The upper part, PXE Server, is optional. If we use the built-in PXE server, both the PXE server and the NFS server will be on the development host. Let's look at how to set up our own PXE server first. To set up a PXE boot environment, we need to install a TFTP and a DHCP server. I assume that you can set up a Linux virtual machine by yourself; I will use Ubuntu as an example here. In your environment, you have to create two virtual machines:

A PXE server with a host-only network interface
A virtual machine to boot Android, with a host-only network interface and a NAT network interface

Setting up TFTP server

We can install the TFTP server on the PXE server using the following command:

$ sudo apt-get install tftpd-hpa

After the TFTP server is installed, we need to set up the PXE boot configuration in the folder /var/lib/tftpboot.
We can use the following command to start the TFTP server:

$ sudo service tftpd-hpa restart

Configuring DHCP server

Once the TFTP server is installed, we need to install a DHCP server. We can install it using the following command:

$ sudo apt-get install isc-dhcp-server

After installing the DHCP server, we have to add the following lines to the DHCP server configuration file at /etc/dhcp/dhcpd.conf:

subnet 192.168.56.0 netmask 255.255.255.0 {
    range 192.168.56.10 192.168.56.99;
    filename "pxelinux.0";
}

We use the IP address range 192.168.56.x for the host-only subnet, since this is the default range after we create a host-only network in VirtualBox. There may be more than one host-only network configured in your VirtualBox environment. You may want to check which host-only network configuration you want to use and adjust the above configuration file according to that host-only network setup.

Configuring and testing the PXE boot

After we set up the PXE server, we can create a virtual machine instance to test the environment. We will demonstrate this using Ubuntu 14.04 as the host environment. The same setup can be duplicated in a Windows or OS X environment as well. If you use a Windows environment, you have to set up the NFS server inside the PXE server, because a Windows host cannot provide NFS.

Setting up the Android Virtual Machine

Let's create a virtual machine called pxeAndroid in VirtualBox first. After starting VirtualBox, we can click on the New button to create a new virtual machine, as shown in the following screenshot: We call it pxeAndroid and choose Linux as the type of virtual machine. We can just follow the wizard to create this virtual machine with a suitable configuration. After the virtual machine is created, we need to make a few changes to the settings. The first thing that needs to be changed is the network configuration; as mentioned before, we need both NAT and host-only connections. We can click on the name of the virtual machine, pxeAndroid, first and then click on the Settings button to change the settings. Select the Network option on the left-hand side, as we can see in the following screen: We select Adapter 1, which defaults to the NAT network. We need to change the Adapter Type to Paravirtualized Network (virtio-net), since we will use the PXE ROM that we just built. The NAT network can connect to the outside network. It supports port forwarding so that we can access certain services in the virtual machine. The one that we need to set up here is the ADB service; we need ADB to debug the pxeAndroid device later. We can set up the port forwarding for ADB as follows: Now, we can select Adapter 2 to set up a host-only network, as in the following figure: We choose Host-only Adapter as the adapter and Paravirtualized Network (virtio-net) as the Adapter Type. Next, we can click on the System option to set the boot order so that the default is to boot from the network interface, as in the following figure:

Configuring pxelinux.cfg

Before we can test the virtual machine we just set up, we need to specify in a configuration file where the PXE boot should find the kernel and ramdisk images. The PXE boot process is roughly as follows:

1. When the virtual machine pxeAndroid powers on, the client gets an IP address through DHCP. The DHCP configuration includes the standard information, such as the IP address, subnet mask, gateway, DNS, and so on. In addition, it provides the location of the TFTP server and the filename of a boot image.
2. The name of the boot image is usually pxelinux.0, as we saw in the previous section when we set up the DHCP server. For the built-in PXE boot environment, the name of the boot image is vmname.pxe, where vmname is the name of the virtual machine; for example, it is pxeAndroid.pxe for our virtual machine.
3. The client contacts the TFTP server to obtain the boot image.
4. The TFTP server sends the boot image (pxelinux.0 or vmname.pxe), and the client executes it.
5. By default, the boot image searches the pxelinux.cfg directory on the TFTP server for boot configuration files. The client downloads all the files it needs (kernel, ramdisk, or root filesystem) and then loads them.
6. The target machine pxeAndroid reboots.

In step 5 above, the boot image searches for the boot configuration files in the following order:

First, it searches for the boot configuration file named after the MAC address, represented in lower-case hexadecimal digits with dash separators. For example, for the MAC address 08:00:27:90:99:7B, it searches for the file 08-00-27-90-99-7b.

Then, it searches for a configuration file named after the IP address (of the machine that is being booted) in upper-case hexadecimal digits. For example, for the IP address 192.168.56.100, it searches for the file C0A83864. If that file is not found, it removes one hexadecimal digit from the end and tries again. If the search is still not successful, it finally looks for a file named default (in lower case).

For example, if the boot filename is /var/lib/tftpboot/pxelinux.0, the Ethernet MAC address is 08:00:27:90:99:7B, and the IP address is 192.168.56.100, the boot image looks for file names in the following order:

/var/lib/tftpboot/pxelinux.cfg/08-00-27-90-99-7b
/var/lib/tftpboot/pxelinux.cfg/C0A83864
/var/lib/tftpboot/pxelinux.cfg/C0A8386
/var/lib/tftpboot/pxelinux.cfg/C0A838
/var/lib/tftpboot/pxelinux.cfg/C0A83
/var/lib/tftpboot/pxelinux.cfg/C0A8
/var/lib/tftpboot/pxelinux.cfg/C0A
/var/lib/tftpboot/pxelinux.cfg/C0
/var/lib/tftpboot/pxelinux.cfg/C
/var/lib/tftpboot/pxelinux.cfg/default

The boot image pxelinux.0 is part of the open source project Syslinux. We can get the boot image and the menu user interface from the Syslinux project using the following command:

$ sudo apt-get install syslinux

After Syslinux is installed, pxelinux.0 can be copied to the TFTP root folder as follows:

$ cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/pxelinux.0

To have a better user interface, we can copy menu.c32 to the TFTP folder as well:

$ cp /usr/lib/syslinux/menu.c32 /var/lib/tftpboot/menu.c32

pxelinux.cfg/default

Now, we will look at how to configure the boot configuration file pxelinux.cfg/default. In our setup, it looks like the following code snippet:

prompt 1
default menu.c32
timeout 100

label 1. NFS Installation (serial port) - x86vbox
    menu x86vbox_install_serial
    kernel x86vbox/kernel
    append ip=dhcp console=ttyS3,115200 initrd=x86vbox/initrd.img root=/dev/nfs rw androidboot.hardware=x86vbox INSTALL=1 DEBUG=2 SRC=/x86vbox ROOT=192.168.56.1:/home/sgye/vol1/android-6/out/target/product qemu=1 qemu.gles=0

label 2. x86vbox (ROOT=/dev/sda1, serial port)
    menu x86vbox_sda1
    kernel x86vbox/kernel
    append ip=dhcp console=ttyS3,115200 initrd=x86vbox/initrd.img androidboot.hardware=x86vbox DEBUG=2 SRC=/android-x86vbox ROOT=/dev/sda1
...
The syntax of the boot configuration file is documented at the following URL from the Syslinux project: http://www.syslinux.org/wiki/index.php?title=SYSLINUX

In the configuration file that we use, we can see the following commands and options:

prompt: Lets the bootloader know whether it should show a LILO-style "boot:" prompt. At this command-line prompt, you can input a boot option directly. All the boot options are defined by the label command.

default: Defines the default boot option.

timeout: If more than one label entry is available, this directive indicates how long to pause at the boot: prompt before booting automatically, in units of 1/10 s. The timeout is cancelled when any key is pressed, the assumption being that the user will complete the command line. A timeout of zero disables the timeout completely. The default is 0.

label: A human-readable string that describes a kernel and its options. The default label is linux, but you can change this with the DEFAULT keyword.

kernel: The kernel file that the boot image will boot.

append: The kernel command line, which is passed to the kernel during boot.

In this configuration file, we show two boot options. With the first option, we can boot to a minimal Linux environment using an NFS root filesystem; from that environment we can install the x86vbox images to the hard disk. The source location of the installation is your AOSP build output folder. With the second option, we can boot x86vbox from the disk partition /dev/sda1. After the x86vbox image has been installed on the partition /dev/sda1, the Android system can be started using this second option.

Using VirtualBox internal PXE booting with NAT

VirtualBox provides built-in support for PXE boot using the NAT network, so we can also set up PXE boot using this built-in facility. There are a few minor differences between the built-in PXE and the one that we set up on the PXE server:

The built-in PXE uses the NAT network connection, while the PXE server uses the host-only network connection.

The TFTP root is at /var/lib/tftpboot for the normal PXE setup, while the built-in TFTP root is at $HOME/.VirtualBox/TFTP on Linux or %USERPROFILE%\.VirtualBox\TFTP on Windows.

Usually, the default boot image name is pxelinux.0, but it is vmname.pxe for the VirtualBox built-in PXE. For example, if we use pxeAndroid as the virtual machine name, we have to make a copy of pxelinux.0 and name it pxeAndroid.pxe under the VirtualBox TFTP root folder.

If you choose to use the built-in PXE support, you don't have to create a PXE server yourself. This is the recommended test environment, as it simplifies the test process.

Setting up serial port for debugging

The reason we want to boot Android using PXE and NFS is that we want to use a very simple bootloader and find an easier way to debug the system. In order to see the debug log, we want to redirect the debug output from the video console to a serial port, so that we can separate the graphical user interface from the debug output. We need to do two things to meet this goal.

First, the Linux kernel debug messages can be redirected to a specific channel using kernel command-line arguments. We specify this in the PXE boot configuration with the option console=ttyS3,115200. This is defined in pxelinux.cfg/default as follows:
label 1. NFS Installation (serial port) - x86vbox
    menu x86vbox_install_serial
    kernel x86vbox/kernel
    append ip=dhcp console=ttyS3,115200 initrd=x86vbox/initrd.img root=/dev/nfs rw androidboot.hardware=x86vbox INSTALL=1 DEBUG=2 SRC=/x86vbox ROOT=192.168.56.1:/home/sgye/vol1/android-6/out/target/product qemu=1 qemu.gles=0

We will explain the details of the kernel parameters in the append option later. The next thing we need is a virtual serial port that we can connect to. We configure this in the virtual machine settings page, as shown in the following screen: We use a host pipe to simulate the virtual serial port. We can set the path to something like /tmp/pxeAndroid_p. The mapping between COMx and /dev/ttySx is as follows:

/dev/ttyS0 - COM1
/dev/ttyS1 - COM2
/dev/ttyS2 - COM3
/dev/ttyS3 - COM4

To connect to the host pipe, we can use a tool like minicom on Linux or PuTTY on Windows. If you don't have minicom installed, you can install and configure it in the host environment as shown here:

$ sudo apt-get install minicom

To set up minicom, we can use the following command:

$ sudo minicom -s

After minicom starts, select Serial port setup and set Serial Device to unix#/tmp/pxeAndroid_p. Once this is done, select Save setup as dfl and Exit from minicom, as shown in the following screenshot. Now, we can connect to the virtual serial port using minicom. After we have made all the configuration changes, we can power on the virtual machine and test it. We should be able to see the following boot-up screen: We can see from the preceding screenshot that the virtual machine loads the file pxelinux.cfg/default and waits at the boot prompt. We are ready to boot from the PXE ROM now.

Build AOSP images

To build the x86vbox images in this article, we can retrieve the source code using the following commands:

$ repo init -u https://github.com/shugaoye/manifests -b android-7.1.1_r4_ch14_aosp
$ repo sync

After the source code is ready for use, we can set up the environment and build the system as shown here:

$ . build/envsetup.sh
$ lunch x86vbox-eng
$ make -j4

To build initrd.img, we can run the following command:

$ make initrd USE_SQUASHFS=0

We can also build an OTA update image, which can be installed using recovery:

$ cd device/generic/x86vbox
$ make dist

NFS filesystem

Since I am discussing Android system programming, I will assume you know how to build Android images from AOSP source code. In our setup, we will use the output from the AOSP build to boot the Android system in VirtualBox. However, these images cannot be used by VirtualBox directly. For example, system.img can be used by the emulator, but not by VirtualBox. VirtualBox can use standard virtual disk images in VDI, VHD, or VMDK format, but not a raw disk image such as system.img. In some open source projects, such as the android-x86 project, the output is an installation image, in ISO or USB disk image format. An installation image can be burnt to a CD-ROM or USB drive, and we can then boot VirtualBox from the CD-ROM or USB drive to install the system, just as we would install Windows on a PC. This method is quite tedious and inefficient when we are debugging a system. As developers, we want a simple and quick way to start debugging immediately after we build the system. The method that we will use here is to boot the system using an NFS filesystem. The key point is that we will treat the output folder of the AOSP build as the root filesystem directly, so that we can boot the system from it without any additional work.
If you are an embedded system developer, you may have used this method in your work already. When we work on the initial debugging phase of an embedded Linux system, we often use an NFS filesystem as the root filesystem. With this method, we can avoid flashing the images to the flash storage every time after the build.

Preparing the kernel

To support NFS boot, we need a Linux kernel with NFS filesystem support. The default Linux kernel for Android doesn't have NFS boot support. In order to boot Android and mount an NFS directory as the root filesystem, we have to re-compile the Linux kernel with the following options enabled:

CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
CONFIG_USB_USBNET=y
CONFIG_USB_NET_SMSC95XX=y
CONFIG_USB=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_ROOT_NFS=y

The kernel source code used in this article is a version that I modified for the book Android System Programming. You can find the source code at the following URL: https://github.com/shugaoye/goldfish

We can get the source code using the following command:

$ git clone https://github.com/shugaoye/goldfish -b android-7.1.1_r4_x86vbox_ch14_r

We can use menuconfig to change the kernel configuration, or we can copy a configuration file with NFS support. To configure the kernel build using menuconfig, we can use the following commands:

$ . build/envsetup.sh
$ lunch x86vbox-eng
$ make -C kernel O=$OUT/obj/kernel ARCH=x86 menuconfig

We can also use the NFS-enabled configuration file from my GitHub repository directly. We can observe the difference between this configuration file and the default kernel configuration file from the android-x86 project as shown here:

$ diff kernel/arch/x86/configs/android-x86_defconfig ~/src/android-x86_nfs_defconfig
216a217
> # CONFIG_SYSTEM_TRUSTED_KEYRING is not set
1083a1085
> CONFIG_DNS_RESOLVER=y
1836c1838
< CONFIG_VIRTIO_NET=m
---
> CONFIG_VIRTIO_NET=y
1959c1961
< CONFIG_E1000=m
---
> CONFIG_E1000=y
5816a5819
> # CONFIG_ECRYPT_FS is not set
5854,5856c5857,5859
< CONFIG_NFS_FS=m
< CONFIG_NFS_V2=m
< CONFIG_NFS_V3=m
---
> CONFIG_NFS_FS=y
> CONFIG_NFS_V2=y
> CONFIG_NFS_V3=y
5858c5861
< # CONFIG_NFS_V4 is not set
---
> CONFIG_NFS_V4=y
5859a5863,5872
> CONFIG_NFS_V4_1=y
> CONFIG_NFS_V4_2=y
> CONFIG_PNFS_FILE_LAYOUT=y
> CONFIG_PNFS_BLOCK=y
> CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
> # CONFIG_NFS_V4_1_MIGRATION is not set
> CONFIG_NFS_V4_SECURITY_LABEL=y
> CONFIG_ROOT_NFS=y
> # CONFIG_NFS_USE_LEGACY_DNS is not set
> CONFIG_NFS_USE_KERNEL_DNS=y
5861,5862c5874,5875
< CONFIG_GRACE_PERIOD=m
< CONFIG_LOCKD=m
---
> CONFIG_GRACE_PERIOD=y
> CONFIG_LOCKD=y
5865c5878,5880
< CONFIG_SUNRPC=m
---
> CONFIG_SUNRPC=y
> CONFIG_SUNRPC_GSS=y
> CONFIG_SUNRPC_BACKCHANNEL=y
5870a5886
> # CONFIG_CIFS_UPCALL is not set
5873a5890
> # CONFIG_CIFS_DFS_UPCALL is not set
6132c6149,6153
< # CONFIG_KEYS is not set
---
> CONFIG_KEYS=y
> # CONFIG_PERSISTENT_KEYRINGS is not set
> # CONFIG_BIG_KEYS is not set
> # CONFIG_ENCRYPTED_KEYS is not set
> # CONFIG_KEYS_DEBUG_PROC_KEYS is not set
6142a6164
> # CONFIG_INTEGRITY_SIGNATURE is not set
6270a6293
> # CONFIG_ASYMMETRIC_KEY_TYPE is not set
6339a6363
> CONFIG_ASSOCIATIVE_ARRAY=y
6352a6377
> CONFIG_OID_REGISTRY=y

We can copy this configuration file and use it to build the Linux kernel as shown here:

$ cp ~/src/android-x86_nfs_defconfig out/target/product/x86/obj/kernel/.config
$ . build/envsetup.sh
$ lunch x86vbox-eng
$ make -C kernel O=$OUT/obj/kernel ARCH=x86

After the build, we can copy the kernel and ramdisk files to the TFTP root at /var/lib/tftpboot/x86vbox or $HOME/.VirtualBox/TFTP/x86vbox.

Setting up NFS server

After we have prepared the Android kernel, we need to set up an NFS server on our development host so that the virtual machine can mount the folders it exports. We can check whether the NFS server is already installed using the following command:

$ dpkg -l | grep nfs

If the NFS server is not installed, we can install it using the following command:

$ sudo apt-get install nfs-kernel-server

Once we have an NFS server ready, we need to export our root filesystem through NFS. We will use the AOSP build output folder, as mentioned previously, and add the following line to the configuration file /etc/exports:

$AOSP/out/target/product/ *(rw,sync,insecure,no_subtree_check,async)

After that, we execute the following command to export the folder $AOSP/out/target/product. You need to replace $AOSP with the absolute path in your setup:

$ sudo exportfs -a

Configuring PXE boot menu

We can use the PXE boot ROM to support the same boot paths as a real Android device. As we know, an Android device can boot into three different modes: the bootloader mode, the recovery mode, and the normal start-up. With the PXE boot ROM, we can easily support the same and more. By configuring the file pxelinux.cfg/default, we can allow x86vbox to boot along different paths, and we will configure multiple boot paths here.

Booting to NFS installation

We can boot the system into an installation mode so that we can borrow the installation script from the android-x86 project to install the x86vbox images to the virtual hard disk:

label 1. NFS Installation (serial port) - x86vbox
menu x86vbox_install_serial
kernel x86vbox/kernel
append ip=dhcp console=ttyS3,115200 initrd=x86vbox/initrd.img root=/dev/nfs rw androidboot.hardware=x86vbox INSTALL=1 DEBUG=2 SRC=/x86vbox ROOT=192.168.56.1:$AOSP/out/target/product

In this configuration, we use the NFS-capable kernel from the TFTP folder, such as $HOME/.VirtualBox/TFTP/x86vbox/kernel. The ramdisk image initrd.img is stored in the same folder. Both files under the TFTP folder can actually be symbolic links to the AOSP output, in which case we don't have to copy them after each build. We use the following three options to configure the NFS boot:

ip=dhcp: Use DHCP to get an IP address from a DHCP server. The DHCP server can be the built-in DHCP server of VirtualBox or the one that we set up previously.
root=/dev/nfs: Use NFS boot.
ROOT=10.0.2.2:$AOSP/out/target/product: The root is the AOSP output folder on the development host. If we use the built-in PXE support, the IP address 10.0.2.2 is the default host IP address on the NAT network. It can be changed in the VirtualBox configuration.

We want to monitor the debug output, so we set the console to the virtual serial port that we configured previously with console=ttyS3,115200; we can connect to its host pipe using minicom. We also set three kernel parameters that are used by the android-x86 init script and installation script:

INSTALL=1: Tells the init script that we want to install the system.
DEBUG=2: Brings us to the debug console during the boot process.
SRC=/x86vbox: The directory for the Android root filesystem.

Finally, the option androidboot.hardware=x86vbox is passed to the Android init process to tell it which init script to run. In this case, the device init script init.x86vbox.rc will be executed.
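Because the boot entries we add below differ only in a handful of append parameters, it can be convenient to generate pxelinux.cfg/default with a small script instead of editing it by hand. The following Python sketch is purely illustrative and is not part of the book's tooling; the NFS root path is a made-up placeholder, and the second entry anticipates the hard-disk boot entry described in the next section:

# gen_pxe_menu.py - illustrative generator for the pxelinux.cfg/default entries.
AOSP_OUT = "192.168.56.1:/home/user/aosp/out/target/product"  # hypothetical NFS root path

# Options shared by all of the entries shown in this article.
COMMON = "ip=dhcp console=ttyS3,115200 initrd=x86vbox/initrd.img androidboot.hardware=x86vbox DEBUG=2"

ENTRIES = [
    # (label text, menu id, entry-specific append options)
    ("1. NFS Installation (serial port) - x86vbox", "x86vbox_install_serial",
     "root=/dev/nfs rw INSTALL=1 SRC=/x86vbox ROOT=" + AOSP_OUT),
    ("2. x86vbox (ROOT=/dev/sda1, serial port)", "x86vbox_sda1",
     "SRC=/android-x86vbox ROOT=/dev/sda1"),
]

def render(entries):
    lines = []
    for label, menu_id, extra in entries:
        lines.append("label " + label)
        lines.append("menu " + menu_id)
        lines.append("kernel x86vbox/kernel")
        lines.append("append %s %s" % (COMMON, extra))
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    # Redirect the output into pxelinux.cfg/default under your TFTP root.
    print(render(ENTRIES))

This keeps the shared options in one place, so adding another boot path (for example, a screen-only variant without the console option) is just another tuple in the list.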
In our PXE boot menu, we can add another installation configuration without the console=ttyS3,115200 option. In that case, all debug output is printed on the screen, which is the default standard output.

Booting to hard disk

We can add another entry, shown as follows, to boot the system from the hard disk after we have installed it using the previous configuration:

label 2. x86vbox (ROOT=/dev/sda1, serial port)
menu x86vbox_sda1
kernel x86vbox/kernel
append ip=dhcp console=ttyS3,115200 initrd=x86vbox/initrd.img androidboot.hardware=x86vbox DEBUG=2 SRC=/android-x86vbox ROOT=/dev/sda1

In the preceding configuration, we use the device /dev/sda1 as root and we do not pass the INSTALL=1 option. With this configuration, the virtual machine boots the Android system from the hard disk /dev/sda1 and the debug output is printed to the virtual serial port. We can add another similar entry that prints the debug output to the screen instead.

Booting to recovery

With the PXE boot menu, we can configure the system to boot to recovery as well, as the following entry shows:

label 5. x86vbox recovery (ROOT=/dev/sda2)
menu x86vbox_recovery
kernel x86vbox/kernel
append ip=dhcp console=ttyS3,115200 initrd=x86vbox/ramdisk-recovery.img androidboot.hardware=x86vbox DEBUG=2 SRC=/android-x86vbox ROOT=/dev/sda2

The difference here is that we use the recovery ramdisk instead of initrd.img. Since recovery is a self-contained environment, we can point the ROOT variable to another partition as well, and we can use recovery to install an OTA update image. With PXE boot, you can explore many different possibilities and play with various boot methods and images. With all of this set up, we can boot to the PXE boot menu shown in the following screenshot, and select an option from it to reach a debug console as shown here.

From the preceding debug output, we can see that the virtual machine obtains the IP address 10.0.2.15 from the DHCP server 10.0.2.2. The NFS root is found at the IP address 192.168.56.1, which is the development host. Two different IP address ranges appear because we use two network interfaces in our configuration: a NAT network interface with the IP address range 10.0.2.x and a host-only network interface with the IP address range 192.168.56.x. The IP address 10.0.2.2 is the address of the development host on the NAT network, while 192.168.56.1 is the address of the development host on the host-only network. In this setup, we use the VirtualBox built-in PXE support, so both the DHCP and TFTP servers are on the NAT network interface. If we used a separate PXE server, both the DHCP and TFTP servers would be on the host-only network interface.

It is possible to boot the Android system from the directory $OUT/system using the NFS filesystem. In that case, we don't need any installation process at all. However, we need to change netd to disable flushing of the routing rules. The change goes in the function flushRules in the following file:

$AOSP/system/netd/server/RouteController.cpp

Without this change, the network connection is reset after the routing rules are flushed. Even without it, we can still use NFS boot to perform the first-stage boot or to install the system to the hard disk, and this alternative already makes our development process much more efficient.

Summary

In this article, you learned a debugging method that combines PXE boot with an NFS root filesystem. This is a common practice in the embedded Linux development world, and we applied a similar setup to Android system development.
As we have seen, this setup makes the development and debugging process more efficient. We can use it to remove the dependency on the bootloader, and we can reduce the time needed to flash or provision the build images to the device. Even though we did all the exploration in VirtualBox, you can reuse the same method in your hardware board development as well. Resources for Article: Further resources on this subject: Setting up Development Environment for Android Wear Applications [article] Practical How-To Recipes for Android [article] Optimizing Games for Android [article]

Top 10 deep learning frameworks

Amey Varangaonkar
25 May 2017
9 min read
Deep learning frameworks are powering the artificial intelligence revolution. Without them, it would be almost impossible for data scientists to deliver the level of sophistication in their deep learning algorithms that advances in computing and processing power have made possible. Put simply, deep learning frameworks make it easier to build deep learning algorithms of considerable complexity. This follows a wider trend that you can see in other fields of programming and software engineering; open source communities are continually developing new tools that simplify difficult tasks and minimize arduous ones. The deep learning framework you choose is ultimately down to what you're trying to do and how you work already. But to get you started, here is a list of 10 of the best and most popular deep learning frameworks being used today.

What are the best deep learning frameworks?

Tensorflow

One of the most popular deep learning libraries out there, TensorFlow was developed by the Google Brain team and open-sourced in 2015. Positioned as a 'second-generation machine learning system', TensorFlow is a Python-based library capable of running on multiple CPUs and GPUs. It is available on all platforms, desktop and mobile. It also has support for other languages such as C++ and R, and it can be used directly to create deep learning models or through wrapper libraries (for example, Keras) on top of it. In November 2017, TensorFlow announced a developer preview of TensorFlow Lite, a lightweight machine learning solution for mobile and embedded devices. The machine learning paradigm is continuously evolving, and the focus is now slowly shifting towards developing machine learning models that run on mobile and portable devices in order to make applications smarter and more intelligent. Learn how to build a neural network with TensorFlow. If you're just starting out with deep learning, TensorFlow is THE go-to framework. It's Python-based, backed by Google, has very good documentation, and there are tons of tutorials and videos available on the internet to guide you. You can check out Packt's TensorFlow catalog here.

Keras

Although TensorFlow is a very good deep learning library, creating models using only TensorFlow can be a challenge, as it is a pretty low-level library and can be quite complex for a beginner to use. To tackle this challenge, Keras was built as a simplified interface for building efficient neural networks in just a few lines of code, and it can be configured to work on top of TensorFlow. Written in Python, Keras is very lightweight, easy to use, and pretty straightforward to learn. For these reasons, TensorFlow has incorporated Keras as part of its core API. Despite being a relatively new library, Keras has very good documentation in place. If you want to know more about how Keras solves your deep learning problems, this interview by our best-selling author Sujit Pal should help you. Read now: Why you should use Keras for deep learning. If you have some knowledge of Python programming and want to get started with deep learning, this is one library you definitely want to check out!

Caffe

Built with expression, speed, and modularity in mind, Caffe is one of the first deep learning libraries, developed mainly by the Berkeley Vision and Learning Center (BVLC). It is a C++ library that also has a Python interface and finds its primary application in modeling convolutional neural networks.
One of the major benefits of using Caffe is that you can get a number of pre-trained networks directly from the Caffe Model Zoo, available for immediate use. If you're interested in modeling CNNs or solving image processing problems, you might want to consider this library. Following in the footsteps of Caffe, Facebook also recently open-sourced Caffe2, a new lightweight, modular deep learning framework that offers greater flexibility for building high-performance deep learning models.

Torch

Torch is a Lua-based deep learning framework that has been used and developed by big players such as Facebook, Twitter, and Google. It makes use of C/C++ libraries as well as CUDA for GPU processing. Torch was built with the aim of achieving maximum flexibility and making the process of building your models extremely simple. More recently, the Python implementation of Torch, called PyTorch, has found popularity and is gaining rapid adoption.

PyTorch

PyTorch is a Python package for building deep neural networks and performing complex tensor computations. While Torch uses Lua, PyTorch leverages the rising popularity of Python to allow anyone with some basic Python programming knowledge to get started with deep learning. PyTorch improves upon Torch's architectural style and does not have any support for containers, which makes the entire deep modeling process easier and more transparent to you. Still wondering how PyTorch and Torch are different from each other? Make sure you check out this interesting post on Quora.

Deeplearning4j

DeepLearning4j (or DL4J) is a popular deep learning framework developed in Java that supports other JVM languages as well. It is very slick and is widely used as a commercial, industry-focused distributed deep learning platform. The advantage of using DL4J is that you can bring together the power of the whole Java ecosystem to perform efficient deep learning, as it can be implemented on top of popular big data tools such as Apache Hadoop and Apache Spark. If Java is your programming language of choice, then you should definitely check out this framework. It is clean, enterprise-ready, and highly effective. If you're planning to deploy your deep learning models to production, this tool can certainly be of great worth!

MXNet

MXNet is one of the deep learning frameworks with the widest language support, covering languages such as R, Python, C++, and Julia. This is helpful because if you know any of these languages, you won't need to step out of your comfort zone at all to train your deep learning models. Its backend is written in C++ and CUDA, and it is able to manage its own memory like Theano. MXNet is also popular because it scales very well and is able to work with multiple GPUs and computers, which makes it very useful for enterprises. This is also one of the reasons why Amazon made MXNet its reference library for deep learning. In November, AWS announced the availability of ONNX-MXNet, an open source Python package to import ONNX (Open Neural Network Exchange) deep learning models into Apache MXNet. Read why MXNet is a versatile deep learning framework here.

Microsoft Cognitive Toolkit

Microsoft Cognitive Toolkit, previously known by its acronym CNTK, is an open-source deep learning toolkit for training deep learning models. It is highly optimized and has support for languages such as Python and C++.
Known for its efficient resource utilization, the Cognitive Toolkit lets you easily implement efficient reinforcement learning models or Generative Adversarial Networks (GANs). It is designed to achieve high scalability and performance, and it is known to provide performance gains compared to other toolkits such as Theano and TensorFlow when running on multiple machines. Here is a fun comparison of TensorFlow versus CNTK, if you would like to know more.

deeplearn.js

Gone are the days when you required serious hardware to run your complex machine learning models. With deeplearn.js, you can now train neural network models right in your browser! Originally developed by the Google Brain team, deeplearn.js is an open-source, JavaScript-based deep learning library that runs on both WebGL 1.0 and WebGL 2.0. deeplearn.js is being used today for a variety of purposes, from education and research to training high-performance deep learning models. You can also run your pre-trained models in the browser using this library.

BigDL

BigDL is a distributed deep learning library for Apache Spark and is designed to scale very well. With the help of BigDL, you can run your deep learning applications directly on Spark or Hadoop clusters by writing them as Spark programs. It has rich deep learning support and uses Intel's Math Kernel Library (MKL) to ensure high performance. Using BigDL, you can also load your pre-trained Torch or Caffe models into Spark. If you want to add deep learning functionality to a massive set of data stored on your cluster, this is a very good library to use.

Editor's Note: We have removed Theano and Lasagne from the original list due to the Theano retirement announcement. RIP Theano! Before TensorFlow, Caffe, or PyTorch came to be, Theano was the most widely used library for deep learning. While it was a low-level library supporting CPU as well as GPU computations, you could wrap it with libraries like Keras to simplify the deep learning process. With the release of version 1.0, it was announced that future development and support for Theano would be stopped. There would be minimal maintenance to keep it working for the next year, after which even the support activities on the library would be suspended completely. "Supporting Theano is no longer the best way we can enable the emergence and application of novel research ideas", said Prof. Yoshua Bengio, one of the main developers of Theano. Thank you Theano, you will be missed! Goodbye Lasagne. Lasagne is a high-level deep learning library that runs on top of Theano. It has been around for quite some time now and was developed with the aim of abstracting the complexities of Theano and providing a friendlier interface for users to build and train neural networks. It requires Python and shares many similarities with Keras, which we just saw above. However, if we are to find differences between the two, Keras is faster and has better documentation in place.

There are many other deep learning libraries and frameworks available for use today – DSSTNE, Apache Singa, and Veles are just a few worth an honorable mention. Which deep learning framework will best suit your needs? Ultimately, it depends on a number of factors. If you want to get started with deep learning, your safest bet is to use a popular Python-based framework such as TensorFlow.
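To give a concrete sense of what getting started with such a framework looks like, here is a minimal, illustrative sketch (not part of the original article) using the Keras API bundled with TensorFlow; the layer sizes and the random training data are invented purely for demonstration:

# A tiny fully connected binary classifier, just to show how compact Keras code is.
import numpy as np
from tensorflow import keras

# Fake data: 1000 samples with 20 features each, and binary labels.
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)

The same model could be written in standalone Keras or, with considerably more code, directly against low-level TensorFlow, which is exactly the trade-off discussed above.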
For seasoned professionals, the efficiency of the trained model, ease of use, speed and resource utilization are all important considerations for choosing the best deep learning framework.

Introduction to Titanic Datasets

Packt
09 May 2017
11 min read
In this article by Alexis Perrier, author of the book Effective Amazon Machine Learning, we see how artificial intelligence and big data have become a ubiquitous part of our everyday lives; cloud-based machine learning services are part of a rising billion-dollar industry. Among the several such services currently available on the market, Amazon Machine Learning stands out for its simplicity. Amazon Machine Learning was launched in April 2015 with a clear goal of lowering the barrier to predictive analytics by offering a service accessible to companies without the need for highly skilled technical resources. (For more resources related to this topic, see here.)

Working with datasets

You cannot do predictive analytics without a dataset. Although we are surrounded by data, finding datasets that are adapted to predictive analytics is not always straightforward. In this section, we present some resources that are freely available. The Titanic dataset is a classic introductory dataset for predictive analytics.

Finding open datasets

There is a multitude of dataset repositories available online, from local and global public institutions to non-profits and data-focused start-ups. Here is a small list of open dataset resources that are well suited for predictive analytics; it is by no means exhaustive. This thread on Quora points to many other interesting data sources: https://www.quora.com/Where-can-I-find-large-datasets-open-to-the-public. You can also ask for specific datasets on Reddit at https://www.reddit.com/r/datasets/.

The UCI Machine Learning Repository is a collection of datasets maintained by UC Irvine since 1987, hosting over 300 datasets related to classification, clustering, regression, and other ML tasks
Mldata.org from the University of Berlin, the Stanford Large Network Dataset Collection, and other major universities also offer great collections of open datasets
Kdnuggets.com has an extensive list of open datasets at http://www.kdnuggets.com/datasets
Data.gov and other US government agencies; data.UN.org and other UN agencies
AWS offers open datasets via partners at https://aws.amazon.com/government-education/open-data/.

The following startups are data centered and give open access to rich data repositories:

Quandl and Quantopian for financial datasets
Datahub.io, Enigma.com, and Data.world are dataset-sharing sites
Datamarket.com is great for time series datasets
Kaggle.com, the data science competition website, hosts over 100 very interesting datasets

AWS public datasets: AWS hosts a variety of public datasets, such as the Million Song Dataset, the mapping of the Human Genome, and the US Census data, as well as many others in astronomy, biology, math, economics, and so on. These datasets are mostly available via EBS snapshots, although some are directly accessible on S3. The datasets are large, from a few gigabytes to several terabytes, and are not meant to be downloaded to your local machine; they are only meant to be accessed from an EC2 instance (take a look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-public-data-sets.html for further details). AWS public datasets are accessible at https://aws.amazon.com/public-datasets/.

Introducing the Titanic dataset

We will use the classic Titanic dataset. The data consists of demographic and traveling information for 1,309 of the Titanic passengers, and the goal is to predict the survival of these passengers.
The full Titanic dataset is available from the Department of Biostatistics at the Vanderbilt University School of Medicine (http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic3.csv) in several formats. The Encyclopedia Titanica website (https://www.encyclopedia-titanica.org/) is the reference website regarding the Titanic. It contains all the facts, history, and data surrounding the Titanic, including a full list of passengers and crew members. The Titanic dataset is also the subject of the introductory competition on Kaggle.com (https://www.kaggle.com/c/titanic, requires opening an account with Kaggle). You can also find a csv version in the GitHub repository at https://github.com/alexperrier/packt-aml/blob/master/ch4.

The Titanic data contains a mix of textual, Boolean, continuous, and categorical variables. It exhibits interesting characteristics such as missing values, outliers, and text variables ripe for text mining--a rich dataset that will allow us to demonstrate data transformations. Here's a brief summary of the 14 attributes:

pclass: Passenger class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival: A Boolean indicating whether the passenger survived or not (0 = No; 1 = Yes); this is our target
name: A field rich in information as it contains title and family names
sex: male/female
age: Age; a significant portion of values are missing
sibsp: Number of siblings/spouses aboard
parch: Number of parents/children aboard
ticket: Ticket number
fare: Passenger fare (British Pound)
cabin: Does the location of the cabin influence chances of survival?
embarked: Port of embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat: Lifeboat, many missing values
body: Body Identification Number
home.dest: Home/destination

Take a look at http://campus.lakeforest.edu/frank/FILES/MLFfiles/Bio150/Titanic/TitanicMETA.pdf for more details on these variables. We have 1,309 records and 14 attributes, three of which we will discard. The home.dest attribute has too few existing values, the boat attribute is only present for passengers who survived, and the body attribute is only present for passengers who did not survive. We will discard these three columns later on using the data schema.

Preparing the data

Now that we have the initial raw dataset, we are going to shuffle it, split it into a training and a held-out subset, and load it to an S3 bucket.

Splitting the data

In order to build and select the best model, we need to split the dataset into three parts: training, validation, and test, with the usual ratios being 60%, 20%, and 20%. The training and validation sets are used to build several models and select the best one, while the test or held-out set is used for the final performance evaluation on previously unseen data. Since Amazon ML does the job of splitting the dataset used for model training and model evaluation into training and validation subsets, we only need to split our initial dataset into two parts: the global training/evaluation subset (80%) for model building and selection, and the held-out subset (20%) for predictions and final model performance evaluation.

Shuffle before you split: If you download the original data from the Vanderbilt University website, you will notice that it is ordered by pclass, the class of the passenger, and by alphabetical order of the name column. The first 323 rows correspond to the 1st class, followed by the 2nd (277) and 3rd (709) class passengers.
It is important to shuffle the data before you split it so that all the different variables have similar distributions in the training and held-out subsets. You can shuffle the data directly in the spreadsheet by creating a new column, generating a random number for each row, and then ordering by that column. On GitHub: You will find an already shuffled titanic.csv file at https://github.com/alexperrier/packt-aml/blob/master/ch4/titanic.csv. In addition to shuffling the data, we have removed punctuation in the name column: commas, quotes, and parentheses, which can add confusion when parsing a csv file. We end up with two files: titanic_train.csv with 1047 rows and titanic_heldout.csv with 263 rows. These files are also available in the GitHub repo (https://github.com/alexperrier/packt-aml/blob/master/ch4). The next step is to upload these files to S3 so that Amazon ML can access them.

Loading data on S3

AWS S3 is one of the main AWS services dedicated to hosting files and managing their access. Files in S3 can be public and open to the internet, or have access restricted to specific users, roles, or services. S3 is also used extensively by AWS itself for operations such as storing log files or results (predictions, scripts, queries, and so on). Files in S3 are organized around the notion of buckets. Buckets are placeholders with unique names, similar to domain names for websites. A file in S3 has a unique locator URI: s3://bucket_name/{path_of_folders}/filename. The bucket name is unique across S3. In this section, we will create a bucket for our data, upload the Titanic training file, and open its access to Amazon ML. Go to https://console.aws.amazon.com/s3/home and open an S3 account if you don't have one yet.

S3 pricing: S3 charges for the total volume of files you host, and the cost of file transfers depends on the region where the files are hosted. At the time of writing, for less than 1TB, AWS S3 charges $0.03/GB per month in the US East region. All S3 prices are available at https://aws.amazon.com/s3/pricing/. See also http://calculator.s3.amazonaws.com/index.html for the AWS cost calculator.

Creating a bucket

Once you have created your S3 account, the next step is to create a bucket for your files. Click on the Create bucket button, then:

Choose a name and a region. Since bucket names are unique across S3, you must choose a name for your bucket that has not already been taken. We chose the name aml.packt for our bucket, and we will use this bucket throughout. Regarding the region, you should always select the region closest to the person or application accessing the files in order to reduce latency and prices.
Set Versioning, Logging, and Tags. Versioning keeps a copy of every version of your files, which prevents accidental deletions. Since versioning and logging incur extra costs, we chose to disable them.
Set permissions.
Review and save.

Loading the data

To upload the data, simply click on the upload button and select the titanic_train.csv file we created earlier. You should, at this point, have the training dataset uploaded to your AWS S3 bucket. We added a /data folder in our aml.packt bucket to compartmentalize our objects. It will be useful later on when the bucket also contains folders created by S3. At this point, only the owner of the bucket (you) is able to access and modify its contents. We need to grant the Amazon ML service permissions to read the data and add other files to the bucket.
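Before turning to permissions, here is an optional, scripted alternative to the manual steps above. This short Python sketch is my own illustration rather than part of the book: it shuffles the raw titanic3.csv with pandas, writes the 80/20 train and held-out files, and uploads the training file to the bucket with boto3. The file names and the aml.packt bucket match the ones used in this article, and the script assumes pandas and boto3 are installed and that your AWS credentials are already configured:

# prepare_titanic.py - illustrative shuffle/split/upload script.
import boto3
import pandas as pd

BUCKET = "aml.packt"        # the bucket created above; bucket names are globally unique
RAW_FILE = "titanic3.csv"   # raw file downloaded from the Vanderbilt website

df = pd.read_csv(RAW_FILE)

# Shuffle the rows so that classes are evenly distributed across the two subsets.
df = df.sample(frac=1, random_state=42).reset_index(drop=True)

# 80% for training/evaluation in Amazon ML, 20% held out for final evaluation.
split = int(len(df) * 0.8)
df.iloc[:split].to_csv("titanic_train.csv", index=False)
df.iloc[split:].to_csv("titanic_heldout.csv", index=False)

# Upload the training file into the /data folder of the bucket.
s3 = boto3.client("s3")
s3.upload_file("titanic_train.csv", BUCKET, "data/titanic_train.csv")
print("Uploaded titanic_train.csv to s3://%s/data/" % BUCKET)

Note that pandas quotes any value containing a comma when writing the csv, which keeps the files compatible with the formatting rules described later in this article.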
When creating the Amazon ML datasource, we will be prompted to grant these permissions in the Amazon ML console. We can also modify the bucket's policy upfront.

Granting permissions

We need to edit the policy of the aml.packt bucket. To do so, we have to perform the following steps:

Click into your bucket.
Select the Permissions tab.
In the drop down, select Bucket Policy as shown in the following screenshot. This will open an editor.
Paste in the following JSON. Make sure to replace {YOUR_BUCKET_NAME} with the name of your bucket and save:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AmazonML_s3:ListBucket",
      "Effect": "Allow",
      "Principal": { "Service": "machinelearning.amazonaws.com" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}",
      "Condition": { "StringLike": { "s3:prefix": "*" } }
    },
    {
      "Sid": "AmazonML_s3:GetObject",
      "Effect": "Allow",
      "Principal": { "Service": "machinelearning.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}/*"
    },
    {
      "Sid": "AmazonML_s3:PutObject",
      "Effect": "Allow",
      "Principal": { "Service": "machinelearning.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}/*"
    }
  ]
}

Further details on this policy are available at http://docs.aws.amazon.com/machine-learning/latest/dg/granting-amazon-ml-permissions-to-read-your-data-from-amazon-s3.html. Once again, this step is optional, since Amazon ML will prompt you for access to the bucket when you create the datasource.

Formatting the data

Amazon ML works on comma-separated values files (.csv)--a very simple format where each row is an observation and each column is a variable or attribute. There are, however, a few conditions that should be met:

The data must be encoded in plain text using a character set such as ASCII, Unicode, or EBCDIC
All values must be separated by commas; if a value contains a comma, it should be enclosed in double quotes
Each observation (row) must be smaller than 100k

There are also conditions regarding the end-of-line characters that separate rows. Special care must be taken when using Excel on OS X (Mac), as explained on this page: http://docs.aws.amazon.com/machine-learning/latest/dg/understanding-the-data-format-for-amazon-ml.html

What about other data file formats? Unfortunately, Amazon ML datasources are only compatible with csv files and Redshift databases, and do not accept formats such as JSON, TSV, or XML. However, other services such as Athena, a serverless database service, do accept a wider range of formats.

Summary

In this article, we learned how to work with datasets using Amazon Web Services and the Titanic dataset. We also learned how to prepare data and use the Amazon S3 service. Resources for Article: Further resources on this subject: Processing Massive Datasets with Parallel Streams – the MapReduce Model [article] Processing Next-generation Sequencing Datasets Using Python [article] Combining Vector and Raster Datasets [article]

Active Directory Domain Services 2016

Packt
09 May 2017
23 min read
In this article, by Dishan Francis, the author of the book Mastering Active Directory, we will see AD DS features, privileged access management, time based group memberships. Microsoft, released Active Directory domain services 2016 at a very interesting time in technology. Today identity infrastructure requirements for enterprise are challenging, most of the companies uses cloud services for their operations (Software as a Service—SaaS) and lots moved infrastructure workloads to public clouds. (For more resources related to this topic, see here.) AD DS 2016 features Active Directory domain service (AD DS) improvements are bind with its forest and domain functional levels. Upgrading operating system or adding domain controllers which runs Windows Server 2016 to existing AD infrastructure not going to upgrade forest and domain functional levels. In order to use or test these new AD DS 2016 features you need to have forest and domain function levels set to Windows Server 2016. The minimum forest and domain functional levels you can run on your identity infrastructure depend on the lowest domain controller version running. For example, if you have Windows Server 2008 domain controller in your infrastructure, even though you add Windows Server 2016 domain controller, the domain and forest functional level need to maintain as Windows Server 2008 until last Windows Server 2008 demote from the infrastructure. Privileged access management Privileged access management (PAM) is one of the best topics which is discussed on presentations, tech shows, IT forums, IT groups, blogs and meetings for last few years (after 2014) around identity management. It has become a trending topic especially after the Windows Server 2016 previews released. For last year, I was travelling to countries, cities and had involved with many presentations, discussions about PAM.  First of all, this is not a feature that you can enable with few clicks. It is a combination of many technologies and methodologies which came together and make a workflow or in other words way of living for administrators. AD DS 2016 includes features and capabilities that support PAM in infrastructure but it is not the only thing. This is one of the greatest challenge I see about this new way of thinking and new way of working. Replacing a product is easy but changing a process is more complicated and challenging.   I started my career with one of the largest north American hosting company around 2003. I was a system administrator that time and one of my tasks was to identify hacking attempts and prevent workloads getting compromised. In order to do that I had to review lot of logs on different systems. But around that time most of the attacks from individual or groups were to put names on websites and prove that they can hack websites. Average hacking attempts per server was around 20 to 50 per day. Some collocation customers were even running their websites, workloads without any protection (even though not recommended). But as the time goes year by year number of attempts were dramatically increased and we start to talk about hundreds of thousands attempts per day. The following graph is taken from latest Symantec Internet Security Threat Report (2016) and it confirms number of web-based attacks increased by more than 117% from year 2014.  Web attacks blocked per month (Source - Symantec Internet Security Threat Report (2016)) It has not only changed the numbers, it also changed the purpose of attacks. 
As I said in earlier days it was script kiddies who were after fame. Then later as users started to use more and more online services, purpose of attacks changed to financial values. Attackers started to focus on websites which stores credit card information. For last 10 years, I had to change my credit card 4 times as my credit card information were exposed along with the websites I had used it with. These type of attacks are still happening in the industry.  When considering the types of threats after the year 2012, most of the things changed. Instead of fame or financial, attackers started to target identities. In earlier days, the data about a person were in different formats. For example, when I used to walk into my medical center 15 years ago, before seeing the doctor, administration staff had to go and find the file containing my name. They had number of racks filled with files and papers which included patient records, treatment history, test reports, and so on. But now things have changed, when I walk in, no one in administration need to worry about the file. Doctor can see all my records from his computer screen with few clicks. So, the data is being transformed into the digital format. More and more data about people is transforming into digital formats. In that health system, I become an identity and my identity is attached to the data and also to a certain privileges. Think about your bank, online banking system. You got your own username and password to type in, when you log in to the portal. So, you have your own identity in the bank system. Once you log in, you can access all your accounts, transfer money, make payments. Bank has granted some privileges to your identity. With your privileges, you cannot look into your neighbor’s bank account. But your bank manager can view your account and your neighbor’s account too. That means the privileges attached to the bank manager’s identity is different. Amount of data which can be retrieved from systems are dependent on the identity privileges. Not only that, some of these identities are integrated with different systems. Industries use different systems related to their operations. It can be email system, CMS or billing system. Each of these systems hold data. To make operations smooth these systems are integrated with one identity infrastructure and provides single sign-on experience instead of using different identities for each and every application. It is making identities more and more powerful within any system. For an attacker, what is more worth? To focus on one system or target on identity which is attached to data and privileges to many different systems? Which one can make more damage? If the identity which is the target, has more privileged access to the systems, its a total disaster. Is it all about usernames, passwords or admin accounts? No it's not, identities can make more damage than that. Usernames and passwords are just making it easy. Just think about the recent world famous cyber-attacks. Back in July 2015, a group called The Impact Team threatened to expose user account information of Ashley Madison dating site, if its parent company Avid Life Media didn't shut down the Ashley Madison and Established Men websites completely. For example, Ashley Madison website hack, is it that the financial value made it more dangerous? It was the identities which made damages to people’s lives. It was just enough to expose the names and make someones life to be humiliated. 
It ruined families and children lost their parents love and care. It proves it’s not only about permissions attached to an identity, individual identities itself are more important in modern big data phenomenon. It’s only been few months from the USA presidential election and by now we can see how much news it can make with a single tweet. It wasn’t needed to have special privileges to do a tweet, it was the identity which made that tweet important. In other hand if that twitter account got hacked and someone tweeted something fake on behalf of the actual person who owns it, what kind of damage it can make to whole world? In order to do that, does it need to hack the Jack Dorsey’s account? Value  of individual identity is more powerful than twitter CEO. According to following latest reports, it shows that majority of information exposed by identity attacks, are people names, addresses, medical reports, and government identity numbers. Source - Symantec Internet Security Threat Report (2016) The attacks targeted on identities are rising day by day. The following graph shows the number of identities been exposed, compared to the number of incidents. Source - Symantec Internet Security Threat Report (2016) In December 2015, there were only 11 incidents and 195 million identities were exposed. It shows how much damage these types of attacks can make.  Each and every time this kind of attack happens, most common answers from engineers are “Those attacks were so sophisticated”, “It was too complex to identify”, “They were so clever”, “It was zero-day attack”. Is that really true?  Zero-days attacks are based on unknown system bugs, errors to vendors. Latest reports show the average time of explores are less than 7 days and 1 day to release to patch. Source - Symantec Internet Security Threat Report (2016) Microsoft Security Intelligence Report Volume 21 | January through June, 2016 report contains the following figure which explains the complexity of the vulnerabilities. It clearly shows the majority of the vulnerabilities are less complex to exploit. High complexity vulnerabilities are still less than 5% from total vulnerability disclosures. It proves the attackers are still after low hanging fruits. Source: Microsoft Security Intelligence Report Volume 21 | January through June, 2016 Microsoft Active Directory is the leader in identity infrastructure solution provider. With all this constant news about identity breaches, Microsoft Active Directory name also appears. Then people start to question why Microsoft can’t fix it? But if you analyse these problems, it’s obvious that just providing technology rich product is not enough to solve these issues. With each and every new server operating system version, Microsoft releases new Active Directory version. Every time it contains new features to improve the identity infrastructure security. But when I go for the Active Directory released project, I see a majority of engineers not even following the security best practices defined by 10 years’ older Active Directory version. Think about a car race, its categories are usually based on the engine power. It can be 1800cc, 2000cc or more. In the race, most of the time it's the same models and same manufactured cars. If it's same manufacture, and if it's same engine capacity how one can win and the other lose? It’s the car tuning and the driving skills which decide a winner and loser. 
If Active Directory domain service 2016 can fix all the identity threats that’s really good but giving a product or technology doesn’t seem to be work so far. That’s why we need to change the way we think towards identity infrastructure security. We should not forget we are fighting against human adversaries. The tactics, methods, approaches they use, are changing every day. The products we use, do not have such frequent updates but we can change their ability to execute an attack on infrastructure by understanding fundamentals and use the products, technologies, workflows to prevent it. Before we move into identity theft prevention mechanism let’s look into typical identity infrastructure attack. Microsoft Tiered administration model is based on three tiers. All these identity attacks are starting with gaining some kind of access to the identity infrastructure and then move laterally until they have keys to the kingdom which is domain admin or enterprise administrator credentials. Then they have full ownership of entire identity infrastructure. As the preceding diagram shows that the first step on identity attack, is to get some kind of access to the system. They do not target domain admin or enterprise admin account first. Getting access to a typical user account is much easier than domain admin account. All they need is some kind of beach head. For this, still the most common attack technique is to send out phishing email. It’s typical that someone will still fall for that and click on it. Now they have some sort of access to your identity infrastructure and next step is to start moving laterally to gain more privileges. How many of you completely eliminated local administrator accounts in your infrastructure? I’m sure the answer will be almost none. Sometimes, users are asked for software installations, system level modifications frequently in their systems and most of the time engineers are ending up assigning local administrator privileges. If the compromised account used to be local administrator its becomes extremely easy to move to the next level. If not, they will make systems to misbehave. Then who will come to the rescue? It's the super powered IT help-desk peoples. In lots of organizations, IT help-desk engineers are domain administrators. If not at least local administrators to the systems. So, once they receive the call about a misbehaving computer, they RDP or login locally using the privileged account. If you are using RDP, it always sends your credentials via clear text. If the attacker is running any password harvesting tool it's extremely easy to capture the credentials. You may think if account (which is compromised) is a typical user account how it can execute such programs. But Windows operating systems are not preventing users from running any application on its user context. It will not allow to change any system level settings but it will still allow to run scripts or user level executable. Once they gain access to some identity in organization, the next level of privileges to own will be Tier 1. This is where the application administrators, data administrators, SaaS application administrators accounts live. In today's infrastructures, we have too many administrators. Primarily we have domain admins, enterprise administrators, then we have local administrators. Different applications running on the infrastructure have its own administrators such as exchange administrators, SQL administrators, and SharePoint administrators. 
The other third-party applications such as CMS, billing portal may have its own administrators. If you are using cloud services, SaaS applications, it has another set of administrators. Are we really aware of activities happening on these accounts? Mostly engineers are only worrying about protecting domain admin accounts, but at the same time forgetting about the other kinds of administrators in the infrastructure. Some of these administrator roles can make more damage than domain admin to a business. These application and services are decentralizing the management in the organization. In order to move latterly with privileges, these attackers only need to log into a machine or server where these administrators used to log in.  Local Security Authority Subsystem Service(LSASS) stores credentials in its memory for active Windows sessions. This prevents users from entering credentials for each and every service they access. This also stores Kerberos tickets. This allows attackers to perform a pass of the hash attack and retrieve locally stored credentials. Decentralized management of admin accounts make this process easier. There are features, security best practices which can be used to prevent the pass of the hash attacks in identity infrastructure.  Another problem with these types of accounts is once it becomes service admin accounts, eventually its becomes domain admin or enterprise administrator accounts. I have seen engineers created service accounts and when they can’t figure out the exact permission required for the program, as an easy fix it will add to the domain admin group. It’s not only the infrastructure attack that can expose such credentials. Service admins are attached to the application too, compromise on application can also expose the identities. In such scenario, it will be easier for attackers to gain keys to the kingdom.  Tier 0 is where the domain admin, enterprise admins operates. This is what the ultimate goal for identity infrastructure attack, once they obtain access to Tier 0, it means they own your entire identity infrastructure. Latest reports show once there is initial breach, it only takes less than 48 hours to gain Tier 0 privileges. According to the reports, once they gain access it will take up to 7-8 months minimum to identify the breach. Because once they have highest privileges they can make backdoors, clean up logs and hide forever if needed. Systems we use, always treat administrators as trustworthy people. It’s no longer valid statement for modern world. How many times you check systems logs to see what your domain admins are doing? Even though engineers look for the logs for other users, majority rarely check about domain admin accounts. The same thing applies for internal security breach too, as I said most people are good but you never know. Most of world famous identity attacks have proved that already. When I have discussion with engineers and customers about identity infrastructure security, following are the common comments I hear, "We have too many administrator accounts" "We do not know how many administrator account we got" "We got fast changing IT teams, so it’s hard to manage permissions" "We do not have visibility over administrator accounts activities" "If there is identity infrastructure breach or attempt, how do we identify?" Answer for all of these is PAM. As I said in the beginning, this is not one product. It’s a workflow and a new way of working. 
The main components of this process are as follows:

Apply pass-the-hash prevention features to the existing identity infrastructure.
Install Microsoft Advanced Threat Analytics to monitor domain controller traffic and identify potential identity infrastructure threats in real time.
Install and configure Microsoft Identity Manager 2016: this product allows you to manage privileged access to an existing Active Directory forest by providing task-based, time-limited privileged access.

What does this have to do with AD DS 2016? AD DS 2016 now allows time-based group membership, which makes this whole process possible. Users are added to groups with a TTL value, and once it expires, the user is removed from the group automatically. For example, let's assume your CRM application has administrator rights assigned to the CRM Admin security group. The users in this group only log into the system once a month to do some maintenance, but the admin rights for the members of that group remain in place for the other 29 days, 24x7. This gives attackers plenty of opportunity to try to gain access to those privileged accounts. If the admin rights can be limited to just the day they are needed, isn't that more useful? Then we know that for the majority of days in the month, the CRM application is not at risk of being compromised through an account in the CRM Admin group.

What is the logic behind PAM? The PAM product is built on the Just-In-Time (JIT) administration concept. Back in 2014, Microsoft released a PowerShell toolkit that enables Just-Enough-Administration (JEA). Let's assume you are running a web server in your infrastructure. As part of its operation, every month you need to collect some logs to make a report, and you have already set up a PowerShell script for it. Someone in your team needs to log into the system and run it, which requires administrative privileges. Using JEA, it is possible to assign the user just the permissions required to run that particular program. That way, the user doesn't need to be added to the domain admin group, is not allowed to run any other program with the assigned permission, and the permission does not apply to any other computer either. JIT administration is bound to time: users have the required privileges only when they need them and do not hold privileged access rights all the time.

PAM operations can be divided into 4 major steps (Source - https://docs.microsoft.com/en-gb/microsoft-identity-manager/pam/privileged-identity-management-for-active-directory-domain-services):

Prepare: The first step is to identify the privileged access groups in your existing Active Directory forest and start to remove users from them. You may also need to make certain changes in your application infrastructure to support this setup. For example, if you assign privileged access to user accounts instead of security groups (in applications or services), this will need to change. The next step is to set up equivalent groups in the bastion forest without any members. When you set up MIM, it uses a bastion forest to manage privileged access in the existing Active Directory forest. This is a special forest and cannot be used for other infrastructure operations; it runs with a minimum of the Windows Server 2012 R2 Active Directory forest functional level. When an identity infrastructure is compromised and attackers gain access to Tier 0, they can hide their activities for months or years. How can we be sure our existing identity infrastructure is not already compromised?
If we implemented PAM in the same forest, it would not achieve its core goals. Also, domain upgrades are painful; they need time and budget. But because of the bastion forest, this solution can be applied to your existing identity infrastructure with minimal changes.

Protect: The next step is to set up a workflow for authentication and authorization. Define how users can request privileged access when they require it. It can be via the MIM portal or an existing support portal (with the MIM REST API integrated). It is possible to configure the system to use Multi-Factor Authentication (MFA) during this request process to prevent any unauthorized activity. It is also important to define how the requests will be handled; it can be an automatic or a manual approval process.

Operate: Once a privileged access request is approved, the user account is added to the security group in the bastion forest. The group itself has a SID value, and in both forests the group has the exact same SID value. Therefore, the application or service will not see a difference between the two groups in the two different forests. Once the permission is granted, it is only valid for the time defined by the authorization policy. Once it reaches the time limit, the user account is removed from the security group automatically.

Monitor: PAM provides visibility over the privileged access requests. Every request and event is recorded, and it is possible to review them and generate reports for audit purposes. This helps to fine-tune the process and also to identify potential threats.

Let's see how it really works. REBELADMIN CORP. uses a CRM system for its operations. The application has an administrator role with the REBELADMIN/CRMAdmins security group assigned to it; any member of that group has administrator privileges in the application. Recently, PAM was introduced to REBELADMIN CORP. As an engineer, I have identified REBELADMIN/CRMAdmins as a privileged group and am going to protect it using PAM. The first step is to remove the members of the REBELADMIN/CRMAdmins group. After that, I set up the same group in the bastion forest. Not only is the name the same, but both groups also have the same SID value, 1984.

User Dennis used to be a member of the REBELADMIN/CRMAdmins group and was running the monthly report. At the end of the month, he tried to run it and found that he no longer has the required permissions. His next step is to request the required permission via the MIM portal. According to the policies, as part of the request the system asks Dennis to use MFA. Once Dennis verifies the PIN number, the request is logged in the portal. As the administrator, I receive an alert about the request and log into the system to review it. It is a legitimate request, and I approve his access to the system for 8 hours. The system then automatically adds Dennis's user account to the BASTION/CRMAdmins group. This group has the same SID value as the production group, so a member of BASTION/CRMAdmins is treated as an administrator by the CRM application. This group membership also carries a TTL value: 8 hours after approval, Dennis's account is automatically removed from the BASTION/CRMAdmins group. In this process, we didn't add any member to the production security group, REBELADMIN/CRMAdmins, so the production forest stays untouched and protected. The most important thing to understand here is that the legacy approach to identity protection is no longer valid. We are up against human adversaries.
Identity is our new perimeter in the infrastructure, and to protect it we need to understand how adversaries operate and stay a step ahead. The new PAM with AD DS 2016 is a new approach in the right direction.

Time based group memberships

Time based group membership is part of that broader topic. It allows administrators to assign temporary group membership, which is expressed by a Time-To-Live (TTL) value. This value is added to the Kerberos ticket. This is also called the Expiring-Link feature. When a user is assigned a temporary group membership, the lifetime of his Kerberos ticket-granting ticket (TGT) at login will be equal to the lowest TTL value he has. For example, let's assume you granted user A temporary membership of the domain admin group, valid for only 60 minutes. If the user logs in 50 minutes after the original assignment, he only has 10 minutes left as a member of the domain admin group, so the domain controller will issue user A a TGT that is only valid for 10 minutes.

This feature is not enabled by default. The reason is that, to use it, the forest functional level must be Windows Server 2016. Also, once this feature is enabled, it cannot be disabled.

Let's see how it works in the real world. I have a Windows domain controller installed, and it is running with the Windows Server 2016 forest functional level. This can be verified using the following PowerShell command:

Get-ADForest | fl Name,ForestMode

Then we need to enable the Expiring-Link feature. It can be enabled using the following command:

Enable-ADOptionalFeature 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target rebeladmin.com

The rebeladmin.com value can be replaced with your FQDN. I have a user called Adam Curtiss to whom I need to assign Domain Admins group membership for 60 minutes:

Get-ADGroupMember "Domain Admins"

The preceding command lists the current members of the Domain Admins group. The next step is to add the user Adam Curtiss to the Domain Admins group for 60 minutes:

Add-ADGroupMember -Identity 'Domain Admins' -Members 'acurtiss' -MemberTimeToLive (New-TimeSpan -Minutes 60)

Once it has run, we can verify the remaining TTL value for the group membership using the following command:

Get-ADGroup 'Domain Admins' -Property member -ShowMemberTimeToLive

Once I log in as the user and list the Kerberos tickets, the renew time shows less than 60 minutes, as I logged in a few minutes after the grant. Once the TGT renewal comes around, the user will no longer be a member of the Domain Admins group.

Summary

In this article, we looked at the new features and enhancements that come with AD DS 2016. One of the biggest improvements is Microsoft's new approach towards PAM. This is not just a feature that can be enabled via AD DS; it is part of a broader solution. It helps to protect identity infrastructures from adversaries, as traditional techniques and technologies are no longer valid against rising threats. Resources for Article: Further resources on this subject: Deploying and Synchronizing Azure Active Directory [article] How to Recover from an Active Directory Failure [article] Active Directory migration [article]

Building a Strong Foundation

Packt
04 May 2017
28 min read
In this article, by Mickey Macdonald, author of the book Mastering C++ Game Development, we will cover how a handful of libraries can work together and build some of the helper libraries needed to round out the engine's structure. (For more resources related to this topic, see here.) To get started, we will focus on arguably one of the most important aspects of any game project: the rendering system. A proper, performant implementation not only takes a significant amount of time, it also requires specialized knowledge of video driver implementations and mathematics for computer graphics. Having said that, it is not, in fact, impossible to create a custom low-level graphics library yourself; it is just not overly recommended if your end goal is simply to make video games. So instead of creating a low-level implementation themselves, most developers turn to a few different libraries that provide abstracted access to the bare metal of the graphics device. We will be using a few different graphics APIs to help speed up the process and provide coherence across platforms. These APIs include the following: OpenGL (https://www.opengl.org/): The Open Graphics Library (OpenGL) is an open cross-language, cross-platform application programming interface, or API, used for rendering 2D and 3D graphics. The API provides low-level access to the graphics processing unit (GPU). SDL (https://www.libsdl.org/): Simple DirectMedia Layer (SDL) is a cross-platform software development library designed to deliver a low-level hardware abstraction layer to multimedia hardware components. While it does provide its own mechanism for rendering, SDL can use OpenGL to provide full 3D rendering support. While these APIs save us time and effort by providing some abstraction when working with the graphics hardware, it will quickly become apparent that the level of abstraction is not high enough. You will need another layer of abstraction to create an efficient way of reusing these APIs in multiple projects. This is where the helper and manager classes come in. These classes will provide the needed structure and abstraction for us and other coders. They will wrap all the common code needed to set up and initialize the libraries and hardware. The code that is required by any project, regardless of gameplay or genre, can be encapsulated in these classes and will become part of the "engine." In this article, we will cover the following topics: Building helper classes, Encapsulation with managers, Creating interfaces. Building helper classes In object-oriented programming, a helper class is used to assist in providing some functionality that is not directly the main goal of the application in which it is used. Helper classes come in many forms and are often a catch-all term for classes that provide functionality outside of the current scope of a method or class. Many different programming patterns make use of helper classes, and in our examples we too will make heavy use of them. Here is just one example. Let's take a look at the very common set of steps used to create a window. It's safe to say that most of the games you will create will have some sort of display, and the setup will generally be the same across different targets, in our case Windows and macOS. Having to retype the same instructions over and over for each new project seems like a waste. That sort of situation is perfect for abstracting away in a helper class that will eventually become part of the engine itself. 
The code below is the header for the Window class included in the demo code examples. To start, we have a few necessary includes, SDL, glew which is a Window creation helper library, and lastly, the standard string class is included: #pragma once #include <SDL/SDL.h> #include <GL/glew.h> #include <string> Next, we have an enum WindowFlags. We use this for setting some bitwise operations to change the way the window will be displayed; invisible, full screen, or borderless. You will notice that I have wrapped the code in the namespace BookEngine, this is essential for keeping naming conflicts from happening and will be very helpful once we start importing our engine into projects: namespace BookEngine { enum WindowFlags //Used for bitwise passing { INVISIBLE = 0x1, FULLSCREEN = 0x2, BORDERLESS = 0x4 }; Now we have the Window class itself. We have a few public methods in this class. First the default constructor and destructor. It is a good idea to include a default constructor and destructor even if they are empty, as shown here, despite the compiler, including its own, these specified ones are needed if you plan on creating intelligent or managed pointers, such as unique_ptr, of the class: class Window { public: Window(); ~Window(); Next we have the Create function, this function will be the one that builds or creates the window. It takes a few arguments for the creation of the window such as the name of the window, screen width and height, and any flags we want to set, see the previously mentioned enum. void Create(std::string windowName, int screenWidth, int screenHeight, unsigned int currentFlags); Then we have two getter functions. These functions will just return the width and height respectively: int GetScreenWidth() { return m_screenWidth; } int GetScreenHeight() { return m_screenHeight; } The last public function is the SwapBuffer function; this is an important function that we will take a look at in more depth shortly. void SwapBuffer(); To close out the class definition, we have a few private variables. The first is a pointer to a SDL_Window* type, named appropriate enough m_SDL_Window. Then we have two holder variables to store the width and height of our screen. This takes care of the definition of the new Window class, and as you can see it is pretty simple on face value. It provides easy access to the creation of the Window without the developer calling it having to know the exact details of the implementation, which is one aspect that makes Object Orientated Programming and this method is so powerful: private: SDL_Window* m_SDL_Window; int m_screenWidth; int m_screenHeight; }; } To get a real sense of the abstraction, let's walk through the implementation of the Window class and really see all the pieces it takes to create the window itself. #include "Window.h" #include "Exception.h" #include "Logger.h" namespace BookEngine { Window::Window() { } Window::~Window() { } The Window.cpp files starts out with the need includes, of course, we need to include Window.h, but you will also note we need to include the Exception.h and Logger.h header files also. These are two other helper files created to abstract their own processes. The Exception.h file is a helper class that provides an easy-to-use exception handling system. The Logger.h file is a helper class that as its name says, provides an easy-to-use logging system. After the includes, we again wrap the code in the BookEngine namespace and provide the empty constructor and destructor for the class. 
The Create function is the first to be implemented. In this function are the steps needed to create the actual window. It starts out setting the window display flags using a series of if statements to create a bitwise representation of the options for the window. We use the enum we created before to make this easier to read for us humans. void Window::Create(std::string windowName, int screenWidth, int screenHeight, unsigned int currentFlags) { Uint32 flags = SDL_WINDOW_OPENGL; if (currentFlags & INVISIBLE) { flags |= SDL_WINDOW_HIDDEN; } if (currentFlags & FULLSCREEN) { flags |= SDL_WINDOW_FULLSCREEN_DESKTOP; } if (currentFlags & BORDERLESS) { flags |= SDL_WINDOW_BORDERLESS; } After we set the window's display options, we move on to using the SDL library to create the window. As I mentioned before, we use libraries such as SDL to help us ease the creation of such structures. We start out wrapping these function calls in a Try statement; this will allow us to catch any issues and pass it along to our Exception class as we will see soon: try { //Open an SDL window m_SDL_Window = SDL_CreateWindow(windowName.c_str(), SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, flags); The first line sets the private member variable m_SDL_Window to a newly created window using the passed in variables, for the name, width, height, and any flags. We also set the default window's spawn point to the screen center by passing the SDL_WINDOWPOS_CENTERED define to the function. if (m_SDL_Window == nullptr) throw Exception("SDL Window could not be created!"); After we have attempted to create the window, it is a good idea to check and see if the process did succeed. We do this with a simple if statement and check to see if the variable m_SDL_Window is set to a nullptr; if it is, we throw an Exception. We pass the Exception the string "SDL Window could not be created!". This is the error message that we can then print out in a catch statement. Later on, we will see an example of this. Using this method, we provide ourselves some simple error checking. Once we have created our window and have done some error checking, we can move on to setting up a few other components. One of these components is the OpenGL library which requires what is referred to as a context to be set. An OpenGL context can be thought of as a set of states that describes all the details related to the rendering of the application. The OpenGLcontext must be set before any drawing can be done. One problem is that creating a window and an OpenGL context is not part of the OpenGL specification itself. What this means is that every platform can handle this differently. Luckily for us, the SDL API again abstracts the heavy lifting for us and allows us to do this all in one line of code. We create a SDL_GLContext variable named glContext. We then assign glContext to the return value of the SDL_GL_CreateContext function that takes one argument, the SDL_Window we created earlier. After this we, of course, do a simple check to make sure everything worked as intended, just like we did earlier with the window creation: //Set up our OpenGL context SDL_GLContext glContext = SDL_GL_CreateContext(m_SDL_Window); if (glContext == nullptr) throw Exception("SDL_GL context could not be created!"); The next component we need to initialize is GLEW. Again this is abstracted for us to one simple command, glewInit(). This function takes no arguments but does return an error status code. 
We can use this status code to perform a similar error check to the ones we did for the window and the OpenGL context, this time checking it against the defined GLEW_OK value. If it evaluates to anything other than GLEW_OK, we throw an Exception to be caught later on. //Set up GLEW (optional) GLenum error = glewInit(); if (error != GLEW_OK) throw Exception("Could not initialize glew!"); Now that the needed components are initialized, it is a good time to log some information about the device running the application. You can log all kinds of data about the device, which can provide valuable insights when trying to track down obscure issues. In this case, I am polling the system for the version of OpenGL that is running the application and then using the Logger helper class to print this out to a "Runtime" text file: //print some log info std::string versionNumber = (const char*)glGetString(GL_VERSION); WriteLog(LogType::RUN, "*** OpenGL Version: " + versionNumber + "***"); Now we set the clear color, or the color that will be used to refresh the graphics card. In this case, it will be the background color of our application. The glClearColor function takes four float values that represent the red, green, blue, and alpha values in a range of 0.0 to 1.0. Alpha is the transparency value, where 1.0f is opaque and 0.0f is completely transparent. //Set the background color to blue glClearColor(0.0f, 0.0f, 1.0f, 1.0f); The next line sets the VSYNC value, which is a mechanism that will attempt to match the application's framerate to that of the physical display. The SDL_GL_SetSwapInterval function takes one argument, an integer that can be 1 for on or 0 for off. //Enable VSYNC SDL_GL_SetSwapInterval(1); The last two lines that make up the try statement block enable blending and set the method used when performing alpha blending. For more information on these specific functions, check out the OpenGL development documents: //Enable alpha blend glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); } After our try block, we now have to include the catch block or blocks. This is where we will capture any of the thrown errors that have occurred. In our case, we are just going to grab all the Exceptions. We use the WriteLog function from the Logger helper class to add the exception message, e.reason, to the error log text file. This is a very basic case, but of course we could do more here, possibly even recover from an error if possible. catch (Exception e) { //Write Log WriteLog(LogType::ERROR, e.reason); } } Finally, the last function in the Window.cpp file is the SwapBuffer function. Without going too deep into the implementation, what swapping buffers does is exchange the front and back buffers of the GPU, which, in a nutshell, allows smoother drawing to the screen. It is a complicated process that again has been abstracted by the SDL library. Our SwapBuffer function abstracts this process once more, so that when we want to swap the buffers we simply call SwapBuffer instead of having to call the SDL function and specify the window, which is exactly what the function does. void Window::SwapBuffer() { SDL_GL_SwapWindow(m_SDL_Window); } } So as you can see, building up these helper functions can go a long way in making the process of development and iteration much quicker and simpler. Before moving on, the short sketch below shows how this Window class might be driven from a host application. 
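This is a minimal usage sketch and not code from the book's repository; the direct SDL initialization call, the placeholder game loop, and the window title are assumptions added here purely for illustration.

// Hypothetical host application driving the Window helper described above.
#include <SDL/SDL.h>
#include "Window.h"

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_EVERYTHING);        // SDL itself still needs to be initialized once

    BookEngine::Window window;
    window.Create("Demo", 1024, 768, 0);  // 0 = none of the INVISIBLE/FULLSCREEN/BORDERLESS flags

    bool running = true;
    while (running)
    {
        SDL_Event evnt;
        while (SDL_PollEvent(&evnt))
        {
            if (evnt.type == SDL_QUIT) { running = false; }
        }
        // ...update and draw calls would go here...
        window.SwapBuffer();              // present the frame that was just drawn
    }
    return 0;
}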
Next, we will look at another programming method that again abstracts the heavy lifting from the developer's hands and provides a form of control over the process: a management system. Encapsulation with managers When working with complex systems such as input and audio systems, it can easily become tedious and unwieldy to control and check each state and other internals of the system directly. This is where the idea of the "Manager" programming pattern comes in. Using abstraction and polymorphism, we can create classes that allow us to modularize and simplify the interaction with these systems. Manager classes can be found in many different use cases. Essentially, if you see a need to have structured control over a certain system, it could be a candidate for a Manager class. Stepping away from the rendering system for a second, let's take a look at a very common task that any game will need to perform: handling input. Since every game needs some form of input, it only makes sense to move the code that handles this into a class that we can use over and over again. Let's take a look at the InputManager class, starting with the header file: #pragma once #include <unordered_map> #include <glm/glm.hpp> namespace BookEngine { class InputManager { public: InputManager(); ~InputManager(); The InputManager class starts just like the others: we have the needed includes, and again we wrap the class in the BookEngine namespace for convenience and safety. The standard constructor and destructor are also defined. Next, we have a few more public functions. First the Update function, which will, not surprisingly, update the input system. Then we have the KeyPress and KeyRelease functions; these functions both take an integer value corresponding to a keyboard key and fire off when the key is pressed or released, respectively. void Update(); void KeyPress(unsigned int keyID); void KeyRelease(unsigned int keyID); After the KeyPress and KeyRelease functions, we have two more key-related functions, isKeyDown and isKeyPressed. Like the KeyPress and KeyRelease functions, the isKeyDown and isKeyPressed functions take integer values that correspond to keyboard keys. The noticeable difference is that these functions return a Boolean value based on the status of the key. We will see more about this in the implementation file coming up. bool isKeyDown(unsigned int keyID); //Returns true if key is held bool isKeyPressed(unsigned int keyID); //Returns true if key was pressed this update The last two public functions in the InputManager class are SetMouseCoords and GetMouseCoords, which do exactly as the names suggest and set or get the mouse coordinates, respectively. void SetMouseCoords(float x, float y); glm::vec2 GetMouseCoords() const { return m_mouseCoords; }; Moving on to the private members and functions, we have a few declarations that store some information about the keys and mouse. First, we have a private helper function, WasKeyDown, which reports whether a key was down during the previous update. Next, we have two unordered maps that store the current key map and the previous key map. The last value we store is the mouse coordinates. We use a vec2 construct from another helper library, OpenGL Mathematics, or GLM. This vec2, which is just a two-dimensional vector, stores the x and y coordinate values of the mouse cursor, since it sits on a 2D plane, the screen. If you are looking for a refresher on vectors and the Cartesian coordinate system, I highly recommend the Beginning Math Concepts for Game Developers book by Dr. 
John P. Flynt: private: bool WasKeyDown(unsigned int keyID); std::unordered_map<unsigned int, bool> m_keyMap; std::unordered_map<unsigned int, bool> m_previousKeyMap; glm::vec2 m_mouseCoords; }; } Now let's look at the implementation, the InputManager.cpp file. Again we start out with the includes and the namespace wrapper. Then we have the constructor and destructor. The highlight to note here is the setting of m_mouseCoords to 0.0f in the constructor: namespace BookEngine { InputManager::InputManager() : m_mouseCoords(0.0f) { } InputManager::~InputManager() { } Next is the Update function. This is a simple update where we step through each key in the key map and copy it over to the previous key map holder, m_previousKeyMap. void InputManager::Update() { for (auto& iter : m_keyMap) { m_previousKeyMap[iter.first] = iter.second; } } The next function is the KeyPress function. In this function, we use the trick of an associative array to test and insert the key pressed which matches the ID passed in. The trick is that if the item located at the keyID index does not exist, it will automatically be created: void InputManager::KeyPress(unsigned int keyID) { m_keyMap[keyID] = true; } We do the same for the KeyRelease function below. The KeyRelease function is the same setup as the KeyPress function, except that we are setting the keyMap item at the keyID index to false. void InputManager::KeyRelease(unsigned int keyID) { m_keyMap[keyID] = false; } After the KeyPress and KeyRelease functions, we implement the isKeyDown and isKeyPressed functions. First the isKeyDown function; here we want to test if a key is already pressed down. In this case, we take a different approach to testing the key than in the KeyPress and KeyRelease functions and avoid the associative array trick. This is because we don't want to create a key if it does not already exist, so instead we do the lookup manually: bool InputManager::isKeyDown(unsigned int keyID) { auto key = m_keyMap.find(keyID); if (key != m_keyMap.end()) return key->second; // Found the key return false; } The isKeyPressed function is quite simple. Here we test to see if the key that matches the passed-in ID is pressed down, by using the isKeyDown function, and that it was not already pressed down during the previous update, by also passing the ID to WasKeyDown. If both of these conditions are met, we return true, or else we return false. bool InputManager::isKeyPressed(unsigned int keyID) { if(isKeyDown(keyID) && !WasKeyDown(keyID)) { return true; } return false; } Next, we have the WasKeyDown function; much like the isKeyDown function, we do a manual lookup to avoid accidentally creating the object using the associative array trick: bool InputManager::WasKeyDown(unsigned int keyID) { auto key = m_previousKeyMap.find(keyID); if (key != m_previousKeyMap.end()) return key->second; // Found the key return false; } The final function in the InputManager is SetMouseCoords. This is a very simple "setter" function that takes the passed-in floats and assigns them to the x and y members of the two-dimensional vector, m_mouseCoords. void InputManager::SetMouseCoords(float x, float y) { m_mouseCoords.x = x; m_mouseCoords.y = y; } } The short sketch below shows how the class is typically fed and queried each frame. 
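This is a minimal usage sketch, not code from the repository; the frame function, the SDL event polling shown here, and the chosen key codes (SDLK_SPACE, SDLK_w) are assumptions used purely for illustration.

// Hypothetical per-frame routine feeding SDL events into the InputManager.
#include <SDL/SDL.h>
#include "InputManager.h"

BookEngine::InputManager inputManager;

void ProcessFrame()
{
    inputManager.Update();   // copy the current key map into the previous key map first

    SDL_Event evnt;
    while (SDL_PollEvent(&evnt))
    {
        if (evnt.type == SDL_KEYDOWN) { inputManager.KeyPress(evnt.key.keysym.sym); }
        if (evnt.type == SDL_KEYUP)   { inputManager.KeyRelease(evnt.key.keysym.sym); }
    }

    if (inputManager.isKeyPressed(SDLK_SPACE))
    {
        // true only on the single update in which the key went down
    }
    if (inputManager.isKeyDown(SDLK_w))
    {
        // true on every update while the key is held
    }
}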
Creating interfaces Sometimes you are faced with a situation where you need to describe capabilities and provide access to general behaviors of a class without committing to a particular implementation. This is where the idea of interfaces or abstract classes comes into play. Using interfaces provides a simple base class that other classes can then inherit from without having to worry about the intrinsic details. Building strong interfaces can enable rapid development by providing a standard class to interact with. While interfaces could, in theory, be created for any class, it is more common to see them used in situations where the code is commonly being reused. Let's take a look at an interface from the example code in the repository. This interface will provide access to the core components of the game. I have named this class IGame, using the prefix I to identify this class as an interface. The following is the implementation, beginning with the definition file IGame.h. To begin with, we have the needed includes and the namespace wrapper. You will notice that the files we are including are some of the ones we just created. This is a prime example of the continuation of the abstraction. We use these building blocks to continue to build the structure that will allow this seamless abstraction: #pragma once #include <memory> #include "BookEngine.h" #include "Window.h" #include "InputManager.h" #include "ScreenList.h" namespace BookEngine { Next, we have a forward declaration. This declaration is for another interface that has been created for screens. The full source code for this interface and its supporting helper classes is available in the code repository. class IScreen; Using forward declarations like this is a common practice in C++. If the definition file only requires the simple declaration of a class, not adding the header for that class will speed up compile times. Moving on to the public members and functions, we start off with the constructor and destructor. You will notice that the destructor in this case is virtual. We are setting the destructor as virtual to allow us to call delete on an instance of a derived class through a pointer. This is handy when we want our interface to handle some of the cleanup directly as well. class IGame { public: IGame(); virtual ~IGame(); Next we have declarations for the Run function and the ExitGame function. void Run(); void ExitGame(); We then have some pure virtual functions: OnInit, OnExit, and AddScreens. Pure virtual functions are functions that must be overridden by the inheriting class. By adding the =0; to the end of the definition, we are telling the compiler that these functions are purely virtual. When designing your interfaces, it is important to be cautious when defining which functions must be overridden. It's also very important to note that having a pure virtual function implicitly makes the class it is defined for abstract. Abstract classes cannot be instantiated directly because of this, and any derived classes need to implement all inherited pure virtual functions. If they do not, they too will become abstract (a small standalone sketch of this behavior appears right after the public part of this header). virtual void OnInit() = 0; virtual void OnExit() = 0; virtual void AddScreens() = 0; After our pure virtual function declarations, we have a function OnSDLEvent, which we use to hook into the SDL event system. This provides us support for our input and other event-driven systems: void OnSDLEvent(SDL_Event& event); The last public function in the IGame interface class is a simple helper function, GetFPS, that returns the current FPS. Notice the const modifiers; they identify quickly that this function will not modify the variable's value in any way: const float GetFPS() const { return m_fps; } 
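The following standalone sketch only illustrates the pure virtual behavior described above; the class names used here are made up for the example and are not part of the engine.

// A pure virtual function makes the class abstract.
class IDemo
{
public:
    virtual ~IDemo() {}
    virtual void OnInit() = 0;   // pure virtual: IDemo can no longer be instantiated
};

class Demo : public IDemo
{
public:
    void OnInit() override {}    // must be implemented, or Demo stays abstract too
};

// IDemo demo;  // error: cannot declare a variable of abstract type
// Demo demo;   // fine: every inherited pure virtual function is implemented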
In our protected space, we start with a few function declarations. First is the Init, or initialization, function. This will be the function that handles a good portion of the setup. Then we have two virtual functions, Update and Draw. Like pure virtual functions, a virtual function is a function that can be overridden by a derived class's implementation. Unlike a pure virtual function, the virtual function does not make the class abstract by default and does not have to be overridden. Virtual and pure virtual functions are keystones of polymorphic design. You will quickly see their benefits as you continue your development journey: protected: bool Init(); virtual void Update(); virtual void Draw(); To close out the IGame definition file, we have a few members to house different objects and values. I am not going to go through these line by line since I feel they are pretty self-explanatory: std::unique_ptr<ScreenList> m_screenList = nullptr; IGameScreen* m_currentScreen = nullptr; Window m_window; InputManager m_inputManager; bool m_isRunning = false; float m_fps = 0.0f; }; } Now that we have taken a look at the definition of our interface class, let's quickly walk through the implementation. The following is the IGame.cpp file. To save time and space, I am going to highlight the key points. For the most part, the code is self-explanatory, and the source located in the repository is well commented for more clarity: #include "IGame.h" #include "IScreen.h" #include "ScreenList.h" #include "Timing.h" namespace BookEngine { IGame::IGame() { m_screenList = std::make_unique<ScreenList>(this); } IGame::~IGame() { } Our implementation starts out with the constructor and destructor. The constructor is simple; its only job is to create a unique pointer to a new ScreenList, passing this IGame object in as the argument. See the IScreen class for more information on screen creation. Next, we have the implementation of the Run function. This function, when called, will set the engine in motion. Inside the function, we do a quick check to make sure we have already initialized our object. We then use yet another helper class, FPSLimiter, to set the maximum fps that our game can run at. After that, we set the isRunning boolean value to true, which we then use to control the game loop: void IGame::Run() { if (!Init()) return; FPSLimiter fpsLimiter; fpsLimiter.SetMaxFPS(60.0f); m_isRunning = true; Next is the game loop. In the game loop, we do a few simple calls. First, we start the fps limiter. We then call the update function on our input manager. It is a good idea always to check input before doing other updates or drawing, since their calculations are sure to use the new input values. After we update the input manager, we call our own Update and Draw functions, which we will see shortly. We close out the loop by ending the fpsLimiter and calling SwapBuffer on the Window object. ///Game Loop while (m_isRunning) { fpsLimiter.Begin(); m_inputManager.Update(); Update(); Draw(); m_fps = fpsLimiter.End(); m_window.SwapBuffer(); } } The next function we implement is the ExitGame function. Ultimately, this will be the function that is called on the final exit of the game. We close out, destroy, and free up any memory that the screen list has created and set the isRunning Boolean to false, which will put an end to the loop: void IGame::ExitGame() { m_currentScreen->OnExit(); if (m_screenList) { m_screenList->Destroy(); m_screenList.reset(); //Free memory } m_isRunning = false; } Next up is the Init function. 
This function will initialize all the internal object settings and call the initialization on the connected systems. Again, this is an excellent example of OOP, or object-oriented programming, and polymorphism. Handling initialization in this manner allows a cascading effect, keeping the code modular and easier to modify: bool IGame::Init() { BookEngine::Init(); SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1); m_window.Create("BookEngine", 1024, 780, 0); OnInit(); AddScreens(); m_currentScreen = m_screenList->GetCurrentScreen(); m_currentScreen->OnEntry(); m_currentScreen->Run(); return true; } Next, we have the Update function. In this Update function, we create a structure that allows us to execute certain code based on the state that the current screen is in. We accomplish this using a simple switch case method with the enumerated elements of the ScreenState type as the cases. This setup is considered a simple finite state machine and is a very powerful design method used throughout game development: void IGame::Update() { if (m_currentScreen) { switch (m_currentScreen->GetScreenState()) { case ScreenState::RUNNING: m_currentScreen->Update(); break; case ScreenState::CHANGE_NEXT: m_currentScreen->OnExit(); m_currentScreen = m_screenList->MoveToNextScreen(); if (m_currentScreen) { m_currentScreen->Run(); m_currentScreen->OnEntry(); } break; case ScreenState::CHANGE_PREVIOUS: m_currentScreen->OnExit(); m_currentScreen = m_screenList->MoveToPreviousScreen(); if (m_currentScreen) { m_currentScreen->Run(); m_currentScreen->OnEntry(); } break; case ScreenState::EXIT_APP: ExitGame(); break; default: break; } } else { //we have no screen so exit ExitGame(); } } After our Update, we implement the Draw function. In our function, we only do a couple of things. First, we reset the Viewport as a simple safety check; then, if the current screen's state matches the enumerated value RUNNING, we again use polymorphism to pass the Draw call down the object line: void IGame::Draw() { //For safety glViewport(0, 0, m_window.GetScreenWidth(), m_window.GetScreenHeight()); //Check if we have a screen and that the screen is running if (m_currentScreen && m_currentScreen->GetScreenState() == ScreenState::RUNNING) { m_currentScreen->Draw(); } } The last function we need to implement is the OnSDLEvent function. As I mentioned in the definition section of this class, we use this function to connect our input manager system to the SDL built-in event system. Every key press or mouse movement is handled as an event, and based on the type of event that has occurred, we use a switch case statement to route it to the right handler. void IGame::OnSDLEvent(SDL_Event & event) { switch (event.type) { case SDL_QUIT: m_isRunning = false; break; case SDL_MOUSEMOTION: m_inputManager.SetMouseCoords((float)event.motion.x, (float)event.motion.y); break; case SDL_KEYDOWN: m_inputManager.KeyPress(event.key.keysym.sym); break; case SDL_KEYUP: m_inputManager.KeyRelease(event.key.keysym.sym); break; case SDL_MOUSEBUTTONDOWN: m_inputManager.KeyPress(event.button.button); break; case SDL_MOUSEBUTTONUP: m_inputManager.KeyRelease(event.button.button); break; } } } Well, that takes care of the IGame interface. For reference, a sketch of what the ScreenState enumeration used above might look like follows. 
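This is a hypothetical sketch of the ScreenState enumeration the switch statements rely on; the actual definition lives with the IScreen and ScreenList code in the repository and may differ in naming or order.

// Possible shape of the screen state enumeration used by the finite state machine above.
enum class ScreenState
{
    NONE,             // no screen is active yet
    RUNNING,          // keep updating and drawing the current screen
    EXIT_APP,         // tear everything down and quit
    CHANGE_NEXT,      // move to the next screen in the ScreenList
    CHANGE_PREVIOUS   // move back to the previous screen
};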
With this created, we can now create a new project that utilizes this and other interfaces in the example engine to create a game and initialize it all with just a few lines of code: #pragma once #include <BookEngine/IGame.h> #include "GamePlayScreen.h" class App : public BookEngine::IGame { public: App(); ~App(); virtual void OnInit() override; virtual void OnExit() override; virtual void AddScreens() override; private: std::unique_ptr<GameplayScreen> m_gameplayScreen = nullptr; }; The highlights to note here are that, one, the App class inherits from the BookEngine::IGame interface and, two, we have all the necessary overrides that the inherited class requires. Next, if we take a look at the main.cpp file, the entry point for our application, you will see the simple commands to set up and kick off all the amazing things our interfaces, managers, and helpers abstract for us: #include <BookEngine/IGame.h> #include "App.h" int main(int argc, char** argv) { App app; app.Run(); return 0; } As you can see, this is far simpler to type out every time we want to create a new project than having to recreate the framework from scratch. To see the output of the framework, build the BookEngine project, then build and run the example project. On Windows, the example project when run will look like the following: On macOS, the example project when run will look like the following: Summary In this article, we covered quite a bit. We took a look at the different methods of using object-oriented programming and polymorphism to create a reusable structure for all your game projects. We walked through the differences between helper, manager, and interface classes with examples from real code. Resources for Article: Further resources on this subject: Game Development Using C++ [article] C++, SFML, Visual Studio, and Starting the first game [article] Common Game Programming Patterns [article]

Deploying First Container

Packt
11 Apr 2017
11 min read
In this article by Srikant Machiraju, author of the book Learning Windows Server Containers, we will get acquainted with containers and containerization. Containerization helps you build software in layers, and containers encourage distributed development, packaging, and publishing. Developers or IT administrators just have to choose a base OS image, create customized layers as per their requirements, and distribute them using public or private repositories. Microsoft and Docker together have provided an amazing toolset that helps you build and deploy containers in no time, and it is very easy to set up a dev/test environment as well. The Microsoft Windows Server operating system and the Windows 10 desktop OS come with plug-and-play features for running Windows Server Containers or Hyper-V Containers. Docker Hub, a public repository for images, serves as a huge catalogue of customized images built by the community and Docker enthusiasts. The images on Docker Hub are freely available for anyone to download, customize, and distribute. In this article, we will learn how to create and configure container development environments. The following are a few more concepts that you will learn in this article: Preparing Windows Server Containers Environment, Pulling images from Docker Hub, Installing Base OS Images. (For more resources related to this topic, see here.)

Preparing Development Environment

In order to start creating Windows Containers, you need an instance of Windows Server 2016 or Windows 10 Enterprise/Professional Edition (with the Anniversary Update). Irrespective of the environment, the PowerShell/Docker commands described in this article for creating and packaging containers/images are the same. The following are the options we have to set up a Windows Server container development environment: Windows 10: Using Windows 10 Enterprise or Professional Edition with the Anniversary Update, you can create Hyper-V Containers by enabling the containers role. Docker or PowerShell can be used to manage the containers. We will learn how to configure the Windows 10 environment in the following section. Important: Windows 10 only supports Hyper-V Containers created using the NanoServer base OS image; it does not support Windows Server Containers. Windows Server 2016: There are two options for working with containers on Windows Server 2016: You can download the Windows Server 2016 ISO from here (https://www.microsoft.com/en-in/evalcenter/evaluate-windows-server-technical-preview) and install it on a virtual machine running on Hyper-V or Virtual Box. For running Windows Server 2016, the host machine should have Hyper-V virtualization enabled. Additionally, for the containers to access the Internet, ensure that the network is shared between the host and the Hyper-V VMs. Windows Azure provides a readymade instance of Windows Server 2016 with containers configured; this is by far the easiest option available. In this article, I will be using Windows Server 2016 with Containers enabled on Azure to create and manage Windows Server Containers. Windows Server 2016 is still in preview and the latest version available at the time of writing is Technical Preview 5.

Containers on Windows 10

The following steps explain how to set up a dev/test environment on Windows 10 for learning container development using Hyper-V Containers. Before continuing further, ensure that you're running the Windows 10 Professional/Enterprise edition with the Anniversary Update; the PowerShell sketch below is a quick way to check this, and the GUI steps follow in the next section. 
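This quick check is an optional addition and not part of the original steps; it assumes PowerShell 5.x and uses the Win32_OperatingSystem CIM class. The Anniversary Update corresponds to version 1607, OS build 14393.

Get-CimInstance Win32_OperatingSystem | Select-Object Caption, Version, BuildNumber

If Caption does not report a Professional or Enterprise edition, or BuildNumber is lower than 14393, update the machine before enabling the container features.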
For validating the Windows edition on Windows 10, click Start and type This PC, then right-click on This PC and click on Properties. Check the Windows Edition section for the Windows 10 edition. If your PC shows Windows 10 Enterprise or Professional, you can download and install the Anniversary Update from here (https://support.microsoft.com/en-us/help/12387/windows-10-update-history). If you do not have any of the above, please proceed to the following section, which explains how to work with containers using a Windows Server 2016 environment on Azure and on-premises. Follow these steps to configure Hyper-V containers on Windows 10: Click on the Start Menu and type powershell. Right-click the PowerShell CLI (Command Line Interface) and Run as administrator. Run the following command to install the containers feature on Windows 10:

Enable-WindowsOptionalFeature -Online -FeatureName Containers -All

Run the following command to install the Hyper-V feature on Windows 10, which is required for Hyper-V containers (in the rest of this article we will mainly be working with Windows Server Containers on Windows Server 2016):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

Restart the PC by running the following command:

Restart-Computer -Force

Run the following command to update the registry settings:

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers' -Name VSmbDisableOplocks -Type DWord -Value 1 -Force

Although PowerShell can be used to manage and run containers, Docker commands give a full wealth of options for container management. Microsoft PowerShell support for Windows Container development is still a work in progress, so we will mix and match PowerShell and Docker as per the scenarios. Run the following set of commands one by one to install and configure Docker on Windows 10:

Invoke-WebRequest "https://master.dockerproject.org/windows/amd64/docker-1.13.0-dev.zip" -OutFile "$env:TEMP\docker-1.13.0-dev.zip" -UseBasicParsing
Expand-Archive -Path "$env:TEMP\docker-1.13.0-dev.zip" -DestinationPath $env:ProgramFiles
$env:path += ";c:\program files\docker"
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)
dockerd --register-service

Start the Docker service by running the following command:

Start-Service docker

In order to develop Windows Server containers, we need a Windows base OS image, such as windowsservercore or nanoserver. Since Windows 10 supports Nano Server images only, run the following command to download the nanoserver base OS image. The command might take some time depending on your bandwidth; it downloads and extracts the nanoserver base OS image, which is approximately 970 MB in size:

docker pull microsoft/nanoserver

Important: At the time of writing, Windows 10 can only run Nano Server images. Even though the windowsservercore image gets downloaded successfully, running containers using this image will fail due to incompatibility with the OS. Ensure that the images are successfully downloaded by running the following command:

docker images

With the image in place, a quick smoke test such as the one sketched below confirms that the engine can actually start a container. 
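The following one-liner is an assumption-based smoke test rather than a step from the original article; it uses the nanoserver image pulled above and Hyper-V isolation, which is the only isolation mode available on Windows 10.

docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c "echo Hello from a Hyper-V container"

If the message is printed and the container exits cleanly, the environment is ready for the development work that follows.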
Windows Server Containers On-Premise

This section will help you download and install Windows Server 2016 on a virtual machine using hosted virtualization software such as Hyper-V or Virtual Box. Windows Server 2016 comes with two installation options, Windows Server 2016 Core and Windows Server 2016 Full, as shown in the following screenshot: Windows Server 2016 Core is a no-GUI version of Windows with minimalistic server features and a bare minimum size, whereas the full version is the traditional server operating system and comes with all features installed. No matter which installation option you choose, you will be able to install and run Windows Server Containers. You cannot change the no-GUI version to the full version post installation, so make sure you choose the right option during installation. You can download the Windows Server ISO from here (https://www.microsoft.com/en-in/evalcenter/evaluate-windows-server-technical-preview, make sure you copy the activation key as well from here) and set up hosted virtualization software such as Virtual Box or Hyper-V. Once you start installing from the ISO, you will be presented with the following screen, which lets you select the full installation with Desktop Experience or just the core with the Command Line Interface.

Windows Server Containers on Azure

On Azure we are going to use an image that comes preinstalled with the containers feature. You can also start with plain Windows Server 2016 on Azure, install the Windows Containers role from Add/Remove Windows Features, and then install the Docker Engine, or use the steps mentioned in Containers on Windows 10. In this article, we are going to create a Windows Server 2016 virtual machine on Microsoft Azure. The name of the image on Windows Azure is Windows Server 2016 with Containers Tech Preview 5. Microsoft Azure does not support Hyper-V containers on Azure. However, you can deploy these container types when running WS 2016 on premises or on Windows 10. Note: For creating a VM on Azure you would need an Azure account. Microsoft provides a free account for beginners to learn and practice Azure for 30 days/$200. More details regarding creating a free account can be found here (https://azure.microsoft.com/en-in/free/).

Container options on WS 2016 TP5

Windows Server 2016 supports two types of containers: Windows Server Containers and Hyper-V Containers. You can run either of these containers on Windows Server 2016, but for running Hyper-V Containers we would need a container host that has Hyper-V nested virtualization enabled, which is not mandatory for Windows Server Containers.

Create Windows Server 2016 TP5 on Azure

Azure VMs can be created using a variety of options, such as the Management Portal (https://manage.windowsazure.com), PowerShell, or the Azure CLI. We will be using the new Azure Management Portal (codename Ibiza) for creating an Azure VM. Please follow these steps to create a new Windows Server 2016 Containers virtual machine: Login to the Azure Management portal, https://portal.azure.com. Click on +New and select Virtual Machines. Click on See All to get a full list of Virtual Machines. Search using the text Windows Server 2016 (with double quotes for a full phrase search). You will find three main flavors of Windows Server 2016, as shown in the following screenshot: Click Windows Server 2016 with Containers Technical Preview 5 and then click Create on the new blade. Select Resource Manager as the deployment model. Fill in the parameters as follows: Basic settings: Name: The name of the Virtual Machine or the host name. VM Disk Type: SSD (Solid State Disk). User name: Administrator account username of the machine. This will be used while remotely connecting to the machine using RDP. 
Password: Administrator password. Confirm Password: Repeat the administrator password. Subscription: Select the Azure subscription to be used to create the machine. Resource Group: A resource group is a group name for logically grouping the resources of a single project. You can use an existing resource group or create a new one. Location: The geographical location of the Azure data center. Choose the nearest one available to you. Click Ok. Select Size and click on DS3_V2. For this exercise, we will be using DS3_V2 Standard, which comes with four cores and 14 GB of memory. Select DS3_V2 Standard and click Select: Click on Settings. The Settings section will have most of the parameters pre-filled with defaults. You can leave the defaults unless you want to change anything, and then click OK. Check a quick summary of the selections made using the summary tab and click OK. Azure starts creating the VM. The progress will be shown on a new tile added to the dashboard screen, as shown in the following screenshot: Azure might take a couple of minutes or more to create the Virtual Machine and configure extensions. You can check the status on the newly created dashboard tile.

Installing Base OS Images and Verifying Installation

The following steps explain how to connect to an Azure VM and verify that the machine is ready for Windows Server container development: Once the VM is created and running, the status of the VM on the tile will be shown as Running. We can connect to Azure VMs primarily in two ways, using RDP or Remote PowerShell. In this sample, we will be using RDP to connect to the Azure VM. Select the tile and click on the Connect icon. This downloads a remote desktop client to your local machine, as shown in the following screenshot: Note: If you are using any browser other than IE 10 or Edge, please check for the .rdp file in the respective downloads folder. Double-click the .rdp file and click Connect to connect to the VM. Enter the username and password used while creating the VM to log in to the VM. Ignore the security certificate warning and click on Yes. Ensure that the containers feature is installed by running the following command. This should show the Installed state of Containers, as shown in the following screenshot:

Get-WindowsFeature -Name Containers

Ensure that the Docker client and engine are installed using the following commands on the PowerShell CLI and verify the version:

docker version

The docker info command provides the current state of the Docker daemon, such as the number of containers running, paused, or stopped, the number of images, and so on, as shown in the following screenshot:

Summary

In this article, we have learnt how to create and configure a Windows Server container and Hyper-V environment on Windows 10 or Windows Server 2016. Windows 10 Professional or Enterprise with the Anniversary Update can be used to run Hyper-V Containers only. Resources for Article: Further resources on this subject: Understanding Container Scenarios and Overview of Docker [article] HTML5: Generic Containers [article] OpenVZ Container Administration [article]