Qt Style Sheets

Packt
29 Sep 2016
26 min read
In this article by Lee Zhi Eng, author of the book, Qt5 C++ GUI Programming Cookbook, we will see how Qt allows us to easily design our program's user interface through a method which most people are familiar with. Qt not only provides us with a powerful user interface toolkit called Qt Designer, which enables us to design our user interface without writing a single line of code, but it also allows advanced users to customize their user interface components through a simple scripting language called Qt Style Sheets. (For more resources related to this topic, see here.) In this article, we will cover the following recipes: Using style sheets with Qt Designer Basic style sheets customization Creating a login screen using style sheet Use style sheets with Qt Designer In this example, we will learn how to change the look and feel of our program and make it look more professional by using style sheets and resources. Qt allows you to decorate your GUIs (Graphical User Interfaces) using a style sheet language called Qt Style Sheets, which is very similar to CSS (Cascading Style Sheets) used by web designers to decorate their websites. How to do it… The first thing we need to do is open up Qt Creator and create a new project. If this is the first time you have used Qt Creator, you can either click the big button that says New Project with a + sign, or simply go to File | New File or New Project. Then, select Application under the Project window and select Qt Widgets Application. After that, click the Choose button at the bottom. A window will then pop out and ask you to insert the project name and its location. Once you're done with that, click Next several times and click the Finish button to create the project. We will just stick to all the default settings for now. Once the project is created, the first thing you will see is the panel with tons of big icons on the left side of the window which is called the Mode Selector panel; we will discuss this more later in the How it works section. Then, you will also see all your source files listed on the Side Bar panel which is located right next to the Mode Selector panel. This is where you can select which file you want to edit, which, in this case, is mainwindow.ui because we are about to start designing the program's UI! Double click mainwindow.ui and you will see an entirely different interface appearing out of nowhere. Qt Creator actually helped you to switch from the script editor to the UI editor (Qt Designer) because it detected .ui extension on the file you're trying to open. You will also notice that the highlighted button on the Mode Selector panel has changed from the Edit button to the Design button. You can switch back to the script editor or change to any other tools by clicking one of the buttons located at the upper half of the Mode Selector panel. Let's go back to the Qt Designer and look at the mainwindow.ui file. This is basically the main window of our program (as the file name implies) and it's empty by default, without any widget on it. You can try to compile and run the program by pressing the Run button (green arrow button) at the bottom of the Mode Selector panel, and you will see an empty window popping out once the compilation is complete: Now, let's add a push button to our program's UI by clicking on the Push Button item in the widget box (under the Buttons category) and drag it to your main window in the form editor. 
Then, keep the push button selected and now you will see all the properties of this button inside the property editor on the right side of your window. Scroll down to somewhere around the middle and look for a property called styleSheet. This is where you apply styles to your widget, which may or may not inherit to its children or grandchildren recursively depending on how you set your style sheet. Alternatively, you can also right click on any widget in your UI at the form editor and select Change Style Sheet from the pop up menu. You can click on the input field of the styleSheet property to directly write the style sheet code, or click on the … button besides the input field to open up the Edit Style Sheet window which has a bigger space for writing longer style sheet code. At the top of the window you can find several buttons such as Add Resource, Add Gradient, Add Color, and Add Font that can help you to kick-start your coding if you don't remember the properties' names. Let's try to do some simple styling with the Edit Style Sheet window. Click Add Color and choose color. Pick a random color from the color picker window, let's say, a pure red color. Then click Ok. Now, you will see a line of code has been added to the text field on the Edit Style Sheet window, which in my case is as follows: color: rgb(255, 0, 0); Click the Ok button and now you will see the text on your push button has changed to a red color. How it works Let's take a bit of time to get ourselves familiar with Qt Designer's interface before we start learning how to design our own UI: Menu bar: The menu bar houses application-specific menus which provide easy access to essential functions such as create new projects, save files, undo, redo, copy, paste, and so on. It also allows you to access development tools that come with Qt Creator, such as the compiler, debugger, profiler, and so on. Widget box: This is where you can find all the different types of widgets provided by Qt Designer. You can add a widget to your program's UI by clicking one of the widgets from the widget box and dragging it to the form editor. Mode selector: The mode selector is a side panel that places shortcut buttons for easy access to different tools. You can quickly switch between the script editor and form editor by clicking the Edit or Design buttons on the mode selector panel which is very useful for multitasking. You can also easily navigate to the debugger and profiler tools in the same speed and manner. Build shortcuts: The build shortcuts are located at the bottom of the mode selector panel. You can build, run, and debug your project easily by pressing the shortcut buttons here. Form editor: Form editor is where you edit your program's UI. You can add different widgets to your program by selecting a widget from the widget box and dragging it to the form editor. Form toolbar: From here, you can quickly select a different form to edit, click the drop down box located above the widget box and select the file you want to open with Qt Designer. Beside the drop down box are buttons for switching between different modes for the form editor and also buttons for changing the layout of your UI. Object inspector: The object inspector lists out all the widgets within your current .ui file. All the widgets are arranged according to its parent-child relationship in the hierarchy. You can select a widget from the object inspector to display its properties in the property editor. 
Property editor: Property editor will display all the properties of the widget you selected either from the object inspector window or the form editor window. Action Editor and Signals & Slots Editor: This window contains two editors – Action Editor and the Signals & Slots Editor which can be accessed from the tabs below the window. The action editor is where you create actions that can be added to a menu bar or toolbar in your program's UI. Output panes: Output panes consist of several different windows that display information and output messages related to script compilation and debugging. You can switch between different output panes by pressing the buttons that carry a number before them, such as 1-Issues, 2-Search Results, 3-Application Output, and so on. There's more… In the previous section, we discussed how to apply style sheets to Qt widgets through C++ coding. Although that method works really well, most of the time the person who is in charge of designing the program's UI is not the programmer himself, but a UI designer who specializes in designing user-friendly UI. In this case, it's better to let the UI designer design the program's layout and style sheet with a different tool and not mess around with the code. Qt provides an all-in-one editor called the Qt Creator. Qt Creator consists of several different tools, such as script editor, compiler, debugger, profiler, and UI editor. The UI editor, which is also called the Qt Designer, is the perfect tool for designers to design their program's UI without writing any code. This is because Qt Designer adopted the What-You-See-Is-What-You-Get approach by providing accurate visual representation of the final result, which means whatever you design with Qt Designer will turn out exactly the same when the program is compiled and run. The similarities between Qt Style Sheets and CSS are as follows: CSS: h1 { color: red; background-color: white;} Qt Style Sheets: QLineEdit { color: red; background-color: white;} As you can see, both of them contain a selector and a declaration block. Each declaration contains a property and a value, separated by a colon. In Qt, a style sheet can be applied to a single widget by calling QObject::setStyleSheet() function in C++ code. For example: myPushButton->setStyleSheet("color : blue"); The preceding code will turn the text of a button with the variable name myPushButton to a blue color. You can also achieve the same result by writing the declaration in the style sheet property field in Qt Designer. We will discuss more about Qt Designer in the next section. Qt Style Sheets also supports all the different types of selectors defined in CSS2 standard, including Universal selector, Type selector, Class selector, ID selector, and so on, which allows us to apply styling to a very specific individual or group of widgets. For instance, if we want to change the background color of a specific line edit widget with the object name usernameEdit, we can do this by using an ID selector to refer to it: QLineEdit#usernameEdit { background-color: blue } To learn about all the selectors available in CSS2 (which are also supported by Qt Style Sheets), please refer to this document: http://www.w3.org/TR/REC-CSS2/selector.html. Basic style sheet customization In the previous example, you learned how to apply a style sheet to a widget with Qt Designer. Let's go crazy and push things further by creating a few other types of widgets and change their style properties to something bizarre for the sake of learning. 
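Before we go back to Qt Designer, it may help to see the C++ route from the There's more section gathered into one compilable sketch. This is only an illustration: the widget names usernameEdit and quitButton are invented for the example and are not part of the recipe.

// main.cpp - a minimal sketch of applying Qt Style Sheets from C++
#include <QApplication>
#include <QWidget>
#include <QVBoxLayout>
#include <QLineEdit>
#include <QPushButton>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    auto *layout = new QVBoxLayout(&window);

    auto *usernameEdit = new QLineEdit;
    usernameEdit->setObjectName("usernameEdit"); // required for the ID selector below
    auto *quitButton = new QPushButton("Quit");

    layout->addWidget(usernameEdit);
    layout->addWidget(quitButton);

    // Per-widget style sheet, as in myPushButton->setStyleSheet("color : blue");
    quitButton->setStyleSheet("color: blue");

    // Application-wide style sheet combining a type selector and an ID selector
    app.setStyleSheet("QLineEdit { color: red; background-color: white; }"
                      "QLineEdit#usernameEdit { background-color: blue; }");

    window.show();
    return app.exec();
}

When rules conflict, the style sheet set on the widget itself takes precedence over the application-wide one, which mirrors the specificity behavior described above.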
This time, however, we will not apply the style to every single widget one by one, but we will learn to apply the style sheet to the main window and let it inherit down the hierarchy to all the other widgets so that the style sheet is easier to manage and maintain in the long run. How to do it… First of all, let's remove the style sheet from the push button by selecting it and clicking the small arrow button beside the styleSheet property. This button will revert the property to the default value, which in this case is the empty style sheet. Then, add a few more widgets to the UI by dragging them one by one from the widget box to the form editor. I've added a line edit, combo box, horizontal slider, radio button, and a check box. For the sake of simplicity, delete the menu bar, main toolbar, and the status bar from your UI by selecting them from the object inspector, right click, and choose Remove. Now your UI should look something similar to this: Select the main window either from the form editor or the object inspector, then right click and choose Change Stylesheet to open up the Edit Style Sheet window, and add the following style sheet: border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; Now what you will see is a completely bizarre-looking UI with everything covered in yellow with a thick border. This is because the preceding style sheet does not have any selector, which means the style will apply to the children widgets of the main window all the way down the hierarchy. To change that, let's try something different: QPushButton { border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; } This time, only the push button will get the style described in the preceding code, and all other widgets will return to the default styling. You can try to add a few more push buttons to your UI and they will all look the same: This happens because we specifically tell the selector to apply the style to all the widgets with the class called QPushButton. We can also apply the style to just one of the push buttons by mentioning its name in the style sheet, like so: QPushButton#pushButton_3 { border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; } Once you understand this method, we can add the following code to the style sheet: QPushButton { color: red; border: 0px; padding: 0 8px; background: white; } QPushButton#pushButton_2 { border: 1px solid red; border-radius: 10px; } QPushButton#pushButton_3 { border: 2px solid gray; border-radius: 10px; padding: 0 8px; background: yellow; } What it does is basically change the style of all the push buttons as well as some properties of a specific button named pushButton_2. We keep the style sheet of pushButton_3 as it is. Now the buttons will look like this: The first set of style sheets will change all widgets of the QPushButton type to a white rectangular button with no border and red text. Then the second set of style sheets changes only the border of a specific QPushButton widget by the name of pushButton_2. Notice that the background color and text color of pushButton_2 remain white and red respectively because we didn't override them in the second set of style sheets, hence it falls back to the style described in the first set, which applies to all QPushButton type widgets. Do notice that the text of the third button has also changed to red because we didn't describe the color property in the third set of style sheets.
After that, create another set of style using the universal selector, like so: * { background: qradialgradient(cx: 0.3, cy: -0.4, fx: 0.3, fy: -0.4, radius: 1.35, stop: 0 #fff, stop: 1 #888); color: rgb(255, 255, 255); border: 1px solid #ffffff; } The universal selector will affect all the widgets regardless of their type. Therefore, the preceding style sheet will apply a nice gradient color to all the widgets' background as well as setting their text as white color and giving them a one-pixel solid outline which is also in white color. Instead of writing the name of the color (that is, white), we can also use the rgb function (rgb(255, 255, 255)) or hex code (#ffffff) to describe the color value. Just as before, the preceding style sheet will not affect the push buttons because we have already given them their own styles which will override the general style described in the universal selector. Just remember that in Qt, the style which is more specific will ultimately be used when there is more than one style having influence on a widget. This is how the UI will look like now: How it works If you are ever involved in web development using HTML and CSS, Qt's style sheet works exactly the same way as CSS. Style Sheet provides the definitions for describing the presentation of the widgets – what the colors are for each element in the widget group, how thick the border should be, and so on and so forth. If you specify the name of the widget to the style sheet, it will change the style of a particular push button widget with the name you provide. All the other widgets will not be affected and will remain as the default style. To change the name of a widget, select the widget either from the form editor or the object inspector and change the property called objectName in the property window. If you have used the ID selector previously to change the style of the widget, changing its object name will break the style sheet and lose the style. To fix this problem, simply change the object name in the style sheet as well. Creating a login screen using style sheet Next, we will learn how to put all the knowledge we learned in the previous example together and create a fake graphical login screen for an imaginary operating system. Style sheet is not the only thing you need to master in order to design a good UI. You will also need to learn how to arrange the widgets neatly using the layout system in Qt Designer. How to do it… The first thing we need to do is design the layout of the graphical login screen before we start doing anything. Planning is very important in order to produce good software. The following is a sample layout design I made to show you how I imagine the login screen will look. Just a simple line drawing like this is sufficient as long as it conveys the message clearly: Now that we know exactly how the login screen should look, let's go back to Qt Designer again. We will be placing the widgets at the top panel first, then the logo and the login form below it. Select the main window and change its width and height from 400 and 300 to 800 and 600 respectively because we'll need a bigger space in which to place all the widgets in a moment. Click and drag a label under theDisplay Widgets category from the widget box to the form editor. Change the objectName property of the label to currentDateTime and change its Text property to the current date and time just for display purposes, such as Monday, 25-10-2015 3:14 PM. 
Click and drag a push button under the Buttons category to the form editor. Repeat this process one more time because we have two buttons on the top panel. Rename the two buttons to restartButton and shutdownButton respectively. Next, select the main window and click the small icon button on the form toolbar that says Lay Out Vertically when you mouse-over it. Now you will see the widgets are being automatically arranged on the main window, but it's not exactly what we want yet. Click and drag a horizontal layout widget under the Layouts category to the main window. Click and drag the two push buttons and the text label into the horizontal layout. Now you will see the three widgets being arranged in a horizontal row, but vertically they are located in the middle of the screen. The horizontal arrangement is almost correct, but the vertical position is totally off. Click and drag a vertical spacer from the Spacers category and place it below the horizontal layout we just created previously (below the red rectangular outline). Now you will see all the widgets are being pushed to the top by the spacer. Now, place a horizontal spacer between the text label and the two buttons to keep them apart. This will make the text label always stick to the left and the buttons align to the right. Set both the Horizontal Policy and Vertical Policy properties of the two buttons to Fixed and set the minimumSize property to 55x55. Then, set the text property of the buttons to empty as we will be using icons instead of text. We will learn how to place an icon in the button widgets in the following section: Now your UI should look similar to this: Next, we will be adding the logo by using the following steps: Add a horizontal layout between the top panel and the vertical spacer to serve as a container for the logo. After adding the horizontal layout, you will find the layout is way too thin in height to be able to add any widget to it. This is because the layout is empty and it's being pushed by the vertical spacer below it into zero height. To solve this problem, we can set its vertical margin (either layoutTopMargin or layoutBottomMargin) to be temporarily bigger until a widget is added to the layout. Next, add a label to the horizontal layout that you just created and rename it to logo. We will learn more about how to insert an image into the label to use it as logo in the next section. For now, just empty out the text property and set both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 150x150. Set the vertical margin of the layout back to zero if you haven't done so. The logo now looks invisible, so we will just place a temporary style sheet to make it visible until we add an image to it in the next section. The style sheet is really simple: border: 1px solid; Now your UI should look something similar to this: Now let's create the login form by using the following steps: Add a horizontal layout between the logo's layout and the vertical spacer. Just as we did previously, set the layoutTopMargin property to a bigger number (that is,100) so that you can add a widget to it more easily. After that, add a vertical layout inside the horizontal layout you just created. This layout will be used as a container for the login form. Set its layoutTopMargin to a number lower than that of the horizontal layout (that is, 20) so that we can place widgets in it. Next, right click the vertical layout you just created and choose Morph into -> QWidget. 
The vertical layout is now being converted into an empty widget. This step is essential because we will be adjusting the width and height of the container for the login form. A layout widget does not contain any properties for width and height, but only margins, due to the fact that a layout will expand toward the empty space surrounding it, which does make sense, considering that it does not have any size properties. After you have converted the layout to a QWidget object, it will automatically inherit all the properties from the widget class and so we are now able to adjust its size to suit our needs. Rename the QWidget object which we just converted from the layout to loginForm and change both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 350x200. Since we already placed the loginForm widget inside the horizontal layout, we can now set its layoutTopMargin property back to zero. Add the same style sheet as the logo to the loginForm widget to make it visible temporarily, except this time we need to add an ID selector in front so that it will only apply the style to loginForm and not its children widgets: #loginForm { border: 1px solid; } Now your UI should look something like this: We are not done with the login form yet. Now that we have created the container for the login form, it's time to put more widgets into the form: Place two horizontal layouts into the login form container. We need two layouts as one for the username field and another for the password field. Add a label and a line edit to each of the layouts you just added. Change the text property of the upper label to Username: and the one below as Password:. Then, rename the two line edits as username and password respectively. Add a push button below the password layout and change its text property to Login. After that, rename it as loginButton. You can add a vertical spacer between the password layout and the login button to distance them slightly. After the vertical spacer has been placed, change its sizeType property to Fixed and change the Height to 5. Now, select the loginForm container and set all its margins to 35. This is to make the login form look better by adding some space to all its sides. You can also set the Height property of the username, password, and loginButton widgets to 25 so that they don't look so cramped. Now your UI should look something like this: We're not done yet! As you can see the login form and the logo are both sticking to the top of the main window due to the vertical spacer below them. The logo and the login form should be placed at the center of the main window instead of the top. To fix this problem use the following steps: Add another vertical spacer between the top panel and the logo's layout. This way it will counter the spacer at the bottom which balances out the alignment. If you think that the logo is sticking too close to the login form, you can also add a vertical spacer between the logo's layout and the login form's layout. Set its sizeType property to Fixed and the Height property to 10. Right click the top panel's layout and choose Morph into -> QWidget. Then, rename it topPanel. The reason why the layout has to be converted into QWidget because we cannot apply style sheets to a layout, as it doesn't have any properties other than margins. Currently you can see there is a little bit of margin around the edges of the main window – we don't want that. 
To remove the margins, select the centralWidget object from the object inspector window, which is right under the MainWindow panel, and set all the margin values to zero. At this point, you can run the project by clicking the Run button (with the green arrow icon) to see what your program looks like now. If everything went well, you should see something like this: After we've done the layout, it's time for us to add some fanciness to the UI using style sheets! Since all the important widgets have been given an object name, it's easier for us to apply the style sheets to them from the main window, since we will only write the style sheets to the main window and let them inherit down the hierarchy tree. Right click on MainWindow from the object inspector window and choose Change Stylesheet. Add the following code to the style sheet: #centralWidget { background: rgba(32, 80, 96, 100); } Now you will see that the background of the main window changes its color. We will learn how to use an image for the background in the next section, so the color is just temporary. In Qt, if you want to apply styles to the main window itself, you must apply them to its central widget instead of the main window itself, because the window is just a container. Then, we will add a nice gradient color to the top panel: #topPanel { background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0, stop:0 rgba(91, 204, 233, 100), stop:1 rgba(32, 80, 96, 100)); } After that, we will apply a black color to the login form and make it look semi-transparent. We will also make the corners of the login form container slightly rounded by setting the border-radius property: #loginForm { background: rgba(0, 0, 0, 80); border-radius: 8px; } After we're done applying styles to the specific widgets, we will apply styles to the general types of widgets instead: QLabel { color: white; } QLineEdit { border-radius: 3px; } The style sheets above will change all the labels' text to a white color, which includes the text on the widgets as well because, internally, Qt uses the same type of label on the widgets that have text on them. Also, we made the corners of the line edit widgets slightly rounded. Next, we will apply style sheets to all the push buttons on our UI: QPushButton { color: white; background-color: #27a9e3; border-width: 0px; border-radius: 3px; } The preceding style sheet changes the text of all the buttons to a white color, then sets their background color to blue, and makes their corners slightly rounded as well. To push things even further, we will change the color of the push buttons when we mouse over them, using the keyword hover: QPushButton:hover { background-color: #66c011; } The preceding style sheet will change the background color of the push buttons to green when we mouse over them. We will talk more about this in the following section. You can further adjust the size and margins of the widgets to make them look even better. Remember to remove the border line of the login form by removing the style sheet which we applied directly to it earlier. Now your login screen should look something like this: How it works This example focuses more on the layout system of Qt. The Qt layout system provides a simple and powerful way of automatically arranging child widgets within a widget to ensure that they make good use of the available space. The spacer items used in the preceding example help to push the widgets contained in a layout outward to create spacing along the width of the spacer item. The short snippet below shows the same spacer idea expressed in code.
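As a rough illustration of how the spacer arrangement translates into code (the button here is just a stand-in widget, not part of the login screen recipe), consider this minimal hand-written sketch, where QHBoxLayout::addStretch() plays the role of the horizontal spacer items:

// center_widget.cpp - centering a widget between two spacer items, in code
#include <QApplication>
#include <QWidget>
#include <QHBoxLayout>
#include <QPushButton>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    auto *row = new QHBoxLayout(&window);

    row->addStretch();                        // spacer on the left
    row->addWidget(new QPushButton("Login")); // the widget we want centered
    row->addStretch();                        // spacer on the right

    // With equal stretch on both sides, the button stays horizontally
    // centered no matter how wide the window becomes.
    window.resize(400, 100);
    window.show();
    return app.exec();
}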
To locate a widget to the middle of the layout, put two spacer items to the layout, one on the left side of the widget and another on the right side of the widget. The widget will then be pushed to the middle of the layout by the two spacers. Summary So in this article we saw how Qt allows us to easily design our program's user interface through a method which most people are familiar with. We also covered the toolkit, Qt Designer, which enables us to design our user interface without writing a single line of code. Finally, we saw how to create a login screen. For more information on Qt5 and C++ you can check other books by Packt, mentioned as follows: Qt 5 Blueprints: https://www.packtpub.com/application-development/qt-5-blueprints Boost C++ Application Development Cookbook: https://www.packtpub.com/application-development/boost-c-application-development-cookbook Learning Boost C++ Libraries: https://www.packtpub.com/application-development/learning-boost-c-libraries Resources for Article: Further resources on this subject: OpenCart Themes: Styling Effects of jQuery Plugins [article] Responsive Web Design [article] Gearing Up for Bootstrap 4 [article]

Deep Learning with Torch

Preetham Sreenivas
29 Sep 2016
10 min read
Torch is a scientific computing framework built on top of Lua[JIT]. The nn package and the ecosystem around it provide a very powerful framework for building deep learning models, striking a perfect balance between speed and flexibility. It is used at Facebook AI Research (FAIR), Twitter Cortex, DeepMind, Yann LeCun's group at NYU, Fei-Fei Li's group at Stanford, and many more industrial and academic labs. If you are like me, and don't like writing equations for backpropagation every time you want to try a simple model, Torch is a great solution. With Torch, you can also do pretty much anything you can imagine, whether that is writing custom loss functions, dreaming up an arbitrary acyclic graph network, or even using multiple GPUs or loading pre-trained ImageNet models from the Caffe model zoo (yes, you can load models trained in Caffe with a single line). Without further ado, let's jump right into the awesome world of deep learning. Prerequisites Some knowledge of deep learning—A Primer, Bengio's deep learning book, Hinton's Coursera course. A bit of Lua. Its syntax is very C-like and can be picked up fairly quickly if you know Python or JavaScript—Learn Lua in 15 minutes, Torch For Numpy Users. A machine with Torch installed, since this is intended to be hands-on. On Ubuntu 12+ and Mac OS X, installing Torch looks like this: # in a terminal, run the commands WITHOUT sudo $ git clone https://github.com/torch/distro.git ~/torch --recursive $ cd ~/torch; bash install-deps; $ ./install.sh # On Linux with bash $ source ~/.bashrc # On OSX or in Linux with no bash. $ source ~/.profile Once you've installed Torch, you can run a Torch script using: $ th script.lua # alternatively you can fire up a terminal torch interpreter using th -i $ th -i # and run multiple scripts one by one, the variables will be accessible to other scripts > dofile 'script1.lua' > dofile 'script2.lua' > print(variable) -- variable from either of these scripts. The sections below are very code intensive, but you can run these commands from Torch's terminal interpreter. $ th -i Building a Model: The Basics A module is the basic building block of any Torch model. It has forward and backward methods for the forward and backward passes of backpropagation. You can combine them using containers, and of course, calling forward and backward on containers propagates inputs and gradients correctly. -- A simple mlp model with sigmoids require 'nn' linear1 = nn.Linear(100,10) -- A linear layer Module linear2 = nn.Linear(10,2) -- You can combine modules using containers, sequential is the most used one model = nn.Sequential() -- A container model:add(linear1) model:add(nn.Sigmoid()) model:add(linear2) model:add(nn.Sigmoid()) -- the forward step input = torch.rand(100) target = torch.rand(2) output = model:forward(input) Now we need a criterion to measure how well our model is performing, in other words, a loss function. nn.Criterion is the abstract class that all loss functions inherit. It provides forward and backward methods, computing loss and gradients respectively. Torch provides most of the commonly used criterions out of the box. It isn't much of an effort to write your own either.
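As a taste of what writing your own criterion involves, here is a hedged sketch of a mean absolute error loss built with the usual torch.class extension pattern. The name nn.SimpleAbsCriterion is made up for this example; Torch already ships nn.AbsCriterion, which you would normally use instead.

-- A minimal custom criterion: mean absolute error
require 'nn'
local SimpleAbsCriterion, parent = torch.class('nn.SimpleAbsCriterion', 'nn.Criterion')

function SimpleAbsCriterion:__init()
   parent.__init(self)
end

-- forward() calls updateOutput(): compute the scalar loss
function SimpleAbsCriterion:updateOutput(input, target)
   self.output = torch.mean(torch.abs(input - target))
   return self.output
end

-- backward() calls updateGradInput(): compute d(loss)/d(input)
function SimpleAbsCriterion:updateGradInput(input, target)
   self.gradInput = torch.sign(input - target):div(input:nElement())
   return self.gradInput
end

-- usage is identical to the built-in criterions:
-- criterion = nn.SimpleAbsCriterion()
-- loss = criterion:forward(output, target)
-- grads = criterion:backward(output, target)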
criterion = nn.MSECriterion() -- mean squared error criterion loss = criterion:forward(output,target) gradientsAtOutput = criterion:backward(output,target) -- To perform the backprop step, we need to pass these gradients to the backward -- method of the model gradAtInput = model:backward(input,gradientsAtOutput) lr = 0.1 -- learning rate for our model model:updateParameters(lr) -- updates the parameters using the lr parameter. The updateParameters method just subtracts the gradients, scaled by the learning rate, from the model parameters. This is vanilla stochastic gradient descent. Typically, the updates we do are more complex. For example, if we want to use momentum, we need to keep track of the updates we did in the previous epoch. There are a lot more fancy optimization schemes such as RMSProp, adam, adagrad, and L-BFGS that do more complex things like adapting the learning rate, momentum factor, and so on. The optim package provides optimization routines out of the box. Dataset We'll use the German Traffic Sign Recognition Benchmark (GTSRB) dataset. This dataset has 43 classes of traffic signs of varying sizes, illuminations and occlusions. There are 39,000 training images and 12,000 test images. Traffic signs in each of the images are not centered and they have a 10% border around them. I have included a shell script for downloading the data along with the code for this tutorial in this github repo: git clone https://github.com/preethamsp/tutorial.gtsrb.torch.git cd tutorial.gtsrb.torch/datasets bash download_gtsrb.sh Model Let's build a downsized VGG-style model with what we've learned. function createModel() require 'nn' nbClasses = 43 local net = nn.Sequential() --[[building block: adds a convolution layer, batch norm layer and a relu activation to the net]]-- function ConvBNReLU(nInputPlane, nOutputPlane) -- kernel size = (3,3), stride = (1,1), padding = (1,1) net:add(nn.SpatialConvolution(nInputPlane, nOutputPlane, 3,3, 1,1, 1,1)) net:add(nn.SpatialBatchNormalization(nOutputPlane,1e-3)) net:add(nn.ReLU(true)) end ConvBNReLU(3,32) ConvBNReLU(32,32) net:add(nn.SpatialMaxPooling(2,2,2,2)) net:add(nn.Dropout(0.2)) ConvBNReLU(32,64) ConvBNReLU(64,64) net:add(nn.SpatialMaxPooling(2,2,2,2)) net:add(nn.Dropout(0.2)) ConvBNReLU(64,128) ConvBNReLU(128,128) net:add(nn.SpatialMaxPooling(2,2,2,2)) net:add(nn.Dropout(0.2)) net:add(nn.View(128*6*6)) net:add(nn.Dropout(0.5)) net:add(nn.Linear(128*6*6,512)) net:add(nn.BatchNormalization(512)) net:add(nn.ReLU(true)) net:add(nn.Linear(512,nbClasses)) net:add(nn.LogSoftMax()) return net end The first layer contains three input channels because we're going to pass RGB images (three channels). For grayscale images, the first layer has one input channel. I encourage you to play around and modify the network. The code in the repo is much more polished than the snippets in this tutorial; it is modular and allows you to change the model and/or datasets easily. There are a bunch of new modules that need some elaboration. The Dropout module randomly deactivates a neuron with some probability. It is known to help generalization by preventing co-adaptation between neurons; that is, a neuron should now depend less on its peer, forcing it to learn a bit more. BatchNormalization is a very recent development. It is known to speed up convergence by normalizing the outputs of a layer to unit Gaussian using the statistics of a batch. Let's use this model and train it. In the interest of brevity, I'll use these constructs directly.
The code describing these constructs is in datasets/gtsrb.lua. DataGen:trainGenerator(batchSize) DataGen:valGenerator(batchSize) These provide iterators over batches of train and test data respectively. You'll find that the model code (models/vgg_small.lua) in the repo is different. It is designed to allow you to experiment quickly. Using optim to train the model Using a stochastic gradient descent (sgd) from the optim package to minimize a function f looks like this: optim.sgd(feval, params, optimState) Where: feval: A user-defined function that respects the API: f, df/params = feval(params) params: The current parameter vector (a 1D torch.Tensor) optimState: A table of parameters, and state variables, dependent upon the algorithm Since we are optimizing the loss of the neural network, parameters should be the weights and other parameters of the network. We get these as a flattened 1D tensor using model:getParameters. It also returns a tensor containing the gradients of these parameters. This is useful in creating the feval function above. model = createModel() criterion = nn.ClassNLLCriterion() -- criterion we are optimizing: negative log loss params, gradParams = model:getParameters() local function feval() -- criterion.output stores the latest output of criterion return criterion.output, gradParams end We need to create an optimState table and initialize it with a configuration of our optimizer like learning rate and momentum: optimState = { learningRate = 0.01, momentum = 0.9, dampening = 0.0, nesterov = true, } Now, an update to the model should do the following: Compute the output of the model using model:forward(). Compute the loss and the gradients at output layer using criterion:forward() and criterion:backward() respectively. Update the gradients of the model parameters using model:backward(). Update the model using optim.sgd. -- Forward pass output = model:forward(input) loss = criterion:forward(output, target) -- Backward pass critGrad = criterion:backward(output, target) model:backward(input, critGrad) -- Updates optim.sgd(feval, params, optimState) Note: The order above should be respected, as backward assumes forward was run just before it. Changing this order might result in gradients not being computed correctly. Putting it all together Let's put it all together and write a function that trains the model for an epoch. We'll create a loop that iterates over the train data in batches and updates the model. 
model = createModel() criterion = nn.ClassNLLCriterion() dataGen = DataGen('datasets/GTSRB/') -- Data generator params, gradParams = model:getParameters() batchSize = 32 optimState = { learningRate = 0.01, momentum = 0.9, dampening = 0.0, nesterov = true, } function train() -- Dropout and BN behave differently during training and testing -- So, switch to training mode model:training() local function feval() return criterion.output, gradParams end for input, target in dataGen:trainGenerator(batchSize) do -- Forward pass local output = model:forward(input) local loss = criterion:forward(output, target) -- Backward pass model:zeroGradParameters() -- clear grads from previous update local critGrad = criterion:backward(output, target) model:backward(input, critGrad) -- Updates optim.sgd(feval, params, optimState) end end The test function is extremely similar, except that we don't need to update the parameters: confusion = optim.ConfusionMatrix(nbClasses) -- to calculate accuracies function test() model:evaluate() -- switch to evaluate mode confusion:zero() -- clear confusion matrix for input, target in dataGen:valGenerator(batchSize) do local output = model:forward(input) confusion:batchAdd(output, target) end confusion:updateValids() local test_acc = confusion.totalValid * 100 print(('Test accuracy: %.2f'):format(test_acc)) end Now that everything is set, you can train your network and print the test accuracies: max_epoch = 20 for i = 1,20 do train() test() end An epoch takes around 30 seconds on a TitanX and gives about 97.7% accuracy after 20 epochs. This is a very basic model and honestly I haven't tried optimizing the parameters much. There are a lot of things that can be done to crank up the accuracies. Try different processing procedures. Experiment with the net structure. Different weight initializations, and learning rate schedules. An Ensemble of different models; for example, train multiple models and take a majority vote. You can have a look at the state of the art on this dataset here. They achieve upwards of 99.5% accuracy using a clever method to boost the geometric variation of CNNs. Conclusion We looked at how to build a basic mlp in Torch. We then moved on to building a Convolutional Neural Network and trained it to solve a real-world problem of traffic sign recognition. For a beginner, Torch/LUA might not be as easy. But once you get a hang of it, you have access to a deep learning framework which is very flexible yet fast. You will be able to easily reproduce latest research or try new stuff unlike in rigid frameworks like keras or nolearn. I encourage you to give it a fair try if you are going anywhere near deep learning. Resources Torch Cheat Sheet Awesome Torch Torch Blog Facebook's Resnet Code Oxford's ML Course Practicals Learn torch from Github repos About the author Preetham Sreenivas is a data scientist at Fractal Analytics. Prior to that, he was a software engineer at Directi.

How to Apply Themes to Sails Applications, Part 1

Luis Lobo
29 Sep 2016
8 min read
The Sails Framework is a popular MVC framework that is designed for building practical, production-ready Node.js apps. Themes customize the look and feel of your app, but Sails does not come with a configuration or setting for handling themes by itself. This two-part post shows one of the ways you can set up theming for your Sails application, thus making use of some of Sails’ capabilities. You may have an application that needs to handle theming for different reasons, like custom branding, licensing, dynamic theme configuration, and so on. You can adjust the theming of your application, based on external factors, like patterns in the domain of the site you are browsing. Imagine you have an application that handles deliveries that you customize per client. So, your app renders the default theme when browsed as http://www.smartdelivery.com, but when logged in as a customer, let's say, "Burrito", it changes the domain name as http://burrito.smartdelivery.com. In this series we make use of Less as our language to define our CSS. Sails already handles Less right out of the box. The default Less file is located in /assets/styles/importer.less. We will also use Bootstrap as our base CSS Framework, importing its Less file into our importer.less file. The technique showed here consists of having a base CSS, and a theme CSS that varies according to the host name. Step 1 - Adding Bootstrap to Sails We use Bower to add Bootstrap to our project. First, install it by issuing the following command: npm install bower --save-dev Then, initialize the Bower configuration file. node_modules/bower/bin/bower init This command allows us to configure our bower.json file. Answer the questions asked by bower. ? name sails-themed-application ? description Sails Themed Application ? main file app.js ? keywords ? authors lobo ? license MIT ? homepage ? set currently installed components as dependencies? Yes ? add commonly ignored files to ignore list? Yes ? would you like to mark this package as private which prevents it from being accidentally published to the registry? No { name: 'sails-themed-application', description: 'Sails Themed Application', main: 'app.js', authors: [ 'lobo' ], license: 'MIT', homepage: '', ignore: [ '**/.*', 'node_modules', 'bower_components', 'assets/vendor', 'test', 'tests' ] } This generates a bower.json file in the root of your project. Now we need to tell bower to install everything in a specific directory. Create a file named .bowerrc and put this configuration into it: {"directory" : "assets/vendor"} Finally, install Bootstrap: node_modules/bower/bin/bower install bootstrap --save --production This action creates a folder in assets named vendor, with boostrap inside of it. 
Since Bootstrap uses jQuery, you also have a jquery folder: ├── api │ ├── controllers │ ├── models │ ├── policies │ ├── responses │ └── services ├── assets │ ├── images │ ├── js │ │ └── dependencies │ ├── styles │ ├── templates │ ├── themes │ └── vendor │ ├── bootstrap │ │ ├── dist │ │ │ ├── css │ │ │ ├── fonts │ │ │ └── js │ │ ├── fonts │ │ ├── grunt │ │ ├── js │ │ ├── less │ │ │ └── mixins │ │ └── nuget │ └── jquery │ ├── dist │ ├── external │ │ └── sizzle │ │ └── dist │ └── src │ ├── ajax │ │ └── var │ ├── attributes │ ├── core │ │ └── var │ ├── css │ │ └── var │ ├── data │ │ └── var │ ├── effects │ ├── event │ ├── exports │ ├── manipulation │ │ └── var │ ├── queue │ ├── traversing │ │ └── var │ └── var ├── config │ ├── env │ └── locales ├── tasks │ ├── config │ └── register └── views We now need to add Bootstrap to our importer. Edit /assets/styles/importer.less and add this instruction at the end of it: @import "../vendor/bootstrap/less/bootstrap.less"; Now you need to tell Sails where to import the Bootstrap and jQuery JavaScript files from. Edit /tasks/pipeline.js and add the following code after it loads the sails.io.js file: // Load sails.io before everything else 'js/dependencies/sails.io.js', // <ADD THESE LINES> // JQuery JS 'vendor/jquery/dist/jquery.min.js', // Bootstrap JS 'vendor/bootstrap/dist/js/bootstrap.min.js', // </ADD THESE LINES> Now you have to edit your views layout and pages to use the Bootstrap style. In this series I created an application from scratch, so I have the default views and layouts. In your layout, insert the following line after your <!--STYLES END--> tag: <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css"> This loads a second CSS file, which defaults to /themes/default.css, into your views. As a sample, here are the /views/layout.ejs and /views/homepage.ejs I changed (the text under the headings is random text): /views/layout.ejs <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags --> <title><%= typeof title == 'undefined' ? 'Sails Themed Application' : title %></title> <!--STYLES--> <link rel="stylesheet" href="/styles/importer.css"> <!--STYLES END--> <!-- THIS IS WHERE THE THEME CSS IS LOADED --> <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css"> </head> <body> <%- body %> <!--TEMPLATES--> <!--TEMPLATES END--> <!--SCRIPTS--> <script src="/js/dependencies/sails.io.js"></script> <script src="/vendor/jquery/dist/jquery.min.js"></script> <script src="/vendor/bootstrap/dist/js/bootstrap.min.js"></script> <!--SCRIPTS END--> </body> </html> Notice the lines after the <!--STYLES END--> tag.
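The layout above expects a theme variable to be present in the view locals. Part 2 wires this up properly with a Sails hook, but as a rough sketch of the idea (the policy name, the subdomain-to-theme mapping, and the list of known themes below are all assumptions for illustration), a simple policy could populate it like this:

// api/policies/setTheme.js - illustrative sketch only; Part 2 replaces this with a hook
module.exports = function setTheme(req, res, next) {
  // 'burrito.smartdelivery.com:1337' -> 'burrito'
  var host = (req.headers.host || '').split(':')[0];
  var subdomain = host.split('.')[0];

  // Only use subdomains we actually ship a compiled theme CSS for
  var knownThemes = ['burrito'];
  res.locals.theme = knownThemes.indexOf(subdomain) !== -1 ? subdomain : 'default';

  return next();
};

With such a policy applied in config/policies.js, the theme <link> tag in the layout resolves to /themes/burrito.css or /themes/default.css depending on the host name.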
/views/homepage.ejs <nav class="navbar navbar-inverse navbar-fixed-top"> <div class="container"> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#">Project name</a> </div> <div id="navbar" class="navbar-collapse collapse"> <form class="navbar-form navbar-right"> <div class="form-group"> <input type="text" placeholder="Email" class="form-control"> </div> <div class="form-group"> <input type="password" placeholder="Password" class="form-control"> </div> <button type="submit" class="btn btn-success">Sign in</button> </form> </div><!--/.navbar-collapse --> </div> </nav> <!-- Main jumbotron for a primary marketing message or call to action --> <div class="jumbotron"> <div class="container"> <h1>Hello, world!</h1> <p>This is a template for a simple marketing or informational website. It includes a large callout called a jumbotron and three supporting pieces of content. Use it as a starting point to create something more unique.</p> <p><a class="btn btn-primary btn-lg" href="#" role="button">Learn more &raquo;</a></p> </div> </div> <div class="container"> <!-- Example row of columns --> <div class="row"> <div class="col-md-4"> <h2>Heading</h2> <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> <div class="col-md-4"> <h2>Heading</h2> <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> <div class="col-md-4"> <h2>Heading</h2> <p>Donec sed odio dui. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Vestibulum id ligula porta felis euismod semper. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> </div> <hr> <footer> <p>&copy; 2015 Company, Inc.</p> </footer> </div> <!-- /container --> You can now lift Sails and see your Bootstrapped Sails application. Now that we have our Bootstrapped Sails app set up, in Part 2 we will compile our theme’s CSS and the necessary Less files, and we will set bup the theme Sails hook to complete our application. About the author Luis Lobo Borobia is the CTO at FictionCity.NET, is a mentor and advisor, independent software engineer consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing Software products and solutions, frameworks, and platforms for several kinds of industries. In the last years he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

Directory Services

Packt
29 Sep 2016
11 min read
In this article by Gregory Boyce, author of Linux Networking Cookbook, we will focus on getting you started by configuring Samba as an Active Directory compatible directory service and joining a Linux box to the domain. (For more resources related to this topic, see here.) If you have worked in corporate environments, then you are probably familiar with a directory service such as Active Directory. What you may not realize is that Samba, originally created to be an open source implementation of Windows file sharing (SMB/CIFS), can now operate as an Active Directory compatible directory service. It can even act as a Backup Domain Controller (BDC) in an Active Directory domain. In this article, we will configure Samba to centralize authentication for your network services. We will also configure a Linux client to leverage it for authentication and set up a RADIUS server, which uses the directory server for authentication. Configuring Samba as an Active Directory compatible directory service As of Samba 4.0, Samba has the ability to act as a primary domain controller (PDC) in a manner that is compatible with Active Directory. How to do it… Installing on Ubuntu 14.04: Configure your system with a static IP address and update /etc/hosts to point to that IP address rather than localhost. Make sure that your time is kept up to date by installing an NTP client: sudo apt-get install ntp Pre-emptively disable smbd/nmbd from running automatically: sudo bash -c 'echo "manual" > /etc/init/nmbd.override' sudo bash -c 'echo "manual" > /etc/init/smbd.override' Install Samba and smbclient: sudo apt-get install samba smbclient Remove the stock smb.conf: sudo rm /etc/samba/smb.conf Provision the domain: sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ Save the randomly generated admin password. Symlink the AD krb5.conf to /etc: sudo ln -sf /var/lib/samba/private/krb5.conf /etc/krb5.conf Edit /etc/bind/named.conf.local to allow Samba to publish data: dlz "AD DNS Zone" { # For BIND 9.9.0 database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so"; }; Edit /etc/bind/named.conf.options to use the Kerberos keytab within the options stanza: tkey-gssapi-keytab "/var/lib/samba/private/dns.keytab"; Modify your zone record to allow updates from Samba: zone "example.org" { type master; notify no; file "/var/lib/bind/example.org.db"; update-policy { grant AD.EXAMPLE.ORG ms-self * A AAAA; grant Administrator@AD.EXAMPLE.ORG wildcard * A AAAA SRV CNAME; grant SERVER$@ad.EXAMPLE.ORG wildcard * A AAAA SRV CNAME; grant DDNS wildcard * A AAAA SRV CNAME; }; }; Modify /etc/apparmor.d/usr.sbin.named to allow bind9 access to a few additional resources within the /usr/sbin/named stanza: /var/lib/samba/private/dns/** rw, /var/lib/samba/private/named.conf r, /var/lib/samba/private/named.conf.update r, /var/lib/samba/private/dns.keytab rk, /var/lib/samba/private/krb5.conf r, /var/tmp/* rw, /dev/urandom rw, Reload the apparmor configuration: sudo service apparmor restart Restart bind9: sudo service bind9 restart Restart the Samba AD DC service: sudo service samba-ad-dc restart Installing on CentOS 7: Unfortunately, setting up a domain controller on CentOS 7 is not possible using the default packages provided by the distribution. This is due to Samba utilizing the Heimdal implementation of Kerberos while Red Hat, CentOS, and Fedora use the MIT Kerberos 5 implementation.
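Back on the Ubuntu domain controller, a quick smoke test helps confirm that the provisioning above worked. The commands below assume the example realm ad.example.org used throughout this recipe, the Administrator password that samba-tool generated, and a Kerberos client (for example, the heimdal-clients package) being installed:

# DNS: the SRV records clients use to locate the domain controller
host -t SRV _ldap._tcp.ad.example.org.
host -t SRV _kerberos._udp.ad.example.org.
# Kerberos: request a ticket for the domain Administrator
kinit administrator@AD.EXAMPLE.ORG
klist
# SMB: list the default shares and read the netlogon share
smbclient -L localhost -U%
smbclient //localhost/netlogon -UAdministrator -c 'ls'
# Samba's own view of the domain
sudo samba-tool domain level show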
How it works… The process for provisioning Samba to act as an Active Directory compatible domain is deceptively easy given all that is happening on the backend. Let us look at some of the expectation and see how we are going to meet them as well as what is happening behind the scenes. Active Directory requirements Successfully running an Active Directory Forest has a number of requirements need to be in place: Synchronized time: AD uses Kerberos for authentication, which can be very sensitive to time skews. In our case, we are going to use ntpd, but other options including openntpd or chrony are also available. The ability to manage DNS records: AD automatically generates a number of DNS records, including SRV records that tell clients of the domain how to locate the domain controller itself. A static IP address: Due to a number of pieces of the AD functionality being very dependent on the specific IP address of your domain controller, it is recommended that you use a static IP address. A static DHCP lease may work as long as you are certain the IP address will not change. A rogue DHCP server on the network for example may cause difficulties. Selecting a realm and domain name The Samba team has published some very useful information regarding the proper naming of your realm and your domain along with a link to Microsoft's best practices on the subject. It may be found on: https://wiki.samba.org/index.php/Active_Directory_Naming_FAQ. The short version is that your domain should be globally unique while the realm should be unique within the layer 2 broadcast domain of your network. Preferably, the domain should be a subdomain of a registered domain owned by you. This ensures that you can buy SSL certificates if necessary and you will not experience conflicts with outside resources. Samba-tool will default to using the first part of the domain you specified as the realm, ad from ad.example.org. The Samba group instead recommends using the second part, example in our case, as it is more likely to be locally unique. Using a subdomain of your purchased domain rather than a domain itself makes life easier when splitting internal DNS records, which are managed by your AD instance from the more publicly accessible external names. Using Samba-Tool Samba-tool can work in an automated fashion with command line options, or it can operate in interactive mode. We are going to specify the options that we want to use on the command line: sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ The realm and domain options here specify the name for your domain as described above. Since we are going to be supporting Linux systems, we are going to want the AD schema to support RFC2307 settings, which allow for definitions for UID, GID, shell, home directory, and other settings, which Unix systems will require. The pair of options specified on our command-line is used for restricting what interfaces Samba will bind to. While not strictly required, it is a good practice to keep your Samba services bound to the internal interfaces. Finally, samba wants to be able to manage your DNS in order to add systems to the zone automatically. This is handled by a variety of available DNS backends. These include: SAMBA_INTERNAL: This is a built-in method where a samba process acts as a DNS service. This is a good quick option for small networks. 
BIND9_DLZ: This option allows you to tie your local named/bind9 instance in with your Samba server. It introduces a named plugin for bind versions 9.8.x/9.9.x to support reading host information directly out of the Samba data stores.
BIND_FLATFILE: This option is largely deprecated in favor of BIND9_DLZ, but it is still an option if you are running an older version of bind. It causes the Samba services to write out zone files periodically, which Bind may use.
Bind configuration
Now that Samba is set up to support BIND9_DLZ, we need to configure named to leverage it. There are a few pieces to this support:
tkey-gssapi-keytab: This setting in your named options section defines the Kerberos keytab file to use for DNS updates. This allows the Samba server to communicate with the Bind server in order to let it know about zone file changes.
dlz setting: This tells bind to load the dynamic module which Samba provides in order to have it read from Samba's data files.
Zone updating: In order to be able to update the zone file, you need to switch from an allow-update definition to update-policy, which allows more complex definitions, including Kerberos-based updates.
Apparmor rules changes: Ubuntu uses a Linux Security Module called Apparmor, which allows you to define the allowed actions of a particular executable. Apparmor contains rules restricting the access rights of the named process, but these existing rules do not account for integration with Samba. We need to adjust the existing rules to allow named to access some additional required resources.
Joining a Linux box to the domain
In order to participate in an AD-style domain, you must have the machine joined to the domain using Administrator credentials. This will create the machine's account within the database and provide credentials to the system for querying the LDAP server.
How to do it…
Install Samba, heimdal-clients, and winbind:
sudo apt-get install samba heimdal-clients winbind
Populate /etc/samba/smb.conf:
[global]
workgroup = EXAMPLE
realm = ad.example.org
security = ads
idmap uid = 10000-20000
idmap gid = 10000-20000
winbind enum users = yes
winbind enum groups = yes
template homedir = /home/%U
template shell = /bin/bash
winbind use default domain = yes
Join the system to the domain:
sudo net ads join -U Administrator
Configure the system to use winbind for account information in /etc/nsswitch.conf:
passwd: compat winbind
group: compat winbind
How it works…
To join a Linux box to an AD domain, you need to utilize winbind, which provides a PAM interface for interacting with Windows RPC calls for user authentication. Winbind requires that you set up your smb.conf file and then join the domain before it functions. Nsswitch.conf controls how glibc attempts to look up particular types of information. In our case, we are modifying it to talk to winbind for user and group information.
Most of the actual logic is in the smb.conf file itself, so let us take a look.
Define the AD domain we're working with, including both the workgroup/domain and the realm:
workgroup = EXAMPLE
realm = ad.example.org
Now we tell Samba to use Active Directory Services (ADS) security mode:
security = ads
AD domains use Windows Security IDs (SIDs) for providing unique user and group identifiers. In order to be compatible with Linux systems, we need to map those SIDs to UIDs and GIDs.
Since we're only dealing with a single client for now, we're going to let the local Samba instance map the SIDs to UIDs and GIDs from a range which we provide:
idmap uid = 10000-20000
idmap gid = 10000-20000
Some Unix utilities such as finger depend on the ability to loop through all of the user/group instances. On a large AD domain, this can be far too many entries, so winbind suppresses this capability by default. For now, we're going to want to enable it:
winbind enum users = yes
winbind enum groups = yes
Unless you go through specific steps to populate your AD domain with per-user home directory and shell information, Winbind will use templates for home directories and shells. We'll want to define these templates in order to avoid the defaults of /home/%D/%U (/home/EXAMPLE/user) and /bin/false:
template homedir = /home/%U
template shell = /bin/bash
The default winbind configuration takes users in the form of username@example.org rather than the more Unix-style plain username. Let's override that setting:
winbind use default domain = yes
Joining a Windows box to the domain
While not a Linux configuration topic, the most common use for an Active Directory domain is to manage a network of Windows systems. While the overarching topic of managing Windows via an AD domain is too large and out of scope for this article, let's look at how we can join a Windows system to our new domain.
How to do it…
Click on Start and go to Settings.
Click on System.
Select About.
Select Join a Domain.
Type in the name of your domain; ad.example.org in our case.
Enter your administrator credentials for the domain.
Select a user who will own the system.
How it works…
When you tell your Windows system to join an AD domain, it first attempts to find the domain by looking up a series of SRV records for the domain, including _ldap._tcp.dc._msdcs.ad.example.org, in order to determine which hosts to connect to within the domain for authentication purposes. From there, a connection is established.
Resources for Article:
Further resources on this subject:
OpenStack Networking in a Nutshell [article]
Zabbix Configuration [article]
Supporting hypervisors by OpenNebula [article]

Learning How to Manage Records in Visualforce

Packt
29 Sep 2016
7 min read
In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface—redrawing the form to change available picklist options, or capturing different information based on the user's selections. (For more resources related to this topic, see here.)
Styling fields as required
Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied, as shown in the following screenshot:
In the scenario where one or more inputs are required and there are additional validation rules, for example, when one of either the Email or Phone fields must be defined for a contact, this can lead to a drip feed of error messages to the user. This is because the user makes repeated unsuccessful attempts to submit the form, each time getting slightly further in the process.
Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user.
Getting ready
This topic makes use of a controller extension, so this must be created before the Visualforce page.
How to do it…
Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
Click on the New button.
Paste the contents of the RequiredStylingExt.cls Apex class from the code downloaded into the Apex Class area.
Click on the Save button.
Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
Click on the New button.
Enter RequiredStyling in the Label field.
Accept the default RequiredStyling that is automatically generated for the Name field.
Paste the contents of the RequiredStyling.page file from the code downloaded into the Visualforce Markup area and click on the Save button.
Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
Locate the entry for the RequiredStyling page and click on the Security link.
On the resulting page, select which profiles should have access and click on the Save button.
How it works…
Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.
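The validation itself lives in the RequiredStylingExt controller extension. The full class is part of the book's code download; the outline below is only an illustrative sketch of how such an extension is typically structured, and the member names and exact messages are my own assumptions rather than the book's code:
public with sharing class RequiredStylingExt {
    private ApexPages.StandardController stdCtrl;
    public RequiredStylingExt(ApexPages.StandardController std) {
        // keep a reference to the standard controller for the Contact being edited
        stdCtrl = std;
    }
    public PageReference save() {
        Contact cont = (Contact) stdCtrl.getRecord();
        Boolean error = false;
        // the Last Name input is not marked required on the page, so check it here
        if (String.isBlank(cont.LastName)) {
            ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR, 'Please enter the contact name'));
            error = true;
        }
        // at least one way of contacting the person must be supplied
        if (String.isBlank(cont.Email) && String.isBlank(cont.Phone)) {
            ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR, 'Please supply the email address or phone number'));
            error = true;
        }
        // only delegate to the standard save if every check passed
        return error ? null : stdCtrl.save();
    }
}
With an extension of this shape in place, a single click on Save surfaces every validation failure at once rather than drip-feeding them to the user.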
Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form: <apex:pageBlockSectionItem > <apex:outputLabel value="Last Name"/> <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput"> <apex:outputPanel layout="block" styleClass="requiredBlock" /> <apex:inputText value="{!Contact.LastName}"/> </apex:outputPanel> </apex:pageBlockSectionItem> The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes to ensure that if Salesforce changes the names of its style classes, this does not break the page. The controller extension save action method carries out validation of all fields and attaches error messages to the page for all validation failures: if (String.IsBlank(cont.name)) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please enter the contact name')); error=true; } if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please supply the email address or phone number')); error=true; } Styling table columns as required When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs. Getting ready This topic makes use of a custom controller, so this will need to be created before the Visualforce page. How to do it… First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. 
Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row: The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component: <apex:column > <apex:facet name="header"> <apex:outputText styleclass="requiredHeader" value="{!$ObjectType.Contact.fields.LastName.label}" /> </apex:facet> <apex:inputField value="{!contact.LastName}" required="false"/> </apex:column> The Visualforce page custom controller Save method checks if any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete: if ( (!String.IsBlank(cont.FirstName)) || (!String.IsBlank(cont.LastName)) ) { // a field is defined - check for last name if (String.IsBlank(cont.LastName)) { error=true; cont.LastName.addError('Please enter a value'); } String.IsBlank() is used as this carries out three checks at once: to check that the supplied string is not null, it is not empty, and it does not only contain whitespace. Summary Thus in this article we successfully mastered the techniques to style fields and table columns as per the custom needs. Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Using Spring JMX within Java Applications [Article]

Google Forms for Multiple Choice and Fill-in-the-blank Assignments

Packt
28 Sep 2016
14 min read
In this article by Michael Zhang, the author of the book Teaching with Google Classroom, we will see how to create multiple choice and fill-in-the-blank assignments using Google Forms. (For more resources related to this topic, see here.) The third-party app, Flubaroo will help grade multiple choice, fill-in-the-blank, and numeric questions. Before you can use Flubaroo, you will need to create the assignment and deploy it on Google Classroom. The Form app of Google within the Google Apps for Education (GAFE) allows you to create online surveys, which you can use as assignments. Google Forms then outputs the values of the form into a Google Sheet, where the Google Sheet add-on, Flubaroo grades the assignment. After using Google Forms and Flubaroo for assignments, you may decide to also use it for exams. However, while Google Forms provides a means for creating the assessment and Google Classroom allows you to easily distribute it to your students, there is no method to maintain security of the assessment. Therefore, if you choose to use this tool for summative assessment, you will need to determine an appropriate level of security. (Often, there is nothing that prevents students from opening a new tab and searching for an answer or from messaging classmates.) For example, in my classroom, I adjusted the desks so that there was room at the back of the classroom to pace during a summative assessment. Additionally, some school labs include a teacher's desktop that has software to monitor student desktops. Whatever method you choose, take precautions to ensure the authenticity of student results when assessing students online. Google Forms is a vast Google App that requires its own book to fully explore its functionality. Therefore, the various features you will explore in this article will focus on the scope of creating and assessing multiple choice and fill-in-the-blank assignments. However, once you are familiar with Google Forms, you will find additional applications. For example, in my school, I work with the administration to create forms to collect survey data from stakeholders such as staff, students, and parents. Recently, for our school's annual Open House, I created a form to record the number of student volunteers so that enough food for the volunteers would be ordered. Also, during our school's major fundraiser, I developed a Google Form for students to record donations so that reports could be generated from the information more quickly than ever before. The possibilities of using Google Forms within a school environment are endless! In this article, you will explore the following topics: Creating an assignment with Google Forms Installing the Flubaroo Google Sheets add-on Assessing an assignment with Flubaroo Creating a Google Form Since Google Forms is not as well known as apps such as Gmail or Google Calendar, it may not be immediately visible in the App Launcher. To create a Google Form, follow these instructions: In the App Launcher, click on the More section at the bottom: Click on the Google Forms icon: If there is still no Google Forms app icon, open a new tab and type forms.google.com into the address bar. Click on the Blank template to create a new Google Form: Google Forms has a recent update to Google's Material Design interface. This article will use screenshots from the new Google Forms look. 
Therefore, if you see a banner with Try the new Google Forms, click on the banner to launch the new Google Forms App: To name the Google Form, click on Untitled form in the top-left corner and type in the name. This will also change the name of the form. If necessary, you can click on the form title to change the title afterwards: Optionally, you can add a description to the Google Form directly below the form title: Often, I use the description to provide further instructions or information such as time limit, whether dictionaries or other reference books are permissible or even website addresses to where they can find information related to the assignment. Adding questions to a Google form By default, each new Google Form will already have a multiple choice card inserted into the form. In order to access the options, click on anywhere along the white area beside Untitled Question: The question will expand to form a question card where you can make changes to the question: Type the question stem in the Untitled Question line. Then, click on Option 1 to create a field to change it to a selection: To add additional selectors, click on the Add option text below the current selector or simply press the Enter key on the keyboard to begin the next selector. Because of the large number of options in a question card, the following screenshot provides a brief description of these options: A: Question title B: Question options C: Move the option indicator. Hovering your mouse over an option will show this indicator that you can click and drag to reorder your options. D: Move the question indicator. Clicking and dragging this indicator will allow you to reorder your questions within the assignment. E: Question type drop-down menu. There are several types of questions you can choose from. However, not all will work with the Flubaroo grading add-on. The following screenshot displays all question types available: F: Remove option icon. G: Duplicate question button. Google Forms will make a copy of the current question. H: Delete question button. I: Required question switch. By enabling this option, students must answer this question in order to complete the assignment. J: More options menu. Depending on the type of question, this section will provide options to enable a hint field below the question title field, create nonlinear multiple choice assignments, and validate data entered into a specific field. Flubaroo grades the assignment from the Google Sheet that Google Forms creates. It matches the responses of the students with an answer key. While there is tolerance for case sensitivity and a range of number values, it cannot effectively grade answers in the sentence or paragraph form. Therefore, use only short answers for the fill-in-the-blank or numerical response type questions and avoid using paragraph questions altogether for Flubaroo graded assignments. Once you have completed editing your question, you can use the side menu to add additional questions to your assignment. You can also add section headings, images, YouTube videos, and additional sections to your assignment. The following screenshot provides a brief legend for the icons:   To create a fill-in-the-blank question, use the short answer question type. When writing the question stem, use underscores to indicate where the blank is in the question. You may need to adjust the wording of your fill-in-the-blank questions when using Google Forms. 
Following is an example of a fill-in-the-blank question: Identify your students Be sure to include fields for your students name and e-mail address. The e-mail address is required so that Flubaroo can e-mail your student their responses when complete. Google Forms within GAFE also has an Automatically collect the respondent's username option in the Google Form's settings, found in the gear icon. If you use the automatic username collection, you do not need to include the name and e-mail fields. Changing the theme of a Google form Once you have all the questions in your Google Form, you can change the look and feel of the Google Form. To change the theme of your assignment, use the following steps: Click on the paint pallet icon in the top-right corner of the Google Form: For colors, select the desired color from the options available. If you want to use an image theme, click on the image icon at the bottom right of the menu: Choose a theme image. You can narrow the type of theme visible by clicking on the appropriate category in the left sidebar: Another option is to upload your own image as the theme. Click on the Upload photos option in the sidebar or select one image from your Google Photos using the Your Albums option. The application for Google Forms within the classroom is vast. With the preceding features, you can add images and videos to your Google Form. Furthermore, in conjunction with the Google Classroom assignments, you can add both a Google Doc and a Google Form to the same assignment. An example of an application is to create an assignment in Google Classroom where students must first watch the attached YouTube video and then answer the questions in the Google Form. Then Flubaroo will grade the assignment and you can e-mail the students their results. Assigning the Google Form in Google classroom Before you assign your Google Form to your students, preview the form and create a key for the assignment by filling out the form first. By doing this first, you will catch any errors before sending the assignment to your students, and it will be easier to find when you have to grade the assignment later. Click on the eye shaped preview icon in the top-right corner of the Google form to go to the live form: Fill out the form with all the correct answers. To find this entry later, I usually enter KEY in the name field and my own e-mail address for the e-mail field. Now the Google Form is ready to be assigned in Google Classroom. In Google Classroom, once students have submitted a Google Form, Google Classroom will automatically mark the assignment as turned in. Therefore, if you are adding multiple files to an assignment, add the Google Form last and avoid adding multiple Google Forms to a single assignment. To add a Google Form to an assignment, follow these steps: In the Google Classroom assignment, click on the Google Drive icon: Select the Google Form and click on the Add button: Add any additional information and assign the assignment. Installing Flubaroo Flubaroo, like Goobric and Doctopus, is a third-party app that provides additional features that help save time grading assignments. Flubaroo requires a one-time installation into Google Sheets before it can grade Google Form responses. While we can install the add-on in any Google Sheet, the following steps will use the Google Sheet created by Google Forms: In the Google Form, click on the RESPONSES tab at the top of the form: Click on the Google Sheets icon: A pop-up will appear. 
The default selection is to create a new Google Sheet. Click on the CREATE button: A new tab will appear with a Google Sheet with the Form's responses. Click on the Add-ons menu and select Get add-ons…: Flubaroo is a popular add-on and may be visible in the first few apps to click on. If not, search for the app with the search field and then click on it in the search results. Click on the FREE button: The permissions pop-up will appear. Scroll to the bottom and click on the Allow button to activate Flubaroo: A pop-up and sidebar will appear in Google Sheets to provide announcements and additional instructions to get started: Assessing using Flubaroo When your students have submitted their Google Form assignment, you can grade them with Flubaroo. There are two different settings for grading with it—manual and automatic. Manual grading will only grade responses when you initiate the grading; whereas, automatic grading will grade responses as they are submitted. Manual grading To assess a Google Form assignment with Flubaroo, follow these steps: If you have been following along from the beginning of the article, select Grade Assignment in the Flubaroo submenu of the Add-ons menu: If you have installed Flubaroo in a Google Sheet that is not the form responses, you will need to first select Enable Flubaroo in this sheet in the Flubaroo submenu before you will be able to grade the assignment: A pop-up will guide you through the various settings of Flubaroo. The first page is to confirm the columns in the Google Sheet. Flubaroo will guess whether the information in a column identifies the student or is graded normally. Under the Grading Options drop-down menu, you can also select Skip Grading or Grade by Hand. If the question is undergoing normal grading, you can choose how many points each question is worth. Click on the Continue button when all changes are complete: In my experience, Flubaroo accurately guesses which fields identify the student. Therefore, I usually do not need to make changes to this screen unless I am skipping questions or grading certain ones by hand. The next page shows all the submissions to the form. Click on the radio button beside the submission that is the key and then click on the Continue button: Flubaroo will show a spinning circle to indicate that it is grading the assignment. It will finish when you see the following pop-up: When you close the pop-up, you will see a new sheet created in the Google Sheet summarizing the results. You will see the class average, the grades of individual students as well as the individual questions each student answered correctly: Once Flubaroo grades the assignment, you can e-mail students the results. In the Add-ons menu, select Share Grades under the Flubaroo submenu: A new pop-up will appear. It will have options to select the appropriate column for the e-mail of each submission, the method to share grades with the students, whether to list the questions so that students know which questions they got right and which they got wrong, whether to include an answer key, and a message to the students. The methods to share grades include e-mail, a file in Google Drive, or both. Once you have chosen your selections, click on the Continue button: A pop-up will confirm that the grades have successfully been e-mailed. Google Apps has a daily quota of 2000 sent e-mails (including those sent in Gmail or any other Google App). While normally not an issue. 
If you are using Flubaroo on a large scale, such as a district-wide Google Form, this limit may prevent you from e-mailing results to students. In this case, use the Google Drive option instead. If needed, you can regrade submissions. By selecting this option in the Flubaroo submenu, you will be able to change settings, such as using a different key, before Flubaroo will regrade all the submissions. Automatic grading Automatic grading provides students with immediate feedback once they submit their assignments. You can enable automatic grading after first setting up manual grading so that any late assignments get graded. Or you can enable automatic grading before assigning the assignment. To enable automatic grading on a Google Sheet that has already been manually graded, select Enable Autograde from the Advanced submenu of Flubaroo, as shown in the following screenshot: A pop-up will appear allowing you to update the grading or e-mailing settings that were set during the manual grading. If you select no, then you will be taken through all the pop-up pages from the Manual grading section so that you can make necessary changes. If you have not graded the assignment manually, when you select Enable Autograde, you will be prompted by a pop-up to set up grading and e-mailing settings, as shown in the following screenshot. Clicking on the Ok button will take you through the setting pages shown in the preceding Manual grading section: Summary In this article, you learned how to create a Google Form, assign it in Google Classroom, and grade it with the Google Sheet's Flubaroo add-on. Using all these apps to enhance Google Classroom shows how the apps in the GAFE suite interact with each other to provide a powerful tool for you. Resources for Article: Further resources on this subject: Mapping Requirements for a Modular Web Shop App [article] Using Spring JMX within Java Applications [article] Fine Tune Your Web Application by Profiling and Automation [article]

Extra, Extra Collection, and Closure Changes that Rock!

Packt
28 Sep 2016
19 min read
In this article by Keith Elliott, author of, Swift 3 New Features , we are focusing on collection and closure changes in Swift 3. There are several nice additions that will make working with collections even more fun. We will also explore some of the confusing side effects of creating closures in Swift 2.2 and how those have been fixed in Swift 3. (For more resources related to this topic, see here.) Collection and sequence type changes Let’s begin our discussion with Swift 3 changes to Collection and Sequence types. Some of the changes are subtle and others are bound to require a decent amount of refactoring to your custom implementations. Swift provides three main collection types for warehousing your values: arrays, dictionaries, and sets. Arrays allow you to store values in an ordered list. Dictionaries provide unordered the key-value storage for your collections. Finally, sets provide an unordered list of unique values (that is, no duplicates allowed). Lazy FlatMap for sequence of optionals [SE-0008] Arrays, sets, and dictionaries are implemented as generic types in Swift. They each implement the new Collection protocol, which implements the Sequence protocol. Along this path from top-level type to Sequence protocol, you will find various other protocols that are also implemented in this inheritance chain. For our discussion on flatMap and lazy flatMap changes, I want to focus in on Sequences. Sequences contain a group of values that allow the user to visit each value one at a time. In Swift, you might consider using a for-in loop to iterate through your collection. The Sequence protocol provides implementations of many operations that you might want to perform on a list using sequential access, all of which you could override when you adopt the protocol in your custom collections. One such operation is the flatMap function, which returns an array containing the flattened (or rather concatenated) values, resulting from a transforming operation applied to each element of the sequence. let scores = [0, 5, 6, 8, 9] .flatMap{ [$0, $0 * 2] } print(scores) // [0, 0, 5, 10, 6, 12, 8, 16, 9, 18]   In our preceding example, we take a list of scores and call flatMap with our transforming closure. Each value is converted into a sequence containing the original value and a doubled value. Once the transforming operations are complete, the flatMap method flattens the intermediate sequences into a single sequence. We can also use the flatMap method with Sequences that contain optional values to accomplish a similar outcome. This time we are omitting values from the sequence we flatten by return nil on the transformation. let oddSquared = [1, 2, 3, 4, 5, 10].flatMap { n in n % 2 == 1 ? n*n : nil } print(oddSquared) // [1, 9, 25] The previous two examples were fairly basic transformations on small sets of values. In a more complex situation, the collections that you need to work with might be very large with expensive transformation operations. Under those parameters, you would not want to perform the flatMap operation or any other costly operation until it was absolutely needed. Luckily, in Swift, we have lazy operations for this very use case. Sequences contain a lazy property that returns a LazySequence that can perform lazy operations on Sequence methods. Using our first example, we can obtain a lazy sequence and call flatMap to get a lazy implementation. Only in the lazy scenario, the operation isn’t completed until scores is used sometime later in code. 
let scores = [0, 5, 6, 8, 9] .lazy .flatMap{ [$0, $0 * 2] } // lazy assignment has not executed for score in scores{ print(score) } The lazy operation works, as we would expect in our preceding test. However, when we use the lazy form of flatMap with our second example that contains optionals, our flatMap executes immediately in Swift 2. While we expected oddSquared variable to hold a ready to run flatMap, delayed until we need it, we instead received an implementation that was identical to the non-lazy version. let oddSquared = [1, 2, 3, 4, 5, 10] .lazy // lazy assignment but has not executed .flatMap { n in n % 2 == 1 ? n*n : nil } for odd in oddSquared{ print(odd) } Essentially, this was a bug in Swift that has been fixed in Swift 3. You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0008-lazy-flatmap-for-optionals.md Adding the first(where:) method to sequence A common task for working with collections is to find the first element that matches a condition. An example would be to ask for the first student in an array of students whose test scores contain a 100. You can accomplish this using a predicate to return the filtered sequence that matched the criteria and then just give back the first student in the sequence. However, it would be much easier to just call a single method that could return the item without the two-step approach. This functionality was missing in Swift 2, but was voted in by the community and has been added for this release. In Swift 3, there is a now an extension method on the Sequence protocol to implement first(where:). ["Jack", "Roger", "Rachel", "Joey"].first { (name) -> Bool in name.contains("Ro") } // =>returns Roger This first(where:) extension is a nice addition to the language because it ensures that a simple and common task is actually easy to perform in Swift. You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0032-sequencetype-find.md. Add sequence(first: next:) and sequence(state: next:) public func sequence<T>(first: T, next: @escaping (T) -> T?) -> UnfoldSequence<T, (T?, Bool)> public func sequence<T, State>(state: State, next: @escaping (inout State) -> T?) -> UnfoldSequence<T, State> public struct UnfoldSequence<Element, State> : Sequence, IteratorProtocol These two functions were added as replacements to the C-style for loops that were removed in Swift 3 and to serve as a compliment to the global reduce function that already exists in Swift 2. What’s interesting about the additions is that each function has the capability of generating and working with infinite sized sequences. Let’s examine the first sequence function to get a better understanding of how it works. /// - Parameter first: The first element to be returned from the sequence. /// - Parameter next: A closure that accepts the previous sequence element and /// returns the next element. /// - Returns: A sequence that starts with `first` and continues with every /// value returned by passing the previous element to `next`. /// func sequence<T>(first: T, next: @escaping (T) -> T?) -> UnfoldSequence<T, (T?, Bool)> The first sequence method returns a sequence that is created from repeated invocations of the next parameter, which holds a closure that will be lazily executed. The return value is an UnfoldSequence that contains the first parameter passed to the sequence method plus the result of applying the next closure on the previous value. 
The sequence is finite if next eventually returns nil and is infinite if next never returns nil.
let mysequence = sequence(first: 1.1) { $0 < 2 ? $0 + 0.1 : nil }
for x in mysequence{
    print(x)
}
// 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0
In the preceding example, we create and assign our sequence using the trailing closure form of sequence(first: next:). Our finite sequence will begin with 1.1 and will call next repeatedly until our next result is greater than 2, at which point next will return nil. We could easily convert this to an infinite sequence by removing our condition that our previous value must not be greater than 2.
/// - Parameter state: The initial state that will be passed to the closure.
/// - Parameter next: A closure that accepts an `inout` state and returns the
/// next element of the sequence.
/// - Returns: A sequence that yields each successive value from `next`.
///
public func sequence<T, State>(state: State, next: (inout State) -> T?) -> UnfoldSequence<T, State>
The second sequence function maintains mutable state that is passed to all lazy calls of next to create and return a sequence. This version of the sequence function uses a passed-in closure that allows you to update the mutable state each time next is called. As was the case with our first sequence function, a finite sequence ends when next returns nil. You can turn a finite sequence into an infinite one by never returning nil when next is called. Let's create an example of how this version of the sequence method might be used.
Traversing a hierarchy of views with nested views or any list of nested types is a perfect task for using the second version of the sequence function. Let's create an Item class that has two properties: a name property and an optional parent property to keep track of the item's owner. The ultimate owner will not have a parent, meaning the parent property will be nil.
class Item{
    var parent: Item?
    var name: String = ""
}
Next, we create a parent and two nested children items. Child1's parent will be the parent item and child2's parent will be child1.
let parent = Item()
parent.name = "parent"
let child1 = Item()
child1.name = "child1"
child1.parent = parent
let child2 = Item()
child2.name = "child2"
child2.parent = child1
Now, it's time to create our sequence. The sequence needs two parameters from us: a state parameter and a next closure. I made the state an Item with an initial value of child2. The reason for this is because I want to start at the lowest leaf of my tree and traverse to the ultimate parent. Our example only has three levels, but you could have lots of levels in a more complex example. As for the next parameter, I'm using a closure expression that expects a mutable Item as its state. My closure will also return an optional Item. In the body of our closure, I use our current Item (the mutable state parameter) to access the item's parent. I also update the state and return the parent.
let itemSeq = sequence(state: child2, next: { (next: inout Item)->Item? in
    let parent = next.parent
    next = parent != nil ? parent! : next
    return parent
})
for item in itemSeq{
    print("name: \(item.name)")
}
There are some gotchas here that I want to address so that you will better understand how to define your own next closure for this sequence method. The state parameter could really be anything you want it to be. It's for your benefit in helping you determine the next element of the sequence and to give you relevant information about where you are in the sequence.
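One way to make the state more informative, an idea the gotchas below come back to, is to have it carry a depth counter alongside the current item. The following sketch is my own illustration rather than the book's sample code; it reuses the child2 instance created above:
let itemSeqWithDepth = sequence(state: (child2, 0)) { (state: inout (Item, Int)) -> (String, Int)? in
    // stop when the current item has no parent, keeping the sequence finite
    guard let parent = state.0.parent else { return nil }
    // update the state: move up one level and bump the counter
    state = (parent, state.1 + 1)
    return (parent.name, state.1)
}
for (name, level) in itemSeqWithDepth {
    print("level \(level): \(name)")   // level 1: child1, level 2: parent
}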
One idea to improve our example above would be to track how many levels of nesting we have: we could make our state a tuple that contains an integer counter for the nesting level along with the current item. The next closure needs to be expanded to show the signature. Because of Swift's expressiveness and conciseness when it comes to closures, you might be tempted to convert the next closure into a shorter form and omit the signature. Do not do this unless your next closure is extremely simple and you are positive that the compiler will be able to infer your types. Your code will be harder to maintain when you use the short closure format, and you won't get extra points for style when someone else inherits it.
Don't forget to update your state parameter in the body of your closure. This really is your best chance to know where you are in your sequence. Forgetting to update the state will probably cause you to get unexpected results when you try to step through your sequence.
Make a clear decision ahead of time about whether you are creating a finite or infinite sequence. This decision is evident in how you return from your next closure. An infinite sequence is not bad to have when you are expecting it. However, if you iterate over this sequence using a for-in loop, you could get more than you bargained for, especially if you were assuming the loop would end.
A new Model for Collections and Indices [SE-0065]
Swift 3 introduces a new model for collections that moves the responsibility of index traversal from the index to the collection itself. To make this a reality for collections, the Swift team introduced four areas of change:
The Index property of a collection can be any type that implements the Comparable protocol
Swift removes any distinction between intervals and ranges, leaving just ranges
Private index traversal methods are now public
Changes to ranges make closed ranges work without the potential for errors
You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0065-collections-move-indices.md.
Introducing the collection protocol
In Swift 3, collection types such as Arrays, Sets, and Dictionaries are generic types that implement the newly created Collection protocol. This change was needed in order to support traversal on the collection. If you want to create custom collections of your own, you will need to understand the Collection protocol and where it lives in the collection protocol hierarchy. We are going to cover the important aspects of the new collection model to ease your transition and to get you ready to create custom collection types of your own.
The Collection protocol builds on the Sequence protocol to provide methods for accessing specific elements when using a collection. For example, you can use a collection's index(_:offsetBy:) method to return an index that is a specified distance away from the reference index.
let numbers = [10, 20, 30, 40, 50, 60]
let twoAheadIndex = numbers.index(numbers.startIndex, offsetBy: 2)
print(numbers[twoAheadIndex]) //=> 30
In the preceding example, we create the twoAheadIndex constant to hold the position in our numbers collection that is two positions away from our starting index. We simply use this index to retrieve the value from our collection using subscript notation.
Conforming to the Collection Protocol If you would like to create your own custom collections, you need to adopt the Collection protocol by declaring startIndex and endIndex properties, a subscript to support access to your elements, and the index(after: ) method to facilitate traversing your collection’s indices. When we are migrating existing types over to Swift 3, the migrator has some known issues with converting custom collections. It’s likely that you can easily resolve the compiler issues by checking the imported types for conformance to the Collection protocol. Additionally, you need to conform to the Sequence and IndexableBase protocols as the Collection protocol adopts them both. public protocol Collection : Indexable, Sequence { … } A simple custom collection could look like the following example. Note that I have defined my Index type to be an Int. In Swift 3, you define the Index to be any type that implements the Comparable protocol. struct MyCollection<T>: Collection{ typealias Index = Int var startIndex: Index var endIndex: Index var _collection: [T] subscript(position: Index) -> T{ return _collection[position] } func index(after i: Index) -> Index { return i + 1 } init(){ startIndex = 0 endIndex = 0 _collection = [] } mutating func add(item: T){ _collection.append(item) } } var myCollection: MyCollection<String> = MyCollection() myCollection.add(item: "Harry") myCollection.add(item: "William") myCollection[0] The Collection protocol has default implementations for most of its methods, the Sequence protocols methods, and the IndexableBase protocols methods. This means you are only required to provide a few things of your own. You can, however, implement as many of the other methods as make sense for your collection. New Range and Associated Indices Types Swift 2’s Range<T>, ClosedInterval<T>, and OpenInterval<T> are going away in Swift 3. These types are being replaced with four new types. Two of the new range types support general ranges with bounds that implement the Comparable protocol: Range<T> and ClosedRange<T>. The other two range types conform to RandomAccessCollection. These types support ranges whose bounds implement the Strideable protocol. Last, ranges are no longer iterable since ranges are now represented as a pair of indices. To keep legacy code working, the Swift team introduced an associated indices type, which is iterable. In addition, three generic types were created to provide a default indices type for each type of collection traversal category. The generics are DefaultIndices<C>, DefaultBidirectionalIndices<C>, and DefaultRandomAccessIndices<C>; each stores its underlying collection for traversal. Quick Takeaways I covered a lot of stuff in a just a few pages on collection types in Swift 3. Here are the highlights to keep in mind about the collections and indices. Collections types (built-in and custom) implement the Collection protocol. Iterating over collections has moved to the collection—the index no longer has that ability. You can create your own collections by adopting the Collection protocol. You need to implement: startIndex and endIndex properties The subscript method to support access to your elements The index(after: ) method to facilitate traversing your collection’s indices Closure changes for Swift 3 A closure in Swift is a block of code that can be used in a function call as a parameter or assigned to a variable to execute their functionality at a later time. 
Closures are a core feature to Swift and are familiar to developers that are new to Swift as they remind you of lambda functions in other programming languages. For Swift 3, there were two notable changes that I will highlight in this section. The first change deals with inout captures. The second is a change that makes non-escaping closures the default. Limiting inout Capture of @noescape Closures]In Swift 2, capturing inout parameters in an escaping closure was difficult for developers to understand. Closures are used everywhere in Swift especially in the standard library and with collections. Some closures are assigned to variables and then passed to functions as arguments. If the function that contains the closure parameter returns from its call and the passed in closure is used later, then you have an escaping closure. On the other hand, if the closure is only used within the function to which it is passed and not used later, then you have a nonescaping closure. The distinction is important here because of the mutating nature of inout parameters. When we pass an inout parameter to a closure, there is a possibility that we will not get the result we expect due to how the inout parameter is stored. The inout parameter is captured as a shadow copy and is only written back to the original if the value changes. This works fine most of the time. However, when the closure is called at a later time (that is, when it escapes), we don’t get the result we expect. Our shadow copy can’t write back to the original. Let’s look at an example. var seed = 10 let simpleAdderClosure = { (inout seed: Int)->Int in seed += 1 return seed * 10 } var result = simpleAdderClosure(&seed) //=> 110 print(seed) // => 11 In the preceding example, we get what we expect. We created a closure to increment our passed in inout parameter and then return the new parameter multiplied by 10. When we check the value of seed after the closure is called, we see that the value has increased to 11. In our second example, we modify our closure to return a function instead of just an Int value. We move our logic to the closure that we are defining as our return value. let modifiedClosure = { (inout seed: Int)-> (Int)->Int in return { (Int)-> Int in seed += 1 return seed * 10 } } print(seed) //=> 11 var resultFn = modifiedClosure(&seed) var result = resultFn(1) print(seed) // => 11 This time when we execute the modifiedClosure with our seed value, we get a function as the result. After executing this intermediate function, we check our seed value and see that the value is unchanged; even though, we are still incrementing the seed value. These two slight differences in syntax when using inout parameters generate different results. Without knowledge of how shadow copy works, it would be hard understand the difference in results. Ultimately, this is just another situation where you receive more harm than good by allowing this feature to remain in the language. You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0035-limit-inout-capture.md. Resolution In Swift 3, the compiler now limits inout parameter usage with closures to non-escaping (@noescape). You will receive an error if the compiler detects that your closure escapes when it contains inout parameters. Making non-escaping closures the default [SE-0103] You can read the proposal at https://github.com/apple/swift-evolution/blob/master/proposals/0103-make-noescape-default.md. 
In previous versions of Swift, the default behavior of function parameters whose type was a closure was to allow escaping. This made sense as most of the Objective-C blocks (closures in Swift) imported into Swift were escaping. The delegation pattern in Objective-C, as implemented as blocks, was composed of delegate blocks that escaped. So, why would the Swift team want to change the default to non-escaping as the default? The Swift team believes you can write better functional algorithms with non-escaping closures. An additional supporting factor is the change to require non-escaping closures when using inout parameters with the closure [SE-0035]. All things considered, this change will likely have little impact on your code. When the compiler detects that you are attempting to create an escaping closure, you will get an error warning that you are possibly creating an escaping closure. You can easily correct the error by adding @escaping or via the fixit that accompanies the error. In Swift 2.2: var callbacks:[String : ()->String] = [:] func myEscapingFunction(name:String, callback:()->String){ callbacks[name] = callback } myEscapingFunction("cb1", callback: {"just another cb"}) for cb in callbacks{ print("name: (cb.0) value: (cb.1())") } In Swift 3: var callbacks:[String : ()->String] = [:] func myEscapingFunction(name:String, callback: @escaping ()->String){ callbacks[name] = callback } myEscapingFunction(name:"cb1", callback: {"just another cb"}) for cb in callbacks{ print("name: (cb.0) value: (cb.1())") } Summary In this article, we covered changes to collections and closures. You learned about the new Collection protocol that forms the base of the new collection model and how to adopt the protocol in our own custom collections. The new collection model made a significant change in moving collection traversal from the index to the collection itself. The new collection model changes are necessary in order to support Objective-C interactivity and to provide a mechanism to iterate over the collections items using the collection itself. As for closures, we also explored the motivation for the language moving to non-escaping closures as the default. You also learned how to properly use inout parameters with closures in Swift 3. Resources for Article: Further resources on this subject: Introducing the Swift Programming Language [article] Concurrency and Parallelism with Swift 2 [article] Exploring Swift [article]

Unsupervised Learning

Packt
28 Sep 2016
11 min read
In this article by Bastiaan Sjardin, Luca Massaron, and Alberto Boschetti, the authors of the book Large Scale Machine Learning with Python, we will try to create new features and variables at scale in the observation matrix. We will introduce the unsupervised methods and illustrate principal component analysis (PCA)—an effective way to reduce the number of features. (For more resources related to this topic, see here.)
Unsupervised methods
Unsupervised learning is a branch of machine learning whose algorithms reveal inferences from data without an explicit label (unlabeled data). The goal of such techniques is to extract hidden patterns and group similar data. In these algorithms, the unknown parameters of interest of each observation (the group membership and topic composition, for instance) are often modeled as latent variables (or a series of hidden variables), hidden in the system of observed variables that cannot be observed directly, but only deduced from the past and present outputs of the system. Typically, the output of the system contains noise, which makes this operation harder.
In common problems, unsupervised methods are used in two main situations:
With labeled datasets to extract additional features to be processed by the classifier/regressor down the processing chain. Enhanced by additional features, they may perform better.
With labeled or unlabeled datasets to extract some information about the structure of the data. This class of algorithms is commonly used during the Exploratory Data Analysis (EDA) phase of the modeling.
First of all, before starting with our illustration, let's import the modules that will be necessary throughout the article in our notebook:
In:
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import pylab
%matplotlib inline
import matplotlib.cm as cm
import copy
import tempfile
import os
Feature decomposition – PCA
PCA is an algorithm commonly used to decompose the dimensions of an input signal and keep just the principal ones. From a mathematical perspective, PCA performs an orthogonal transformation of the observation matrix, outputting a set of linearly uncorrelated variables, named principal components. The output variables form a basis set, where each component is orthonormal to the others. Also, it's possible to rank the output components (in order to use just the principal ones) as the first component is the one containing the largest possible variance of the input dataset, the second is orthogonal to the first (by definition) and contains the largest possible variance of the residual signal, and the third is orthogonal to the first two and contains the largest possible variance of the residual signal, and so on.
A generic transformation with PCA can be expressed as a projection to a space. If just the principal components are taken from the transformation basis, the output space will have a smaller dimensionality than the input one. Mathematically, it can be expressed as follows:
y = X · T
Here, X is a generic point of the training set of dimension N, T is the transformation matrix coming from PCA, and y is the output vector. Note that the · symbol indicates a dot product in this matrix equation. From a practical perspective, also note that all the features of X must be zero-centered before doing this operation.
Let's now start with a practical example; later, we will explain the math behind PCA in depth.
In this example, we will create a dummy dataset composed of two blobs of points—one centered in (-5, 0) and the other one in (5, 5). Let's use PCA to transform the dataset and plot the output compared to the input. In this simple example, we will use all the features, that is, we will not perform feature reduction:

In: from sklearn.datasets.samples_generator import make_blobs
from sklearn.decomposition import PCA

X, y = make_blobs(n_samples=1000, random_state=101, centers=[[-5, 0], [5, 5]])
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
pca_comp = pca.components_.T
test_point = np.matrix([5, -2])
test_point_pca = pca.transform(test_point)
plt.subplot(1, 2, 1)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='none')
plt.quiver(0, 0, pca_comp[:,0], pca_comp[:,1], width=0.02, scale=5, color='orange')
plt.plot(test_point[0, 0], test_point[0, 1], 'o')
plt.title('Input dataset')
plt.subplot(1, 2, 2)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, edgecolors='none')
plt.plot(test_point_pca[0, 0], test_point_pca[0, 1], 'o')
plt.title('After "lossless" PCA')
plt.show()

As you can see, the output is more organized than the original features' space and, if the next task is a classification, it would require just one feature of the dataset, saving almost 50% of the space and computation needed. In the image, you can clearly see the core of PCA: it's just a projection of the input dataset onto the transformation basis drawn in orange in the image on the left. Are you unsure about this? Let's test it:

In: print "The blue point is in", test_point[0, :]
print "After the transformation is in", test_point_pca[0, :]
print "Since (X-MEAN) * PCA_MATRIX = ", np.dot(test_point - pca.mean_, pca_comp)

Out: The blue point is in [[ 5 -2]]
After the transformation is in [-2.34969911 -6.2575445 ]
Since (X-MEAN) * PCA_MATRIX =  [[-2.34969911 -6.2575445 ]]

Now, let's dig into the core problem: how is it possible to generate T from the training set? It should contain orthonormal vectors, and the vectors should be ranked according to the quantity of variance (that is, the energy or information carried by the observation matrix) that they can explain. Many solutions have been implemented, but the most common implementation is based on Singular Value Decomposition (SVD). SVD is a technique that decomposes any matrix M into three matrices (U, Σ, and W) with special properties and whose multiplication gives back M again:

M = U · Σ · Wᵀ

Specifically, given M, a matrix of m rows and n columns, the resulting elements of the equivalence are as follows:

U is an m x m matrix (a square matrix), it's unitary, and its columns form an orthonormal basis. They're named left singular vectors, or input singular vectors, and they're the eigenvectors of the matrix product M · Mᵀ.
Σ is an m x n matrix, which has non-zero elements only on its diagonal. These values are named singular values, they are all non-negative, and their squares are the eigenvalues of both Mᵀ · M and M · Mᵀ.
W is a unitary n x n matrix (a square matrix), its columns form an orthonormal basis, and they're named right (or output) singular vectors. They are the eigenvectors of the matrix product Mᵀ · M.

Why is this needed? The solution is pretty easy: the goal of PCA is to try and estimate the directions where the variance of the input dataset is largest. For this, we first need to remove the mean from each feature and then operate on the covariance matrix, Xᵀ · X (computed on the zero-centered data).
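If you want to verify the SVD–PCA relationship numerically, here is a short sketch (again, not from the book's code) that recomputes the decomposition with numpy.linalg.svd on the zero-centered blob data and compares it with what scikit-learn's PCA returns; it imports make_blobs from the plain sklearn.datasets path, which is equivalent to the import used above.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=1000, random_state=101, centers=[[-5, 0], [5, 5]])
pca = PCA(n_components=2).fit(X)

X_centered = X - X.mean(axis=0)                  # remove the mean from each feature
U, s, Wt = np.linalg.svd(X_centered, full_matrices=False)

# The rows of Wt (the right singular vectors) match pca.components_ up to a sign flip,
# and the squared singular values divided by (n - 1) give the explained variances.
print(np.allclose(np.abs(Wt), np.abs(pca.components_)))
print(np.allclose(s ** 2 / (len(X) - 1), pca.explained_variance_))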
Putting this together: by decomposing the matrix X with SVD, we obtain the columns of the matrix W, which are the principal components of the covariance (that is, the matrix T we are looking for), the diagonal of Σ, whose squared values are proportional to the variance explained by each principal component, and the product U · Σ, which gives the coordinates of the observations on those components. Here's why PCA is always done with SVD. Let's see it now on a real example. Let's test it on the Iris dataset, extracting the first two principal components (that is, passing from a dataset composed of four features to one composed of two):

In: from sklearn import datasets

iris = datasets.load_iris()
X = iris.data
y = iris.target
print "Iris dataset contains", X.shape[1], "features"
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print "After PCA, it contains", X_pca.shape[1], "features"
print "The variance is [% of original]:", sum(pca.explained_variance_ratio_)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, edgecolors='none')
plt.title('First 2 principal components of Iris dataset')
plt.show()

Out: Iris dataset contains 4 features
After PCA, it contains 2 features
The variance is [% of original]: 0.977631775025

This is the analysis of the outputs of the process:

The explained variance is almost 98% of the original variance of the input.
The number of features has been halved, but only 2% of the information is not in the output, hopefully just noise.
From a visual inspection, it seems that the different classes composing the Iris dataset are separated from each other. This means that a classifier working on such a reduced set will have comparable performance in terms of accuracy, but will be faster to train and run predictions.

As a proof of the second point, let's now try to train and test two classifiers, one using the original dataset and another using the reduced set, and print their accuracy:

In: from sklearn.linear_model import SGDClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score

def test_classification_accuracy(X_in, y_in):
    X_train, X_test, y_train, y_test = train_test_split(X_in, y_in, random_state=101, train_size=0.50)
    clf = SGDClassifier('log', random_state=101)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

print "SGDClassifier accuracy on Iris set:", test_classification_accuracy(X, y)
print "SGDClassifier accuracy on Iris set after PCA (2 components):", test_classification_accuracy(X_pca, y)

Out: SGDClassifier accuracy on Iris set: 0.586666666667
SGDClassifier accuracy on Iris set after PCA (2 components): 0.72

As you can see, this technique not only reduces the complexity and space of the learner down in the chain, but also helps achieve generalization (much as Ridge or Lasso regularization does). Now, if you are unsure how many components should be in the output, a typical rule of thumb is to choose the minimum number that is able to explain at least 90% (or 95%) of the input variance. Empirically, such a choice usually ensures that only the noise is cut off. So far, everything seems perfect: we found a great solution to reduce the number of features, building some with very high predictive power, and we also have a rule of thumb to guess the right number of them. Let's now check how scalable this solution is: we're investigating how it scales when the number of observations and features increases.
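Before moving on to scalability, here is a quick illustration of that rule of thumb (this snippet is not part of the book's code): fit a lossless PCA, look at the cumulative explained variance ratio, and keep the smallest number of components that crosses your threshold.

import numpy as np
from sklearn import datasets
from sklearn.decomposition import PCA

X = datasets.load_iris().data
spectrum = PCA(n_components=None).fit(X)                  # lossless fit, just to inspect the spectrum
cumulative = np.cumsum(spectrum.explained_variance_ratio_)
n_keep = int(np.argmax(cumulative >= 0.95)) + 1           # first component count reaching 95%
print(cumulative)
print(n_keep)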
The first thing to note is that the SVD algorithm, the core piece of PCA, is not stochastic; therefore, it needs the whole matrix in order to be able to extract its principal components. Now, let's see how scalable PCA is in practice on some synthetic datasets with an increasing number of features and observations. We will perform a full (lossless) decomposition (the argument passed while instantiating the PCA object is None), as asking for a lower number of features doesn't impact the performance (it's just a matter of slicing the output matrices of SVD). In the following code, we first create matrices with 10 thousand points and 20, 50, 100, 250, 500, 1,000, and 2,500 features to be processed by PCA. Then, we create matrices with 100 features and 1, 5, 10, 25, 50, and 100 thousand observations to be processed with PCA:

In: import time

def check_scalability(test_pca):
    pylab.rcParams['figure.figsize'] = (10, 4)

    # FEATURES
    n_points = 10000
    n_features = [20, 50, 100, 250, 500, 1000, 2500]
    time_results = []
    for n_feature in n_features:
        X, _ = make_blobs(n_points, n_features=n_feature, random_state=101)
        pca = copy.deepcopy(test_pca)
        tik = time.time()
        pca.fit(X)
        time_results.append(time.time()-tik)
    plt.subplot(1, 2, 1)
    plt.plot(n_features, time_results, 'o--')
    plt.title('Feature scalability')
    plt.xlabel('Num. of features')
    plt.ylabel('Training time [s]')

    # OBSERVATIONS
    n_features = 100
    n_observations = [1000, 5000, 10000, 25000, 50000, 100000]
    time_results = []
    for n_points in n_observations:
        X, _ = make_blobs(n_points, n_features=n_features, random_state=101)
        pca = copy.deepcopy(test_pca)
        tik = time.time()
        pca.fit(X)
        time_results.append(time.time()-tik)
    plt.subplot(1, 2, 2)
    plt.plot(n_observations, time_results, 'o--')
    plt.title('Observations scalability')
    plt.xlabel('Num. of training observations')
    plt.ylabel('Training time [s]')

    plt.show()

check_scalability(PCA(None))

Out: [two plots of training time: one against the number of features ("Feature scalability") and one against the number of observations ("Observations scalability")]

As you can clearly see, PCA based on SVD is not scalable: if the number of features increases linearly, the time needed to train the algorithm increases much faster than linearly. Also, the time needed to process a matrix with a few hundred thousand observations becomes too high and (not shown in the image) the memory consumption makes the problem unfeasible for a domestic computer (with 16 GB of RAM or less). It seems clear that a PCA based on SVD is not the solution for big data: fortunately, in recent years, many workarounds have been introduced.

Summary

In this article, we've introduced a popular unsupervised learner and examined how well it scales to cope with big data. PCA is able to reduce the number of features by creating new ones that contain the majority of the variance (that is, the principal components).

You can also refer to the following books on similar topics:

R Machine Learning Essentials: https://www.packtpub.com/big-data-and-business-intelligence/r-machine-learning-essentials
R Machine Learning By Example: https://www.packtpub.com/big-data-and-business-intelligence/r-machine-learning-example
Machine Learning with R - Second Edition: https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-r-second-edition

Resources for Article:
Further resources on this subject:
Machine Learning Tasks [article]
Introduction to Clustering and Unsupervised Learning [article]
Clustering and Other Unsupervised Learning Methods [article]
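As a closing aside to the scalability discussion above (not covered in this excerpt), one example of such a workaround available in scikit-learn is IncrementalPCA, which approximates the decomposition on mini-batches so the whole matrix never has to sit in memory at once. The sketch below is only meant to show the shape of that API, with made-up sizes:

from sklearn.datasets import make_blobs
from sklearn.decomposition import IncrementalPCA

X, _ = make_blobs(n_samples=100000, n_features=100, random_state=101)
ipca = IncrementalPCA(n_components=10, batch_size=1000)
for start in range(0, len(X), 1000):
    ipca.partial_fit(X[start:start + 1000])   # fit one mini-batch at a time
X_reduced = ipca.transform(X)
print(X_reduced.shape)                         # (100000, 10)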

article-image-fake-swift-enums-user-friendly-frameworks
Daniel Leping
28 Sep 2016
7 min read
Save for later

Fake Swift Enums and User-Friendly Frameworks

Daniel Leping
28 Sep 2016
7 min read
Swift Enums are a great and useful hybrid of classic enums with tuples. One of the most attractive points of consideration here is their short .something API when you want to pass one as an argument to a function. Still, they are quite limited in certain situations. OK, enough about enums themselves, because this post is about fake enums (don't take my word literally here). Why? Everybody is talking about usability, UX, and how to make the app user-friendly. This all makes sense, because at the end of the day we all are humans and perceive things with feelings. So the app just leaves the impression footprint. There are a lot of users who would prefer a more friendly app to the one providing a richer functionality. All these talks are proven by Apple and their products. But what about a developer, and what tools and bricks are we using? Right now I'm not talking about the IDEs, but rather about the frameworks and the APIs. Being an open source framework developer myself (right now I'm working on Swift Express, a web application framework in Swift, and the foundation around it), I'm concerned about the looks of the APIs we are providing. It matches one-to-one to the looks of the app, so it has the same importance. Call it Framework's UX if you would like. If the developer is pleased with the API the framework provides, he is 90% already your user. This post was particularly inspired by the Event sub-micro-framework, a part of the Reactive Swift foundation I'm creating right now to make Express solid. We took the basic idea from node.js EvenEmitter, which is very easy to use in my opinion. Though, instead of using the String Event ID approach provided by node, we wanted to use the .something approach (read above about what I think of nice APIs) and we are hoping that enums would work great, we encountered limitations with it. The hardest thing was to create a possibility to use different arguments for closures of different event types. It's all very simple with dynamic languages like JavaScript, but well, here we are type-safe. You know... The problem Usually, when creating a new framework, I try to first write an ideal API, or the one I would like to see at the very end. And here is what I had: eventEmitter.on(.stringevent) { string in print("string:", string) } eventEmitter.on(.intevent) { i in print("int:", i) } eventEmitter.on(.complexevent) { (s, i) in print("complex: string:", s, "int:", i) } This seems easy enough for the final user. What I like the most here is that different EventEmitters can have different sets of events with specific data payloads and still provide the .something enum style notation. This is easier to say than to do. With enums, you cannot have an associated type bound to a specific case. It must be bound to the entire enum, or nothing. So there is no possibility to provide a specific data payload for a particular case. But I was very clear that I want everything type-safe, so there can be no dynamic arguments. The research First of all, I began investigating if there is a possibility to use the .something notation without using enums. The first thing I recalled was the OptionSetType that is mainly used to combine the flags for C APIs. And it allows the .somthing notation. You might want to investigate this protocol as it's just cool and useful in many situations where enums are not enough. After a bit of experimenting, I found out that any class or struct having a static member of Self type can mimic an enum. 
Pretty much like this: struct TestFakeEnum { private init() { } static let one:TestFakeEnum = TestFakeEnum() static let two:TestFakeEnum = TestFakeEnum() static let three:TestFakeEnum = TestFakeEnum() } func funWithEnum(arg:TestFakeEnum) { } func testFakeEnum() { funWithEnum(.one) funWithEnum(.two) funWithEnum(.three) } This code will compile and run correctly. These are the basics of any fake enum. Even though the example above does not provide any particular benefit over built-in enums, it demonstrates the fundamental possibility. Getting generic Let's move on. First of all, to make our events have a specific data payload, we've added an EventProtocol (just keep in mind; it will be important later): //do not pay attention to Hashable, it's used for internal event routing mechanism. Not a subject here public protocol EventProtocol : Hashable { associatedtype Payload } To make our emitter even better we should not limit it to a restricted set of events, but rather allow the user to extend it. To achieve this, I've added a notion of EventGroup. It's not a particular type but rather an informal concept, so every event group should follow. Here is an example of an EventGroup: struct TestEventGroup<E : EventProtocol> { internal let event:E private init(_ event:E) { self.event = event } static var string:TestEventGroup<TestEventString> { return TestEventGroup<TestEventString>(.event) } static var int:TestEventGroup<TestEventInt> { return TestEventGroup<TestEventInt>(.event) } static var complex:TestEventGroup<TestEventComplex> { return TestEventGroup<TestEventComplex>(.event) } } Here is what TestEventString, TestEventInt and TestEventComplex are (real enums are used here only to have conformance with Hashable and to be a case singleton, so don't bother): //Notice, that every event here has its own Payload type enum TestEventString : EventProtocol { typealias Payload = String case event } enum TestEventInt : EventProtocol { typealias Payload = Int case event } enum TestEventComplex : EventProtocol { typealias Payload = (String, Int) case event } So to get a generic with .something notation, you have to create a generic class or struct having static members of the owner type with a particular generic param applied. Now, how can you use it? How can you discover what generic type is associated with a specific option? For that, I used the following generic function: // notice, that Payload is type-safety extracted from the associated event here with E.Payload func on<E : EventProtocol>(groupedEvent: TestEventGroup<E>, handler:E.Payload->Void) -> Listener { //implementation here is not of the subject of the article } Does this thing work? Yes. You can use the API exactly like it was outlined in the very beginning of this post: let eventEmitter = EventEmitterTest() eventEmitter.on(.string) { s in print("string:", s) } eventEmitter.on(.int) { i in print("int:", i) } eventEmitter.on(.complex) { (s, i) in print("complex: string:", s, "int:", i) } All the type inferences work. It's type safe. It's user friendly. And it is all thanks to a possibility to associate the type with the .something enum-like member. Conclusion Pity that this functionality is not available out of the box with built-in enums. For all the experiments to make this happen, I had to spend several hours. Maybe in one of the upcoming versions of Swift (3? 4.0?), Apple will let us get the type of associated values of an enum or something. But… okay. Those are dreams and out of the scope of this post. 
For now, we have what we have, and I'm really glad that we are able to have an associated type with an enum-like entity, even if it's not straightforward. The examples were taken from the Event project. The complete code can be found here, and it was tested with Swift 2.2 (Xcode 7.3), which was the latest at the time of writing. Thanks for reading. Use user-friendly frameworks only and enjoy your day!

About the Author

Daniel Leping is the CEO of Crossroad Labs. He has been working with Swift since the early beta releases and continues to do so at the Swift Express project. His main interests are reactive and functional programming with Swift, Swift-based web technologies, and bringing the best of modern techniques to the Swift world. He can be found on GitHub.

article-image-modern-natural-language-processing-part-1
Brian McMahan
28 Sep 2016
10 min read
Save for later

Modern Natural Language Processing – Part 1

Brian McMahan
28 Sep 2016
10 min read
In this three-part blog post series, I will be covering the basics of language modeling. The goal of language modeling is to capture the variability in observed linguistic data. In the simplest form, this is a matter of predicting the next word given all previous words. I am going to adopt this simple viewpoint to make explaining the basics of language modeling experiments clearer. In this series I am going to first introduce the basics of data munging—converting raw data into a processed form amenable to machine learning tasks. Then, I will cover the basics of prepping the data for a learning algorithm, including constructing a customized embedding matrix from the current state-of-the-art embeddings (and if you don't know what embeddings are, I will cover that too). I will be going over a useful way of structuring the various components (data manager, training model, driver, and utilities) that simultaneously allows for fast implementation and flexibility for future modifications to the experiment. And finally, I will cover an instance of a training model, showing how it connects to the infrastructure outlined here, and how it is then trained on the data, evaluated for performance, and used for tasks like sampling sentences.

At its core, though, predicting the answer at time t+1 given time t revolves around distinguishing two things: the underlying signal and the noise that makes the observed data deviate from that underlying signal. In language data, the underlying signal is intentional meaning, and the noise is the many different ways people can say what they mean and the many different contexts those meanings can be embedded in. Again, I am going to simplify everything and assume a basic hypothesis: the signal can be inferred by looking at the local history, and the noise can be captured as a probability distribution over potential vocabulary items. This is the central theme of the experiment I describe in this blog post series. Despite its simplicity, though, there are many bumps in the road you can hit. My intended goal with this post is to outline the basics of rolling your own infrastructure so you can deal with these bumps. Additionally, I will try to convey the various decision points and the things you should pay attention to along the way.

Opinionated Implementation Philosophy

Learning experiments in general are greatly facilitated by having infrastructure that allows you to experiment with different parameters, different data, different models, and any other variation that may arise. In the same vein, it's best to just get something working before you try to optimize the infrastructure for flexibility. The following division of labor is a nice middle ground I've found that allows for modularity and sanity while being fast to implement:

driver.py: Entry point for your experiment and any command line interface you'd like to have with it.
igor.py: Loads parameters from a config file, loads data from disk, serves data to the model, and acts as the interface for the model.
model.py: Implements the training model and expects igor to have everything.
utils.py: Storage for all messy functions.

I will discuss these implementations below.

Requisite software

The following packages are either highly recommended or required (in addition to their prerequisite packages): keras, spacy, sqlitedict, theano, yaml, tqdm, numpy

Preprocessing Raw Data

Finding data is not a difficult task, but converting it to the form you need to train models and run experiments can be tedious.
In this section, I will cover how to go from a raw text dataset to a form which is more amenable to language modeling. There are many datasets that have already done this for you and there are many datasets that are unbelievably messy. I will work with something a bit in the middle: a dataset of presidential speeches that is available as raw text.

### utils.py
import json
from keras.utils.data_utils import get_file

path = get_file('speech.json', origin='https://github.com/Vocativ-data/presidents_readability/raw/master/The%20original%20speeches.json')
with open(path) as fp:
    raw_data = json.load(fp)

print("This is one of the speeches by {}:".format(raw_data['objects'][0]['President']))
print(raw_data['objects'][0]['Text'])

This dataset is a JSON file that contains about 600 presidential speeches. To usefully process text data, we have to get it into bite-sized chunks so you can identify what things are. Our goal for preprocessing is to get the individual words so that we can map them to integers and use them in machine learning algorithms. I will be using the spacy library, a state-of-the-art NLP library for Python, mainly implemented in Cython. The following code will take one of the speeches and tokenize it for us. It does a lot of other things as well, but I won't be covering those in this post. Instead, I'll just be using the library for its tokenizing ability.

from spacy.en import English
nlp = English(parser=True)
### nlp is a callable object---it implements most of spacy's API
data1 = nlp(raw_data['objects'][0]['Text'])

The variable data1 stores the speech in a format that allows spacy to do a bunch of things. We just want the tokens, so let's pull them out:

data2 = map(list, data1.sents)

The code is a bit dense, but data1.sents is an iterator over the sentences in data1. Applying the list function over the iterator creates a list of the sentences, each with a list of words.

Splitting the data

We will also be splitting our data into train, test, and dev sets. The end goal is to have a model that generalizes. So, we train on our training data and evaluate on held-out data to see how well we generalize. However, there are hyperparameters, such as learning rate and model size, which affect how well the model does. If we were to evaluate on one set of held-out data, we would begin to select hyperparameters to fit that data, thus leaving us with no way of knowing how well our model does in practice. The test data is for this purpose. It's there to give you a measuring stick for how well your model would do. You should never make modeling choices informed by the performance on the test data.
### utils.py
def process_raw_data(raw_data):
    flat_raw = [datum['Text'] for datum in raw_data['objects']]
    nb_data = 10  ### we only use a small subset here instead of len(flat_raw)
    all_indices = np.random.choice(np.arange(nb_data), nb_data, replace=False)
    train_portion, dev_portion = 0.7, 0.2
    num_train, num_dev = int(train_portion*nb_data), int(dev_portion*nb_data)
    train_indices = all_indices[:num_train]
    dev_indices = all_indices[num_train:num_train+num_dev]
    test_indices = all_indices[num_train+num_dev:]

    nlp = English(parser=True)

    raw_train_data = [nlp(flat_raw[i]).sents for i in train_indices]
    raw_train_data = [[str(word).strip() for word in sentence] for speech in raw_train_data for sentence in speech]

    raw_dev_data = [nlp(flat_raw[i]).sents for i in dev_indices]
    raw_dev_data = [[str(word).strip() for word in sentence] for speech in raw_dev_data for sentence in speech]

    raw_test_data = [nlp(flat_raw[i]).sents for i in test_indices]
    raw_test_data = [[str(word).strip() for word in sentence] for speech in raw_test_data for sentence in speech]

    return (raw_train_data, raw_dev_data, raw_test_data), (train_indices, dev_indices, test_indices)

Vocabularies

In order for text data to interface with numeric algorithms, the tokens have to be converted to unique integers. There are many different ways of doing this, including using spacy, but we will be making our own using a Vocabulary class. Note that there is a mask token being added upon initialization. It is useful to reserve the 0 integer for a token that will never appear in your data. This allows machine learning algorithms to be clever about variable sized inputs (such as sentences).

### utils.py
import pickle
from collections import defaultdict

class Vocabulary(object):
    def __init__(self, frozen=False):
        self.word2id = defaultdict(lambda: len(self.word2id))
        self.id2word = {}
        self.frozen = False
        ### add mask
        self.mask_token = "<mask>"
        self.unk_token = "<unk>"
        self.mask_id = self.add(self.mask_token)
        self.unk_id = self.add(self.unk_token)

    @classmethod
    def from_data(cls, data, return_converted=False):
        this = cls()
        out = []
        for datum in data:
            added = list(this.add_many(datum))
            if return_converted:
                out.append(added)
        if return_converted:
            return this, out
        else:
            return this

    def convert(self, data):
        out = []
        for datum in data:
            out.append(list(self.add_many(datum)))
        return out

    def add_many(self, tokens):
        for token in tokens:
            yield self.add(token)

    def add(self, token):
        token = str(token).strip()
        if self.frozen and token not in self.word2id:
            token = self.unk_token
        _id = self.word2id[str(token).strip()]
        self.id2word[_id] = token
        return _id

    def __getitem__(self, k):
        return self.add(k)

    def __setitem__(self, *args):
        import warnings
        warnings.warn("not an available function")

    def keys(self):
        return self.word2id.keys()

    def items(self):
        return self.word2id.items()

    def freeze(self):
        self.add(self.unk_token)  # just in case
        self.frozen = True

    def __contains__(self, token):
        return token in self.word2id

    def __len__(self):
        return len(self.word2id)

    @classmethod
    def load(cls, filepath):
        new_vocab = cls()
        with open(filepath) as fp:
            in_data = pickle.load(fp)
        new_vocab.word2id.update(in_data['word2id'])
        new_vocab.id2word.update(in_data['id2word'])
        new_vocab.frozen = in_data['frozen']
        return new_vocab

    def save(self, filepath):
        with open(filepath, 'w') as fp:
            pickle.dump({'word2id': dict(self.word2id),
                         'id2word': self.id2word,
                         'frozen': self.frozen}, fp)

The benefit of making your own Vocabulary class is that you get fine-grained control over how it behaves and you intuitively understand how it's running.
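Here is a quick usage sketch (not part of the original post's code) showing how the class behaves, including what happens to unseen words once the vocabulary is frozen, which is exactly the property the next section relies on:

vocab = Vocabulary()
ids = list(vocab.add_many("the quick brown fox".split()))
print(ids)                               # [2, 3, 4, 5] -- ids 0 and 1 are reserved for <mask> and <unk>
print(vocab["the"])                      # 2; __getitem__ adds the token or returns its existing id
print((vocab.mask_id, vocab.unk_id))     # (0, 1)
print(("fox" in vocab, "dog" in vocab))  # (True, False)

vocab.freeze()
print(vocab.add("wolf"))                 # 1 -- unknown words now map to <unk> instead of growing the vocab
print(len(vocab))                        # 6: four words plus the two reserved tokens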
When making your vocabulary, it's vital that you don't use words that were in your dev and test sets but not in your training set. This is because you technically don't have any evidence for them, and using them would put you at a generalization disadvantage. In other words, you wouldn't be able to trust your model's performance as much. So, we will only make our vocabulary out of the training data, then freeze it, then use it to convert the rest of the data. The frozen implementation in the Vocabulary class is not as good as it could be, but it's the most illustrative.

### utils.py
def format_data(raw_train_data, raw_dev_data, raw_test_data):
    vocab, train_data = Vocabulary.from_data(raw_train_data, True)
    vocab.freeze()
    dev_data = vocab.convert(raw_dev_data)
    test_data = vocab.convert(raw_test_data)
    return (train_data, dev_data, test_data), vocab

Continue on in Part 2 to learn about igor, embeddings, serving data, and different sized sentences and masking.

About the author

Brian McMahan is in his final year of graduate school at Rutgers University, completing a PhD in computer science and an MS in cognitive psychology. He holds a BS in cognitive science from Minnesota State University, Mankato. At Rutgers, Brian investigates how natural language and computer vision can be brought closer together with the aim of developing interactive machines that can coordinate in the real world. His research uses machine learning models to derive flexible semantic representations of open-ended perceptual language.
article-image-build-universal-javascript-app-part-1
John Oerter
27 Sep 2016
8 min read
Save for later

Build a Universal JavaScript App, Part 1

John Oerter
27 Sep 2016
8 min read
In this two part post series, we will walk through how to write a universal (or isomorphic) JavaScript app. This first part will cover what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating the app, which are serving post data and adding React. In Part 2 of this series we walk through steps 3-6, which are client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. What is a Universal JavaScript app? To put it simply, a universal JavaScript app is an application that can render itself on the client and the server. It combines the features of traditional server-side MVC frameworks (Rails, ASP.NET MVC, and Spring MVC), where markup is generated on the server and sent to the client, with the features of SPA frameworks (Angular, Ember, Backbone, and so on), where the server is only responsible for the data and the client generates markup. Universal or Isomorphic? There has been some debate in the JavaScript community over the terms "universal" and "isomorphic" to describe apps that can run on the client and server. I personally prefer the term "universal," simply because it's a more familiar word and makes the concept easier to understand. If you're interested in this discussion, you can read the below articles: Isomorphic JavaScript: The Future of Web Apps by Spike Brehm popularizes the term "isomorphic". Universal JavaScript by Michael Jackson puts forth the term "universal" as a better alternative. Is "Isomorphic JavaScript" a good term? by Dr. Axel Rauschmayer says that maybe certain applications should be called isomorphic and others should be called universal. What are the advantages? Switching between one language on the server and JavaScript on the client can harm your productivity. JavaScript is a unique language that, for better or worse, behaves in a very different way from most server-side languages. Writing universal JavaScript apps allows you to simplify your workflow and immerse yourself in JavaScript. If you're writing a web application today, chances are that you're writing a lot of JavaScript anyway. Why not dive in? Node continues to improve with better performance and more features thanks to V8 and it's well run community, and npm is a fantastic package manager with thousands of quality packages available. There is tremendous brain power being devoted to JavaScript right now. Take advantage of it! On top of that, maintainability of a universal app is better because it allows more code reuse. How many times have you implemented the same validation logic in your server and front end code? Or rewritten utility functions? With some careful architecture and decoupling, you can write and test code once that will work on the server and client. Performance SPAs are great because they allow the user to navigate applications without waiting for full pages to be sent down from the server. The cost, however, is longer wait times for the application to be initialized on the first load because the browser needs to receive all the assets needed to run the full app up front. What if there are rarely visited areas in your app? Why should every client have to wait for the logic and assets needed for those areas? This was the problem Netflix solved using universal JavaScript. MVC apps have the inverse problem. Each page only has the markup, assets, and JavaScript needed for that page, but the trade-off is round trips to the server for every page. 
SEO Another disadvantage of SPAs is their weakness on SEO. Although web crawlers are getting better at understanding JavaScript, a site generated on the server will always be superior. With universal JavaScript, any public-facing page on your site can be easily requested and indexed by search engines. Building an Example Universal JavaScript App Now that we've gained some background on universal JavaScript apps, let's walk through building a very simple blog website as an example. Here are the tools we'll use: Express React React Router Babel Webpack I've chosen these tools because of their popularity and ease of accomplishing our task. I won't be covering how to use Redux or other Flux implementations because, while useful in a production application, they are not necessary for demoing how to create a universal app. To keep things simple, we will forgo a database and just store our data in a flat file. We'll also keep the Webpack shenanigans to a minimum and only do what is necessary to transpile and bundle our code. You can grab the code for this walkthrough at here, and follow along. There are branches for each step along the way. Be sure to run npm install for each step. Let's get started! Step 1: Serving Post Data git checkout serving-post-data && npm install We're going to start off slow, and simply set up the data we want to serve. Our posts are stored in the posts.js file, and we just have a simple Express server in server.js that takes requests at /api/post/{id}. Snippets of these files are below. // posts.js module.exports = [ ... { id: 2, title: 'Expert Node', slug: 'expert-node', content: 'Street art 8-bit photo booth, aesthetic kickstarter organic raw denim hoodie non kale chips pour-over occaecat. Banjo non ea, enim assumenda forage excepteur typewriter dolore ullamco. Pickled meggings dreamcatcher ugh, church-key brooklyn portland freegan normcore meditation tacos aute chicharrones skateboard polaroid. Delectus affogato assumenda heirloom sed, do squid aute voluptate sartorial. Roof party drinking vinegar franzen mixtape meditation asymmetrical. Yuccie flexitarian est accusamus, yr 3 wolf moon aliqua mumblecore waistcoat freegan shabby chic. Irure 90's commodo, letterpress nostrud echo park cray assumenda stumptown lumbersexual magna microdosing slow-carb dreamcatcher bicycle rights. Scenester sartorial duis, pop-up etsy sed man bun art party bicycle rights delectus fixie enim. Master cleanse esse exercitation, twee pariatur venmo eu sed ethical. Plaid freegan chambray, man braid aesthetic swag exercitation godard schlitz. Esse placeat VHS knausgaard fashion axe cred. In cray selvage, waistcoat 8-bit excepteur duis schlitz. Before they sold out bicycle rights fixie excepteur, drinking vinegar normcore laboris 90's cliche aliqua 8-bit hoodie post-ironic. Seitan tattooed thundercats, kinfolk consectetur etsy veniam tofu enim pour-over narwhal hammock plaid.' }, ... ] // server.js ... app.get('/api/post/:id?', (req, res) => { const id = req.params.id if (!id) { res.send(posts) } else { const post = posts.find(p => p.id == id); if (post) res.send(post) else res.status(404).send('Not Found') } }) ... You can start the server by running node server.js, and then request all posts by going to localhost:3000/api/post or a single post by id such as localhost:3000/api/post/0. Great! Let's move on. Step 2: Add React git checkout add-react && npm install Now that we have the data exposed via a simple web service, let's use React to render a list of posts on the page. 
Before we get there, however, we need to set up webpack to transpile and bundle our code. Below is our simple webpack.config.js to do this: // webpack.config.js var webpack = require('webpack') module.exports = { entry: './index.js', output: { path: 'public', filename: 'bundle.js' }, module: { loaders: [ { test: /.js$/, exclude: /node_modules/, loader: 'babel-loader?presets[]=es2015&presets[]=react' } ] } } All we're doing is bundling our code with index.js as an entry point and writing the bundle to a public folder that will be served by Express. Speaking of index.js, here it is: // index.js import React from 'react' import { render } from 'react-dom' import App from './components/app' render ( <App />, document.getElementById('app') ) And finally, we have App.js: // components/App.js import React from 'react' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: [] } } componentDidMount() { const request = new XMLHttpRequest() request.open('GET', allPostsUrl, true) request.setRequestHeader('Content-type', 'application/json'); request.onload = () => { if (request.status === 200) { this.setState({ posts: JSON.parse(request.response) }); } } request.send(); } render() { const posts = this.state.posts.map((post) => { return <li key={post.id}>{post.title}</li> }) return ( <div> <h3>Posts</h3> <ul> {posts} </ul> </div> ) } } export default App Once the App component is mounted, it sends a request for the posts, and renders them as a list. To see this step in action, build the webpack bundle first with npm run build:client. Then, you can run node server.js just like before. http://localhost:3000 will now display a list of our posts. Conclusion Now that React has been added, take a look at Part 2 where we cover client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. About the author John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at here.

article-image-connecting-arduino-web
Packt
27 Sep 2016
6 min read
Save for later

Connecting Arduino to the Web

Packt
27 Sep 2016
6 min read
In this article by Marco Schwartz, author of Internet of Things with Arduino Cookbook, we will focus on getting you started by connecting an Arduino board to the web. This article will really be the foundation of the rest of the article, so make sure to carefully follow the instructions so you are ready to complete the exciting projects we'll see in the rest of the article. (For more resources related to this topic, see here.) You will first learn how to set up the Arduino IDE development environment, and add Internet connectivity to your Arduino board. After that, we'll see how to connect a sensor and a relay to the Arduino board, for you to understand the basics of the Arduino platform. Then, we are actually going to connect an Arduino board to the web, and use it to grab the content from the web and to store data online. Note that all the projects in this article use the Arduino MKR1000 board. This is an Arduino board released in 2016 that has an on-board Wi-Fi connection. You can make all the projects in the article with other Arduino boards, but you might have to change parts of the code. Setting up the Arduino development environment In this first recipe of the article, we are going to see how to completely set up the Arduino IDE development environment, so that you can later use it to program your Arduino board and build Internet of Things projects. How to do it… The first thing you need to do is to download the latest version of the Arduino IDE from the following address: https://www.arduino.cc/en/Main/Software This is what you should see, and you should be able to select your operating system: You can now install the Arduino IDE, and open it on your computer. The Arduino IDE will be used through the whole article for several tasks. We will use it to write down all the code, but also to configure the Arduino boards and to read debug information back from those boards using the Arduino IDE Serial monitor. What we need to install now is the board definition for the MKR1000 board that we are going to use in this article. To do that, open the Arduino boards manager by going to Tools | Boards | Boards Manager. In there, search for SAMD boards: To install the board definition, just click on the little Install button next to the board definition. You should now be able to select the Arduino/GenuinoMKR1000 board inside the Arduino IDE: You are now completely set to develop Arduino projects using the Arduino IDE and the MKR1000 board. You can, for example, try to open an example sketch inside the IDE: How it works... The Arduino IDE is the best tool to program a wide range of boards, including the MKR1000 board that we are going to use in this article. We will see that it is a great tool to develop Internet of Things projects with Arduino. As we saw in this recipe, the board manager makes it really easy to use new boards inside the IDE. See also These are really the basics of the Arduino framework that we are going to use in the whole article to develop IoT projects. Options for Internet connectivity with Arduino Most of the boards made by Arduino don't come with Internet connectivity, which is something that we really need to build Internet of Things projects with Arduino. We are now going to review all the options that are available to us with the Arduino platform, and see which one is the best to build IoT projects. How to do it… The first option, that has been available since the advent of the Arduino platform, is to use a shield. 
A shield is basically an extension board that can be placed on top of the Arduino board. There are many shields available for Arduino. Inside the official collection of shields, you will find motor shields, prototyping shields, audio shields, and so on. Some shields will add Internet connectivity to the Arduino boards, for example the Ethernet shield or the Wi-Fi shield. This is a picture of the Ethernet shield: The other option is to use an external component, for example a Wi-Fi chip mounted on a breakout board, and then connect this shield to Arduino. There are many Wi-Fi chips available on the market. For example, Texas Instruments has a chip called the CC3000 that is really easy to connect to Arduino. This is a picture of a breakout board for the CC3000 Wi-Fi chip: Finally, there is the possibility of using one of the few Arduino boards that has an onboard Wi-Fi chip or Ethernet connectivity. The first board of this type introduced by Arduino was the Arduino Yun board. It is a really powerful board, with an onboard Linux machine. However, it is also a bit complex to use compared to other Arduino boards. Then, Arduino introduced the MKR1000 board, which is a board that integrates a powerful ARM Cortex M0+ process and a Wi-Fi chip on the same board, all in the small form factor. Here is a picture of this board: What to choose? All the solutions above would work to build powerful IoT projects using Arduino. However, as we want to easily build those projects and possibly integrate them into projects that are battery-powered, I chose to use the MKR1000 board for all the projects in this article. This board is really simple to use, powerful, and doesn't required any connections to hook it up with a Wi-Fi chip. Therefore, I believe this is the perfect board for IoT projects with Arduino. There's more... Of course, there are other options to connect Arduino boards to the Web. One option that's becoming more and more popular is to use 3G or LTE to connect your Arduino projects to the Web, again using either shields or breakout boards. This solution has the advantage of not requiring an active Internet connection like a Wi-Fi router, and can be used anywhere, for example outdoors. See also Now we have chosen a board that we will use in our IoT projects with Arduino, you can move on to the next recipe to actually learn how to use it. Resources for Article: Further resources on this subject: Building a Cloud Spy Camera and Creating a GPS Tracker [article] Internet Connected Smart Water Meter [article] Getting Started with Arduino [article]

article-image-getting-started-beaglebone
Packt
27 Sep 2016
36 min read
Save for later

Getting Started with BeagleBone

Packt
27 Sep 2016
36 min read
In this article by Jayakarthigeyan Prabakar, authors of BeagleBone By Example, we will discuss steps to get started with your BeagleBone board to build real-time physical computing systems using your BeagleBone board and Python programming language. This article will teach you how to set up your BeagleBone board for the first time and write your first few python codes on it. (For more resources related to this topic, see here.) By end of this article, you would have learned the basics of interfacing electronics to BeagleBone boards and coding it using Python which will allow you to build almost anything from a home automation system to a robot through examples given in this article. Firstly, in this article, you will learn how to set up your BeagleBone board for the first time with a new operating system, followed by usage of some basic Linux Shell commands that will help you out while we work on the Shell Terminal to write and execute python codes and do much more like installing different libraries and software on your BeagleBone board. Once you get familiar with usage of the Linux terminal, you will write your first code on python that will run on your BeagleBone board. Most of the time, we will using the freely available open-source codes and libraries available on the Internet to write programs on top of it and using it to make the program work for our requirement instead of entirely writing a code from scratch to build our embedded systems using BeagleBone board. The contents of this article are divided into: Prerequisites About the single board computer - BeagleBone board Know your BeagleBone board Setting up your BeagleBone board Working on Linux Shell Coding on Python in BeagleBone Black Prerequisites This topic will cover what parts you need to get started with BeagleBone Black. You can buy them online or pick them up from any electronics store that is available in your locality. The following is the list of materials needed to get started: 1x BeagleBone Black 1x miniUSB type B to type A cable 1x microSD Card (4 GB or More) 1x microSD Card Reader 1x 5V DC, 2A Power Supply 1x Ethernet Cable There are different variants of BeagleBone boards like BeagleBone, BeagleBone Black, BeagleBone Green and some more old variants. This articlewill mostly have the BeagleBone Black shown in the pictures. Note that BeagleBone Black can replace any of the other BeagleBone boards such as the BeagleBone or BeagleBone Green for most of the projects. These boards have their own extra features so to say. For example, the BeagleBone Black has more RAM, it has almost double the size of RAM available in BeagleBone and an in-built eMMC to store operating system instead of booting it up only through operating system installed on microSD card in BeagleBone White. Keeping in mind that this articleshould be able to guide people with most of the BeagleBone board variants, the tutorials in this articlewill be based on operating system booted from microSD card inserted on the BeagleBone board. We will discuss about this in detail in the Setting up your BeagleBone board and installing operating system's topics of this article. BeagleBone Black – a single board computer This topic will give you brief information about single board computers to make you understand what they are and where BeagleBone boards fit inside this category. Have you ever wondered how your smartphones, smart TVs, and set-top boxes work? 
All these devices run custom firmware developed for specific applications based on the different Linux and Unix kernels. When you hear the word Linux and if you are familiar with Linux, you will get in your mind that it's nothing but an operating system, just like Windows or Mac OS X that runs on desktops and server computers. But in the recent years the Linux kernel is being used in most of the embedded systems including consumer electronics such as your Smartphones, smart TVs, set-top boxes, and much more. Most people know android and iOS as an operating system on their smart phones. But only a few know that, both these operating systems are based on Linux and Unix kernels. Did you ever question how they would develop such devices? There should be a development board right? What are they? This is where Linux Development boards like our BeagleBone boards are used. By adding peripherals such as touch screens, GSM modules, microphones, and speakers to these single board computers and writing the software that is the operating system with graphical user interface to make them interact with the physical world, we have so many smart devices now that people use every day. Nowadays you have proximity sensors, accelerometers, gyroscopes, cameras, IR blasters, and much more on your smartphones. These sensors and transmitters are connected to the CPU on your phone through the Input Output ports on the CPU, and there is a small piece of software that is running to communicate with these electronics when the whole operating system is running in the Smartphone to get the readings from these sensors in real-time. Like the autorotation of screen on the latest smartphones. There is a small piece of software that is reading the data from accelerometer and gyroscope sensors on the phone and based on the orientation of the phone it turns the graphical display. So, all these Linux Development boards are tools and base boards using which you can build awesome real world smart devices or we can call them physical computing systems as they interact with the physical world and respond with an output. Getting to know your board – BeagleBone Black BeagleBone Black can be described as low cost single board computer that can be plugged into a computer monitor or TV via a HDMI cable for output and uses standard keyboard and mouse for input. It's capable of doing everything you'd expect a desktop computer to do, like playing videos, word processing, browsing the Internet, and so on. You can even setup a web server on your BeagleBone Black just like you would do if you want to set up a webserver on a Linux or Windows PC. But, differing from your desktop computer, the BeagleBone boards has the ability to interact with the physical world using the GPIO pins available on the board, and has been used in various physical computing applications. Starting from Internet of Things, Home Automation projects, to Robotics, or tweeting shark intrusion systems. The BeagleBone boards are being used by hardware makers around the world to build awesome physical computing systems which are turning into commercial products also in the market. OpenROV, an underwater robot being one good example of what someone can build using a BeagleBone Black that can turn into a successful commercial product. Hardware specification of BeagleBone Black A picture is worth a thousand words. The following picture describes about the hardware specifications of the BeagleBone Black. 
But you will get some more details about every part of the board as you read the content in the following picture. If you are familiar with the basic setup of a computer. You will know that it has a CPU with RAM and Hard Disk. To the CPU you can connect your Keyboard, Mouse, and Monitor which are powered up using a power system. The same setup is here in BeagleBone Black also. There is a 1GHz Processor with 512MB of DDR3 RAM and 4GB on board eMMC storage, which replaces the Hard Disk to store the operating system. Just in case you want more storage to boot up using a different operating system, you can use an external microSD card that can have the operating system that you can insert into the microSD card slot for extra storage. As in every computer, the board consists of a power button to turn on and turn off the board and a reset button to reset the board. In addition, there is a boot button which is used to boot the board when the operating system is loaded on the microSD card instead of the eMMC. We will be learning about usage of this button in detail in the installing operating systems topic of this article. There is a type A USB Host port to which you can connect peripherals such as USB Keyboard, USB Mouse, USB Camera, and much more, provided that the Linux drivers are available for the peripherals you connect to the BeagleBone Black. It is to be noted that the BeagleBone Black has only one USB Host Port, so you need to get an USB Hub to get multiple USB ports for connecting more number of USB devices at a time. I would recommend using a wireless Keyboard and Mouse to eliminate an extra USB Hub when you connect your BeagleBone Black to monitor using the HDMI port available. The microHDMI port available on the BeagleBone Black gives the board the ability to give output to HDMI monitors and HDMI TVs just like any computer will give. You can power up the BeagleBone Black using the DC Barrel jack available on the left hand side corner of the board using a 5V DC, 2A adapter. There is an option to power the board using USB, although it is not recommended due to the current limit on USB ports. There are 4 LEDs on board to indicate the status of the board and help us for identifications to boot up the BeagleBone Black from microSD card. The LEDs are linked with the GPIO pins on the BeagleBone Black which can be used whenever needed. You can connect the BeagleBone Black to the LAN or Internet using the Ethernet port available on the board using an Ethernet cable. You can even use a USB Wi-Fi module to give Internet access to your BeagleBone Black. The expansion headers which are in general called the General Purpose Input Output (GPIO) pins include 65 digital pins. These pins can be used as digital input or output pins to which you can connect switches, LEDs and many more digital input output components, 7 analog inputs to which you can connect analog sensors like a potentiometer or an analog temperature sensor, 4 Serial Ports to which you can connect Serial Bluetooth or Xbee Modules for wireless communication or anything else, 2 SPI and 2 I2C Ports to connect different modules such as sensors or any other modules using SPI or I2C communication. We also have the serial debugging port to view the low-level firmware pre-boot and post-shutdown/reboot messages via a serial monitor using an external serial to USB converter while the system is loading up and running. After booting up the operating system, this also acts as a fully interactive Linux console. 
Setting up your BeagleBone board Your first step to get started with BeagleBone Boards with your hands on will be to set it up and test it as suggested by the BeagleBone Community with the Debian distribution of Linux running on BeagleBone Black that comes preloaded on the eMMC on board. This section will walk you through that process followed by installing different operating system on your BeagleBone board and log in into it. And then get into start working with files and executing Linux Shell commands via SSH. Connect your BeagleBone Black using the USB cable to your Laptop or PC. This is the simplest method to get your BeagleBone Black up and running. Once you connect your BeagleBone Black, it will start to boot using the operating system on the eMMC storage. To log in into the operating system and start working on it, the BeagleBone Black has to connect to a network and the drivers that are provided by the BeagleBoard manufacturers allow us to create a local network between your BeagleBone Black and your computer when you connect it via the USB cable. For this, you need to download and install the device drivers provided by BeagleBone board makers on your PC as explained in step 2. Download and install device drivers. Goto http://beagleboard.org/getting-started Click and download the driver package based on your operating system. Mine is Windows (64-bit), so I am going to download that Once the installation is complete, click on Finish. It is shown in the following screenshot: Once the installation is done, restart your PC. Make sure that the Wi-Fi on your laptop is off and also there is no Ethernet connected to your Laptop. Because now the BeagleBone Black device drivers will try to create a LAN connection between you laptop and BeagleBone Black so that you can access the webserver running by default on the BeagleBone Black to test it's all good, up, and running. Once you reboot your PC, get to step 3. Connect to the Web Server Running on BeagleBone Black. Open your favorite web browser and enter the IP address 192.168.7.2 on the URL bar, which is the default static IP assigned to BeagleBone Black.This should open up the webpage as shown in the following screenshot: If you get a green tick mark with the message your board is connected. You can make sure that you got the previous steps correct and you have successfully connected to your board. If you don't get this message, try removing the USB cable connected to the BeagleBone Black, reconnect it and check again. If you still don't get it. Then check whether you did the first two steps correctly. Play with on board LEDs via the web server. If you Scroll down on the web page to which we got connected, you will find the section as shown in the following screenshot: This is a sample setup made by BeagleBone makers as the first time interaction interface to make you understand what is possible using BeagleBone Black. In this section of the webpage, you can run a small script. When you click on Run, the On board status LEDs that are flashing depending on the status of the operating system will stop its function and start working based on the script that you see on the page. The code is running based on a JavaScript library built by BeagleBone makers called the BoneScript. We will not look into this in detail as we will be concentrating more on writing our own programs using python to work with GPIOs on the board. 
To help you understand, though, here is a simple explanation of what is in the script and what happens when you click on the Run button on the web page. The pinMode function defines the on-board LED pins as outputs, and the digitalWrite function sets the state of each output to either HIGH or LOW. The setTimeout function restores the LEDs to their normal function after the set timeout, that is, the program stops running after the time set in setTimeout. Say I modify the code to what is shown in the following screenshot: you can see that I have changed the states of two LEDs to LOW while the other two are HIGH, and the timeout is set to 10,000 milliseconds. So when you click on the Run button, the LEDs switch to these states, stay that way for 10 seconds, and then return to their normal status-indication routine, that is, blinking. You can play around with different combinations of HIGH and LOW states and setTimeout values to see and understand what is happening. You can see the LED output state of the BeagleBone Black for the program we just executed in the following screenshot: the two LEDs in the middle are in the LOW state. They stay like this for 10 seconds when you run the script and then return to their usual routine. Try different timeout values and LED states in the script on the web page and click on Run to see how it behaves. In the same spirit, we will be writing our own Python programs and setting up servers that use the GPIOs available on the BeagleBone Black to build the different applications in each project of this article.

Installing operating systems

We can make the BeagleBone Black boot and run different operating systems, just like any computer. Linux, which is free and open source, is what is mostly used on these boards, but note that specific distributions of Linux, Android, and Windows CE have also been made available for these boards, which you can try out. Stable versions of these operating systems are available at http://beagleboard.org/latest-images. By default, the BeagleBone Black comes preloaded with a Debian distribution of Linux on its eMMC. However, if you want, you can flash this eMMC just as you would the hard drive of your computer and install a different operating system on it. As mentioned in the note at the beginning of this article, all the tutorials here should be useful to people who own a BeagleBone as well as a BeagleBone Black, so we will choose the Debian package recommended by the BeagleBoard.org Foundation and boot the board every time from the operating system on the microSD card. Perform the following steps to prepare the microSD card and boot the BeagleBone from it:

1. Go to http://beagleboard.org/latest-images.
2. Download the latest Debian image. The following screenshot highlights the latest Debian image available for flashing onto a microSD card.
3. Extract the image file from the RAR file that was downloaded. You might have to install WinRAR or another .rar extraction tool if one is not already available on your computer.
4. Install the Win32 Disk Imager software. We need this software to write the image file to a microSD card.
You can go to Google or any other search engine, type win32 disk imager as the keyword, and search to get the web link to download this software, as shown in the following screenshot. The link where you can find it is http://sourceforge.net/projects/win32diskimager/, but since this changes fairly often, I suggest searching for it with the keyword on any search engine. Once you have downloaded and installed the software, you should see the window shown in the following screenshot when you open Win32 Disk Imager. Now that you are all set with the software you will use to flash the operating system image we downloaded, let's move on to the next step and use Win32 Disk Imager to flash the microSD card.

5. Flash the microSD card. Insert the microSD card into a card reader and plug it into your computer. It might take some time for the card reader to show your microSD card; once it shows up, you should be able to select the USB drive in the Win32 Disk Imager software, as shown in the following screenshot. Now click on the icon highlighted in the following screenshot to open the file explorer and select the image file that we extracted in step 3: go to the folder where you extracted the latest Debian image file and select it. You can now write the image file to the microSD card by clicking on the Write button in Win32 Disk Imager. If you get a prompt as shown in the following screenshot, click on Yes and continue. Once you click on Yes, the flashing process starts and the image file is written onto the microSD card; the following screenshot shows the flashing process in progress. Once the flashing is complete, you will get a message as shown in the following screenshot. You can now click on OK, exit the Win32 Disk Imager software, and safely remove the microSD card from your computer.

You have now successfully prepared your microSD card with the latest Debian operating system available for the BeagleBone Black. The process is the same for all the other operating systems available for BeagleBone boards, so you can try out different operating systems such as Angstrom Linux, Android, or Windows CE, among others, once you are familiar with your BeagleBone board by the end of this article. Mac users can refer to either https://learn.adafruit.com/ssh-to-beaglebone-black-over-usb/installing-drivers-mac or https://learn.adafruit.com/beaglebone-black-installing-operating-systems/mac-os-x.

Booting your BeagleBone board from an SD card

Now that you have the operating system on your microSD card, let's boot your BeagleBone board from that card and see how to log in and access the filesystem via the Linux shell. You will need your computer connected to your router, either via Ethernet or Wi-Fi, and an Ethernet cable to connect between your router and the BeagleBone board. Last but most important, you will need an external power supply to power your BeagleBone board, because the power supplied over USB will not be enough to run the board when it is booted from a microSD card.

1. Insert the microSD card into the BeagleBone board. Insert the microSD card that you have prepared into the microSD card slot on your BeagleBone board.
2. Connect your BeagleBone to your LAN. Connect your BeagleBone board to your Internet router using an Ethernet cable. Make sure that your BeagleBone board and your computer are connected to the same router in order to follow the next steps.
3. Connect an external power supply to your BeagleBone board.
4. Boot your BeagleBone board from the microSD card. On the BeagleBone Black and BeagleBone Green, there is a boot button that you need to hold down while turning on the board so that it boots from the microSD card instead of the default mode, in which it boots from the on-board eMMC storage that holds the operating system. The BeagleBone White doesn't have this button; it boots from the microSD card anyway, as it has no on-board eMMC storage. Depending on the board you have, you can decide whether to boot from the microSD card or the eMMC. Say you have a BeagleBone Black like the one shown in the preceding picture: hold down the user boot button highlighted in the image and turn on the power supply. When you turn on the board while holding the button down, the four on-board LEDs light up and stay HIGH, as shown in the following picture, for one or two seconds, and then they start to blink randomly. Once they start blinking, you can release the button. Your BeagleBone board has now started booting from the microSD card, so our next step is to log in to the system and start working on it. The next topic walks you through how to do this.

Logging into the board via SSH over Ethernet

If you are familiar with Linux, you may have guessed what this section is about. For those who are not daily Linux users or who have never heard the term, Secure Shell (SSH) is a network protocol that allows network services and remote logins to operate securely over an unsecured network. In basic terms, it is a protocol through which you can log in to a computer, access its filesystem, and work on that system using specific commands to create and manipulate files. In the steps ahead, you will work with some Linux commands that will help you understand this way of logging in to a system and working on it.

1. Set up the SSH software. To log in to your BeagleBone board from a Windows PC, you need to install an SSH terminal program for Windows. My favorite is PuTTY, so I will be using it in the steps ahead; if you are new to SSH, I suggest you get PuTTY as well. The PuTTY interface is shown in the following screenshot. You need to know the IP address or the hostname of your BeagleBone Black to log in to it via SSH. The default hostname is beaglebone, but on some routers, depending on their security settings, logging in by hostname doesn't work. So, try logging in with the hostname first. If you are not able to log in, follow step 2; if you successfully connect and get the login prompt using the hostname, you can skip step 2 and go to step 3. If you get an error as shown in the following screenshot, perform step 2.

2. Find the IP address assigned to the BeagleBone board. Whenever you connect a device to your router, say a computer, printer, or mobile phone, the router assigns it a unique IP address. In the same way, the router will have assigned an IP address to your BeagleBone board. We can find this on the router's configuration page, from any browser on a computer connected to that router. In most cases, the router can be accessed by entering the IP 192.168.1.1, though in rare cases some router manufacturers use a different IP.
If you are not able to access your router using the IP 192.168.1.1, refer to your router's manual to find out how to reach this page. The images shown in this section are only meant to give you an idea of how to log in to your router and get the IP address assigned to your BeagleBone board; the configuration pages and the way devices are listed will look different depending on the router you own. Enter the address 192.168.1.1 in your browser. When it asks for a username and password, enter admin as the username and password as the password; these are the default credentials used by most routers. If this fails, check your router's user manual. Assuming you have logged in to your router's configuration page successfully, you will see the screen shown in the following screenshot. If you click on the highlighted part, Attached Devices, you will see the list of devices with their IP addresses, as shown in the following screenshot, and there you can find the IP address assigned to your BeagleBone board. Note it down; in the preceding screenshot it is 192.168.1.14 for my BeagleBone board. We will use this IP address to connect to the BeagleBone board via SSH in the next step.

3. Connect via SSH using the IP address. Once you click on Open, you might get a security prompt as shown in the following screenshot; click on Yes and continue. You will then get the login prompt on the terminal screen, as shown in the following screenshot. If you got this far successfully, it is time to log in to your BeagleBone board and start working on it via the Linux shell.

4. Log in to the BeagleBone board. When you get the login prompt shown in the preceding screenshot, enter the default username, which is debian, and the default password, which is temppwd. You are now logged in to the Linux shell of your BeagleBone board as the user debian. Now that you are in, you can start working on the board using Linux shell commands, just as you would on any computer running Linux. The next section walks you through some basic Linux shell commands that will come in handy on any Linux system.

Working on the Linux shell

Simply put, the shell is a program that takes commands from the keyboard and gives them to the operating system to perform. Since we will be using the BeagleBone board as a development board to build electronics projects, plugging it into a monitor, keyboard, and mouse every time we want to work on it, as we would with a desktop computer, is usually unnecessary and ties up resources we don't need. So we will use the shell's command-line interface to work on the BeagleBone boards. If you want to learn more about Linux command-line interfaces, I suggest you visit http://linuxcommand.org/. Now let's try some basic shell commands and do something on your BeagleBone board. You can see the kernel version using the command uname -r: just type the command and press Enter on your keyboard, and you will see the output as shown here. Next, let's check the date on your BeagleBone board with the date command. This is how the shell executes your commands and how you can work on your BeagleBone boards through it; getting the kernel version and the date was just a quick test.
Now let's move ahead and start working with the filesystem.

ls: This stands for the list command. It lists the names of the folders and files in the current working directory.

pwd: This stands for the print working directory command. It prints the directory you are currently working in.

mkdir: This stands for the make directory command. It creates a directory, in other words a folder, whose name you give after the command. Say I want to create a folder named WorkSpace; I would enter the command mkdir WorkSpace. When you execute this command, it creates a folder named WorkSpace inside the current working directory. To check that the directory was created, run the ls command again and you will see that WorkSpace is now listed. To change the working directory and go inside the WorkSpace directory, you can use the next command.

cd: This stands for the change directory command. It switches you between directories, depending on the path you provide with it. To switch into the WorkSpace directory you just created, type the command cd WorkSpace. Note that every command you type executes in your current working directory, so running cd WorkSpace here is equivalent to cd /home/debian/WorkSpace, because your current working directory is /home/debian. You are now inside the WorkSpace folder, which is empty at the moment, so if you type the ls command now, the shell simply moves to the next line without printing anything. If you execute the pwd command, you will see that your current working directory has changed.

cat: This is the cat command, one of the most basic commands, used to read, write, and append data to files in the shell. To create a text file and add some content to it, you just type cat > filename.txt. Say I want to create a sample.txt file; I would type the command cat > sample.txt. Once you type it, the cursor waits for the text you want to put inside the file. Type whatever text you want, and when you are done press Ctrl + D; the file is saved and you are returned to the command-line interface. Say I typed This is a test and then pressed Ctrl + D; the shell will look as shown next. If you now type the ls command, you can see the text file inside the WorkSpace directory. If you want to read the contents of sample.txt, you can again use the cat command, as cat sample.txt; alternatively, you can use the more command, which is what we will mostly be using. Now that we have seen how to create a file, let's see how to delete what we created.

rm: This stands for the remove command. It lets you delete any file by typing the filename, or the filename along with its path, after the command. To delete the sample.txt file we created, the command can be rm sample.txt, which is equivalent to rm /home/debian/WorkSpace/sample.txt, since your current working directory is /home/debian/WorkSpace. After you execute this command, if you list the contents of the directory, you will notice that the file has been deleted and the directory is now empty.
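As a quick preview of the Python work coming up in the next section, the same file operations can also be driven from Python using the standard os module. The following is only an illustrative sketch, reusing the example folder and file names from above; it is not part of the walkthrough itself:

import os

print(os.getcwd())                    # like pwd: show the current working directory
print(os.listdir("."))                # like ls: list the files and folders here

os.mkdir("WorkSpace")                 # like mkdir WorkSpace (fails if it already exists)
os.chdir("WorkSpace")                 # like cd WorkSpace

with open("sample.txt", "w") as f:    # like cat > sample.txt
    f.write("This is a test\n")

with open("sample.txt") as f:         # like cat sample.txt or more sample.txt
    print(f.read())

os.remove("sample.txt")               # like rm sample.txt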
In this way, you can use shell commands over SSH, via Ethernet or Wi-Fi, to work on your BeagleBone board. Now that you have a clear idea of, and some hands-on experience with, the Linux shell, let's start working with Python: we will write a sample program in a text editor on Linux and test it in the next and last topic of this article.

Writing your own Python program on the BeagleBone board

In this section, we will write our first few Python programs on the BeagleBone board; they will take an input, make a decision based on that input, and print an output, depending on the code we write. There are three sample programs in this topic, which together cover some fundamentals of any programming language: defining variables, using arithmetic operators, taking input and printing output, loops, and decision-making logic. Before we write and execute a Python program, let's get into Python's interactive shell interface from the Linux shell and try some basic things, such as creating variables and performing math operations on them. To open the Python shell interface, you just have to type python on the Linux shell, as you did for any Linux shell command in the previous section. Once you type python and hit Enter, you should see the terminal as shown in the following screenshot. You are now in Python's interactive shell interface, where every line you type is code that is written and executed in the same step. To learn more about this, visit https://www.python.org/shell/, or, to get started with the Python programming language, you can get our Python by Example article in our publication.

Let's execute a series of statements in Python's interactive shell to see it working. First, create a variable A and assign the value 20 to it. Now print A to check what value it holds; you can see that it prints out the value we stored in it. Next, create another variable named B and store the value 30 in it. Let's add these two variables and store the result in another variable named C, then print C, where you can see the result of A+B, that is, 50. That is the most basic calculation we can perform on a programming interface: we created two variables with different values, added them, and printed the result. Now let's go a little further, store a string of characters in a variable, and print it. Wasn't that simple? You can play around with Python's interactive shell interface like this to learn coding; a short sketch of the whole session is shown below for reference.

Any programmer, however, would rather write the code once and execute the whole program with a single command, so let's see how that can be done. To get out of Python's interactive shell and back to the current working directory on the Linux shell, just hold the Ctrl key and press D, that is, Ctrl + D. You will be back on the Linux shell interface, as shown next. Now let's write a program that performs the same actions we tried in Python's interactive shell: store two values in different variables and print the result of adding them. Let's add some spice to it by performing several arithmetic operations on the variables we create and printing the results of both addition and subtraction. You will need a text editor to write programs and save them; you could do it with the cat command as well.
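For reference, the interactive session described above looks roughly like the following transcript. The exact prompts and banner depend on the Python version on your image (the python command typically starts Python 2.7 on the stock Debian image), and the string variable name and its contents here are just examples:

debian@beaglebone:~$ python
>>> A = 20
>>> print(A)
20
>>> B = 30
>>> C = A + B
>>> print(C)
50
>>> message = "Hello BeagleBone"
>>> print(message)
Hello BeagleBone
>>>    # press Ctrl + D here to leave the interactive shell
debian@beaglebone:~$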
But later, when your programs need indentation and more editing, working with the basic cat command becomes difficult. So let's start using nano, the text editor available on Linux and one of my favorites. If you prefer and are familiar with another Linux text editor, such as vi, feel free to follow the next steps with that instead. To create a Python file and start writing code in it with nano, use the nano command followed by the filename with the .py extension. Let's create a file named ArithmeticOperations.py. Once you type this command, the text editor opens up; here you can type your code and then save it with the keyboard shortcut Ctrl + X. Go ahead and write the code shown in the following screenshot and save the file using Ctrl + X, then type Y when it prompts you to save the modified file. In the next step, before the file is written, you can also change the filename; since we have only just created the file, we don't need to, but if in future you want to save a modified copy under a different name, this is where you can change it. For now, just hit Enter, which takes you back to the Linux shell, and the file ArithmeticOperations.py is created in the current working directory, as you can confirm by typing the ls command. You can also view the contents of the file with the more command that we learned in the previous section. Now let's execute the Python script and see the output. To do this, just type the command python followed by the name of the program file we created, that is, ArithmeticOperations.py. Once you run the code you wrote, you will see the output with the results, as shown earlier.

Now that you have written, executed, and tested your first Python code on the BeagleBone board, let's write another one, shown in the following screenshot, which asks you to enter an input and prints whatever text you type on the next line, running continuously. Let's save this code as InputPrinter.py. In this code, we use a while loop so that the program runs continuously until you break it with Ctrl + D, at which point it stops with an error and returns to the Linux shell.

Now let's try the third and last program of this section and article. When we run this code, the program asks the user to type their name as input; if the name matches the specific name we compare against, it prints a hello message, and if a different name is given, it prints go away. Let's call this code Say_Hello_To_Boss.py. Instead of my name, Jayakarthigeyan, you can use your own name or any string of characters; the output decision varies depending on what the input is compared against. When you execute the code, the output will look as shown in the following screenshot. As with the previous program, you can hit Ctrl + D to stop it and get back to the Linux shell. Minimal sketches of all three programs in this section are collected below for reference.

In this way, you can work with the Python programming language to create programs that run on BeagleBone boards the way you want. Since we have come to the end of this article, let's give our BeagleBone board a break and power it off using the command sudo poweroff, which shuts down the operating system.
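Here are the minimal sketches of the three programs described in this section. Since the full listings appear in the screenshots, treat these as reconstructions from the descriptions above: the exact prompts, messages, and variable names are assumptions, and they are written for the Python 2 interpreter that the python command typically starts on the stock Debian image.

# ArithmeticOperations.py -- store two values, then print their sum and difference
a = 20
b = 30
print("Addition: " + str(a + b))
print("Subtraction: " + str(a - b))

# InputPrinter.py -- echo whatever the user types, looping until Ctrl + D stops it
while True:
    text = raw_input("Enter some text: ")   # Ctrl + D raises EOFError and ends the program
    print(text)

# Say_Hello_To_Boss.py -- greet one specific name, turn everyone else away
while True:
    name = raw_input("Enter your name: ")
    if name == "Jayakarthigeyan":           # replace with your own name if you like
        print("Hello, boss!")
    else:
        print("Go away")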
After you execute the sudo poweroff command, if you get the error message shown in the following screenshot, it simply means the BeagleBone board has powered off, and you can now turn off the power supply connected to it.

Summary

So here we are at the end of this article. You have learned how to boot your BeagleBone board from a different operating system on a microSD card, log in to it, and start coding in Python to run a routine and make decisions. As an extra exercise, you can take this further by connecting your BeagleBone board to an HDMI monitor using a microHDMI cable, attaching a USB keyboard and mouse to the board's USB host port, powering the monitor and the board from an external power supply, and booting from the microSD card; you should then see a GUI and be able to use the BeagleBone board like a normal Linux computer, accessing and managing files and using a shell terminal within the GUI as well. If you own a BeagleBone Black or BeagleBone Green, you can also try flashing the on-board eMMC with the latest Debian operating system and repeating what we did with the operating system booted from the microSD card.

Resources for Article:

Further resources on this subject:
- Learning BeagleBone [article]
- Learning BeagleBone Python Programming [article]
- Protecting GPG Keys in BeagleBone [article]

Deep Learning and Image generation: Get Started with Generative Adversarial Networks
Mohammad Pezeshki
27 Sep 2016
5 min read
In machine learning, a generative model is one that captures the observable data distribution. The objective of deep neural generative models is to disentangle different factors of variation in data and be able to generate new or similar-looking samples of the data. For example, an ideal generative model on face images disentangles all the different factors of variation, such as illumination, pose, gender, skin color, and so on, and is also able to generate a new face by combining those factors in a very non-linear way. Figure 1 shows a trained generative model that has learned different factors, including pose and the degree of smiling. On the x-axis, as we go to the right, the pose changes, and on the y-axis, as we move upwards, smiles turn to frowns. Usually these factors are orthogonal to one another, meaning that changing one while keeping the others fixed leads to a single change in data space; for example, in the first row of Figure 1, only the pose changes, with no change in the degree of smiling. The figure is adapted from here.

Based on the assumption that these underlying factors of variation have a very simple distribution (unlike the data itself), to generate a new face we can simply sample a random number from the assumed simple distribution (such as a Gaussian). In other words, if there are k different factors, we randomly sample from a k-dimensional Gaussian distribution (aka noise). In this post, we will take a look at one of the recent models in the area of deep learning and generative models, called the generative adversarial network (GAN). This model can be seen as a game between two agents: the generator and the discriminator. The generator generates images from noise, and the discriminator discriminates between real images and those generated by the generator. The objective is to train the model such that, while the discriminator tries to distinguish generated images from real images, the generator tries to fool the discriminator.

To train the model, we need to define a cost. In the case of the GAN, the errors made by the discriminator are considered as the cost. Consequently, the objective of the discriminator is to minimize the cost, while the objective of the generator is to fool the discriminator by maximizing the cost. A graphical illustration of the model is shown in Figure 2.

Formally, we define the discriminator as a binary classifier D : R^m → {0, 1} and the generator as the mapping G : R^k → R^m, in which k is the dimension of the latent space that represents all of the factors of variation. Denoting the data by x and a point in the latent space by z, the model can be trained by playing the following minmax game:

min_G max_D V(D, G) = E_{x ∼ p_data(x)} [log D(x)] + E_{z ∼ p_z(z)} [log(1 − D(G(z)))]

Note that the first term encourages the discriminator to discriminate between generated images and real ones, while the second term encourages the generator to come up with images that would fool the discriminator. In practice, the log in the second term can saturate, which would hurt the flow of the gradient. As a result, the generator's cost may be reformulated equivalently so that, instead of minimizing E_{z ∼ p_z(z)} [log(1 − D(G(z)))], the generator maximizes E_{z ∼ p_z(z)} [log D(G(z))].

At generation time, we can sample from a simple k-dimensional Gaussian distribution with zero mean and unit variance and pass it to the generator. Among the different models that can be used as the discriminator and the generator, we use deep neural networks with parameters θ_D and θ_G for the discriminator and the generator, respectively.
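To make the two terms of this objective concrete, here is a small NumPy sketch of how V(D, G) could be evaluated for one batch. It is a toy illustration only, with arbitrary dimensions and tiny linear stand-ins for the real discriminator and generator networks; it evaluates the objective but is not a working GAN.

import numpy as np

k, m, batch = 8, 64, 16                       # latent dim, data dim, batch size (arbitrary)
rng = np.random.RandomState(0)

W_g = rng.randn(k, m) * 0.1                   # stand-in for theta_G
W_d = rng.randn(m, 1) * 0.1                   # stand-in for theta_D

def generator(z):
    return z.dot(W_g)                         # G : R^k -> R^m

def discriminator(x):
    return 1.0 / (1.0 + np.exp(-x.dot(W_d)))  # D : R^m -> (0, 1)

x_real = rng.randn(batch, m)                  # stand-in for a batch of real data
z = rng.randn(batch, k)                       # z ~ N(0, I), the k-dimensional noise
x_fake = generator(z)

term_real = np.mean(np.log(discriminator(x_real)))        # E_x[log D(x)]
term_fake = np.mean(np.log(1.0 - discriminator(x_fake)))  # E_z[log(1 - D(G(z)))]
value = term_real + term_fake

# The discriminator is trained to increase `value`, the generator to decrease it
# (or, in the non-saturating variant, to increase E_z[log D(G(z))]).
print(value)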
Since the training boils down to updating the parameters using the backpropagation algorithm, the update rules follow directly from this objective: θ_D is moved along the gradient of V(D, G) to increase it, while θ_G is moved against the gradient to decrease it. If we use a convolutional network as the discriminator and another convolutional network with fractionally strided convolution layers as the generator, the model is called a DCGAN (Deep Convolutional Generative Adversarial Network). Some samples of bedroom image generation from this model are shown in Figure 3.

The generator can also be a sequential model, meaning that it can generate an image using a sequence of images with lower resolution or less detail. A few examples of images generated using such a model are shown in Figure 4. The GAN and later variants such as the DCGAN are currently considered to be among the best when it comes to the quality of the generated samples. The images look so realistic that you might assume the model has simply memorized instances of the training set, but a quick KNN search reveals this not to be the case.

About the author

Mohammad Pezeshki is a master's student in the LISA lab of Universite de Montreal, working under the supervision of Yoshua Bengio and Aaron Courville. He obtained his bachelor's in computer engineering from Amirkabir University of Technology (Tehran Polytechnic) in July 2014 and started his master's in September 2014. His research interests lie in the fields of artificial intelligence, machine learning, probabilistic models, and specifically deep learning.

How to Add Frameworks to iOS Applications with Carthage
Fabrizio Brancati
27 Sep 2016
5 min read
With the advent of iOS 8, Apple allowed the option of creating dynamic frameworks. In this post, you will learn how to create a dynamic framework from the ground up, and you will use Carthage to add frameworks to your apps. Let's get started!

Creating the Xcode project

Open Xcode and create a new project. From the templates, select Frameworks & Library under the iOS menu and then Cocoa Touch Framework. Type a name for your framework and select Swift as the language. We will create a framework that helps to store data using NSUserDefaults. We can name it DataStore, which is a generic name, in case we want to expand it in the future to allow the use of other data stores such as CoreData. The project is now empty and you have to add your first class, so add a new Swift file and name it DataStore, like the framework. You need to create the class:

public enum DataStoreType {
    case UserDefaults
}

public class DataStore {
    private init() {}

    public static func save(data: AnyObject, forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().setObject(data, forKey: key)
        }
    }

    public static func read(forKey key: String, in store: DataStoreType) -> AnyObject? {
        switch store {
        case .UserDefaults:
            return NSUserDefaults.standardUserDefaults().objectForKey(key)
        }
    }

    public static func delete(forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().removeObjectForKey(key)
        }
    }
}

Here we have created a DataStoreType enum, to allow expansion in the future, and the DataStore class with the functions to save, read, and delete. That's it! You have just created the framework!

How to use the framework

To use the created framework, build it with CMD + B, right-click on the framework in the Products folder of the Xcode project, and click on Show in Finder. To use it, you must drag and drop this file into your project. In this case, we will create an example project to show you how. Add the framework to your app project by adding it to the Embedded Binaries section on the General page of the Xcode project. Note that if you see it duplicated in the Linked Frameworks and Libraries section, you can remove the first one. You have just included your framework in the app. Now we have to use it, so import it (I will import it in the ViewController class for test purposes, but you can import it wherever you want) and let's use the DataStore framework by saving and reading a String from NSUserDefaults. This is the code:

import UIKit
import DataStore

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        DataStore.save("Test", forKey: "Test", in: .UserDefaults)
        print(DataStore.read(forKey: "Test", in: .UserDefaults)!)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

Build the app and see the framework do its work! You should see this in the Xcode console:

Test

Now you have created a framework in Swift and you have used it with an app! Note that the framework created for the iOS Simulator is different from the one created for a device, because it is built for a different architecture. To build a universal framework, you can use Carthage, which is shown in the next section.
Using Carthage

Carthage is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. To install it, you can download the Carthage.pkg file from GitHub or use Homebrew:

brew update
brew install carthage

Because Carthage is only able to build a framework from Git, we will use Alamofire, a popular HTTP networking library available on GitHub. Open the project folder and create a file named Cartfile. This is where we tell Carthage what it has to build and in which version:

github "Alamofire/Alamofire"

We don't specify a version because this is only a test, but doing so is good practice. Now open the Terminal app, go into the project folder, and type:

carthage update

You should see Carthage do some work; when it has finished, use Finder to go to the project folder, then Carthage, Build, iOS, and that is where the framework is. To add it to the app, we have to do a bit more work than before. Drag and drop the framework from the Carthage/Build/iOS folder into the Linked Frameworks and Libraries section on the General settings tab of the Xcode project. On the Build Phases tab, click on the + icon and choose New Run Script Phase with the following script:

/usr/local/bin/carthage copy-frameworks

Now add the paths of the frameworks under Input Files, which in this case is:

$(SRCROOT)/FrameworkTest/Carthage/Build/iOS/Alamofire.framework

This script works around an App Store submission bug triggered by universal binaries and ensures that the necessary bitcode-related files and dSYMs are copied when archiving. Now you only have to import the frameworks in your Swift file and use them as we did earlier in this post!

Summary

In this post, you learned how to create a custom framework for sharing code between your apps, along with the creation of a GitHub repository to share your open source framework with the community of developers. You also learned how to use Carthage with your GitHub repository, or with a popular framework like Alamofire, and how to import it into your apps.

About The Author

Fabrizio Brancati is a mobile app developer and web developer currently working and living in Milan, Italy, with a passion for innovation and discovering new things. He has been developing with Objective-C since iOS 3 and the iPod touch. When Swift came out, he learned it and was so excited that he remade an Objective-C framework available on GitHub in Swift (BFKit / BFKit-Swift). Software development is his driving passion, and he loves it when others make use of his software.