How-To Tutorials

Parallel Computing

Packt
30 Sep 2016
9 min read
In this article, written by Jalem Raj Rohit, author of the book Julia Cookbook, we cover the following recipes:

  • Basic concepts of parallel computing
  • Data movement
  • Parallel map and loop operations
  • Channels

Introduction

In this article, you will learn about performing parallel computing and using it to handle big data. Concepts such as data movement, sharded arrays, and the map-reduce framework are important to know in order to handle large amounts of data by computing on it with parallelized CPUs. All the concepts discussed in this article will help you build good parallel computing and multiprocessing basics, including efficient data handling and code optimization.

Basic concepts of parallel computing

Parallel computing is a way of dealing with data in a parallel way. This can be done by connecting multiple computers as a cluster and using their CPUs for carrying out the computations. This style of computation is used when handling large amounts of data and also while running complex algorithms over significantly large data. The computations are executed faster due to the availability of multiple CPUs running them in parallel, as well as the direct availability of RAM to each of them.

Getting ready

Julia has in-built support for parallel computing and multiprocessing, so these computations rarely require any external libraries.

How to do it…

Julia can be started on your local computer using multiple cores of your CPU, which gives us multiple workers for the process. This is how you fire up Julia in multiprocessing mode in your terminal; it creates two worker processes on the machine, which means it uses two CPU cores:

    julia -p 2

The startup output might differ for different operating systems and machines.

Now, we will look at the remotecall() function. It takes multiple arguments, the first one being the process which we want to assign the task to. The next argument is the function which we want to execute, and the subsequent arguments are the parameters of that function. In this example, we will create a 2 x 2 random matrix and assign the task to process number 2. This can be done as follows:

    task = remotecall(2, rand, 2, 2)

Now that the remotecall() function for remote referencing has been executed, we will fetch the result of the function through the fetch() function. This can be done as follows:

    fetch(task)

Now, to perform some mathematical operations on the generated matrix, we can use the @spawnat macro, which takes in the process number and an expression built from the mathematical operation and the fetch() function. The @spawnat macro actually wraps the expression 5 .+ fetch(task) into an anonymous function and runs it on the second machine. This can be done as follows:

    task2 = @spawnat 2 5 .+ fetch(task)

There is also a function that eliminates the need for using the two separate functions remotecall() and fetch(). The remotecall_fetch() function takes multiple arguments, the first one being the process that the task is assigned to. The next argument is the function which you want to execute, and the subsequent arguments are the parameters of that function. Now, we will use the remotecall_fetch() function to fetch an element of the task matrix at a particular index. This can be done as follows:

    remotecall_fetch(2, getindex, task2, 1, 1)
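The calls above use the Julia 0.4-era API that the book targets. As a point of reference only, here is a hedged sketch of the same workflow in current Julia (1.x), where this functionality lives in the Distributed standard library and remotecall() takes the function as its first argument; treat it as an assumption about your local setup rather than part of the original recipe:

    using Distributed

    nprocs() < 3 && addprocs(2)             # ensure workers 2 and 3 exist (or start with julia -p 2)

    task  = remotecall(rand, 2, 2, 2)       # run rand(2, 2) on worker 2
    fetch(task)                             # fetch the 2 x 2 matrix
    task2 = @spawnat 2 5 .+ fetch(task)     # add 5 to every element, again on worker 2
    remotecall_fetch(getindex, 2, fetch(task2), 1, 1)   # element at index (1, 1)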
How it works…

Julia can be started in multiprocessing mode by specifying the number of processes needed while starting up the REPL. In this example, we started Julia in two-process mode. The maximum number of processes depends on the number of cores available in the CPU.

The remotecall() function helps in selecting a particular process from the running processes in order to run a function or, in fact, any computation for us.

The fetch() function is used to fetch the results of the remotecall() function from a common data resource (or process) for all the running processes. The details of the data source will be covered in later sections.

The results of the fetch() function can also be used for further computations, which can be carried out with the @spawnat macro along with the results of fetch(). This assigns a process for the computation.

The remotecall_fetch() function further eliminates the need for the fetch() function in the case of direct execution. It has both the remotecall() and fetch() operations built into it, so it acts as a combination of the second and third points in this section.

Data movement

In parallel computing, data movements are common and should be minimized because of the time and network overhead they incur. In this recipe, we will see how they can be optimized to avoid latency as much as we can.

Getting ready

To get ready for this recipe, you need to have the Julia REPL started in multiprocessing mode. This is explained in the Getting ready section of the preceding recipe.

How to do it…

Firstly, we will see how to do a matrix computation using the @spawn macro, which helps with data movement. We construct a matrix of shape 200 x 200 and then square it using the @spawn macro. This can be done as follows:

    mat = rand(200, 200)
    exec_mat = @spawn mat^2
    fetch(exec_mat)

Now, we will look at another way to achieve the same thing. This time, we will use the @spawn macro directly instead of the initialization step. We will discuss the advantages and drawbacks of each method in the How it works… section. This can be done as follows:

    mat = @spawn rand(200, 200)^2
    fetch(mat)

How it works…

In this example, we construct a 200 x 200 matrix and then use the @spawn macro to spawn a process in the CPU to execute the computation for us. The @spawn macro uses one of the two running processes for the computation. In the second example, you learned how to use the @spawn macro directly without an extra initialization step. The fetch() function helps us fetch the results from a common data resource of the processes. More on this will be covered in the following recipes.

Parallel maps and loop operations

In this recipe, you will learn a bit about the famous Map-Reduce framework and why it is one of the most important ideas in the domains of big data and parallel computing. You will learn how to parallelize loops and use reducing functions on them across several CPUs and machines, using the concepts of parallel computing you learned about in the previous recipes.

Getting ready

Just like the previous sections, Julia needs to be running in multiprocessing mode to follow the examples. This can be done through the instructions given in the first section.

How to do it…

Firstly, we will write a function that generates and adds n random bits. Writing this function has nothing to do with multiprocessing; it uses simple Julia functions and loops. This function can be written as follows.
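The listing for count_heads() did not survive in this extract. A minimal version consistent with the description above (and with the parallel-computing example in the Julia manual that this recipe follows) would be the following sketch; the exact body in the book may differ:

    function count_heads(n)
        c::Int = 0
        for i = 1:n
            c += Int(rand(Bool))
        end
        c
    end

Save it in a file named count_heads.jl so that it can be loaded by the worker processes in the next step.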
Now, we will use the @spawn macro, which we learned about previously, to run the count_heads() function as separate processes. The count_heads() function needs to be in the same directory for this to work. This can be done as follows:

    require("count_heads")

    a = @spawn count_heads(100)
    b = @spawn count_heads(100)
    fetch(a) + fetch(b)

However, we can use the concept of multiprocessing to parallelize the loop directly and take the sum. The parallelizing part is called mapping, and the addition of the parallelized bits is called reduction; together, the process constitutes the famous Map-Reduce framework. This is made possible using the @parallel macro, as follows:

    nheads = @parallel (+) for i = 1:200
        Int(rand(Bool))
    end

How it works…

The first function is a simple Julia function that adds random bits with every loop iteration. It was created just for the demonstration of Map-Reduce operations.

In the second point, we spawn two separate processes for executing the function and then fetch the results of both and add them up. However, that is not really a neat way to carry out parallel computation of functions and loops. Instead, the @parallel macro provides a better way to do it: it allows the user to parallelize the loop and then reduce the computations through an operator, which together is called the Map-Reduce operation.

Channels

Channels are like the background plumbing for parallel computing in Julia. They are the reservoirs from which the individual processes access their data.

Getting ready

The prerequisite is similar to the previous sections. This is mostly a theoretical section, so you just need to run your experiments on your own. For that, you need to run your Julia REPL in multiprocessing mode.

How to do it…

Channels are shared queues with a fixed length. They are common data reservoirs for the running processes, which multiple readers or workers can access. The workers can access the data through the fetch() function, which we already discussed in the previous sections. The workers can also write to the channel through the put!() function, which means the workers can add more data to the resource, and that data can be accessed by all the workers running a particular computation. Closing a channel after usage is good practice to avoid data corruption and unnecessary memory usage; it can be done using the close() function.

Summary

In this article, we covered the basic concepts of parallel computing and the data movement that takes place in the network. We also learned about parallel maps and loop operations along with the famous Map-Reduce framework. At the end, we got a brief understanding of channels and how individual processes access their data from them.

Introduction to Neural Networks with Chainer – Part 1

Hiroyuki Vincent
30 Sep 2016
8 min read
With the increasing popularity of neural networks, or deep learning, companies ranging from smaller start-ups to major ones such as Google have been releasing frameworks for deep learning related tasks. Caffe from the Berkeley Vision and Learning Center (BVLC), Torch, and Theano have been around for quite a while. TensorFlow was open sourced by Google in 2015 and has since been expanding its community. Neon from Nervana Systems is a more recent addition with a good reputation for its performance.

This three-part post series will introduce you to yet another framework for neural networks called Chainer, which, similarly to most of the previously mentioned frameworks, is based on Python. It has an intuitive interface with a low learning curve, but hasn't yet been widely adopted outside Japan, where it is being developed. This article is split into three parts: this first part explains the characteristics of Chainer and its basic data structures, while the second and third parts will help you get started with the actual training. The basic theory of neural networks will not be covered, but if you are familiar with the forward pass, back propagation, and gradient descent, and on top of that have some coding experience, you should be able to follow this article.

What is Chainer?

Chainer is an open sourced Python based framework maintained by Preferred Infrastructure/Preferred Networks in Japan. The company behind the framework puts a heavy emphasis on closing the gap between the machine learning research carried out in academia and the more practical applications of machine learning. They focus on deep learning, IoT, and edge-heavy computing, with applications in the automobile manufacturing and healthcare markets, for instance by developing autonomous cars and factory robots with grasping capabilities.

Why Chainer?

Defining and training simple neural networks with Chainer can be done with just a few lines of code. It can also scale to larger models with more complex architectures with little effort. It is a framework for basically anyone working, studying, or researching in neural networks. There are, however, other alternatives, as mentioned in the introduction. This section will explain the characteristics of the framework and why you might want to try it out.

One major issue with deep learning related tasks is configuring the hyperparameters. Chainer makes this less of a pain. It comes with many layers, activation functions, loss functions, and optimization algorithms in a plug-and-play fashion. With a single line of code or a single function call, those components can be added or removed without affecting the rest of the program. The abstraction and class structure of the framework make it intuitive to learn and start experimenting with. We will dig deeper into that in the second part of this series. On top of that, it is well documented. It is actually so well documented that you may stop reading this article right here and jump to the official documentation. Much of the content in this post is extracted from the official documentation, but I will try to complement it with additional details and awareness of common pitfalls.

GPU Support

NumPy is a Python package commonly used in academia due to its rich interface for manipulating multidimensional arrays, similar to MATLAB. If you are working with neural networks, chances are you're well familiar with NumPy and its methods and operations. Chainer comes with CuPy (chainer.cuda.cupy), a GPU alternative to NumPy. CuPy is a CUDA-based GPU backend for array manipulation that implements a subset of the NumPy interface. Hence, it is possible to write almost generic code for both the CPU and the GPU; you can simply change the NumPy package to CuPy and vice versa in your source code to switch from one to the other. It unfortunately lacks some features such as advanced indexing and numpy.where. Multi-GPU training in terms of model parallelism and data parallelism is also supported, although not in the scope of this series. As we will discover, the most fundamental data structure in Chainer is a NumPy (or CuPy) array wrapper with added functionality.
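As an illustration of how the NumPy/CuPy swap might look in practice, here is a small hedged sketch; the use_gpu flag and the array module alias xp are conventions of this sketch (not from the original post), and it assumes Chainer 1.x with CUDA and CuPy available when the flag is enabled:

    import numpy as np
    from chainer import cuda

    use_gpu = False  # set to True only if a CUDA device and CuPy are available

    # Pick the array module once, then write the rest of the code against xp.
    xp = cuda.cupy if use_gpu else np

    a = xp.arange(9, dtype=xp.float32).reshape((3, 3))
    b = xp.ones((3, 3), dtype=xp.float32)
    print(xp.dot(a, b))  # the same call works on CPU (NumPy) and GPU (CuPy)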
The Basics of Chainer

Let's write some code to get familiar with the Chainer interface. First, make sure that you have a working Python environment. We will use pip to install Chainer even though it can be installed directly from source. The code included in this post is verified with Python 3.5.1 and Chainer 1.8.2.

Installation

Install NumPy and Chainer using pip, which comes with the Python environment:

    pip install numpy
    pip install chainer

Chainer Variables and Variable Differentiation

You are now ready to start writing code. First, let's take a look at the snippet below.

    import numpy as np
    from chainer import Variable

    x_data = np.array([4], dtype=np.float32)
    x = Variable(x_data)
    assert x.data == x_data  # True

    y = x ** 2 + 5 * x + 3
    assert isinstance(y, Variable)  # True

    # Compute the gradient for x and store it in x.grad
    y.backward()

    # y'(x) = 2 * x + 5; y'(4) = 13
    assert x.grad == 13  # True

The most fundamental data structure is the Chainer variable class, chainer.Variable. It can be initialized by passing a NumPy array (which must have the datatype numpy.float32) or by applying functions to already instantiated Chainer variables. Think of them as wrappers of NumPy's N-dimensional arrays. To access the NumPy array from the variable, use the data property.

So what makes them different? Each Chainer variable actually holds a reference to its creator, unless it is a leaf node, such as x in the example above. This means that y can reference x through the functions that created it. That way, a computational graph is maintained by the framework, which is used for computing the gradients during back propagation. This is exactly what happens when calling the Variable.backward() method on y: the variable is differentiated and the gradient with respect to x is stored in the x variable itself. A pitfall here is that if x were an array with more than one element, y.grad would need to be initialized with an initial error. If, as in the example above, x only contains one element, the error is automatically set to 1.
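To make that pitfall concrete, here is a small hedged sketch (assuming Chainer 1.x, as used in this post); the values are illustrative only:

    import numpy as np
    from chainer import Variable

    x = Variable(np.array([1, 2, 3], dtype=np.float32))
    y = x ** 2

    # y is not a scalar, so backward() needs an initial error of the same shape.
    y.grad = np.ones((3,), dtype=np.float32)
    y.backward()

    print(x.grad)  # [ 2.  4.  6.] -- the derivative 2 * x, element-wise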
Principles of Gradient Descent

Using the differentiation mechanism above, you may implement a gradient descent optimization algorithm in the following way. Initialize a weight w, a one-dimensional array with one element, to any value, say 4 as in the previous example, and iteratively optimize the loss function w ** 2. The loss function is a function depending on the parameter w, just as y depended on x. The loss function is obviously at its global minimum when w == 0. This is what we want to achieve with gradient descent, and we can get very close by repeating the optimization step using Variable.backward().

    import numpy as np
    from chainer import Variable

    w = Variable(np.array([4], dtype=np.float32))

    learning_rate = 0.1
    max_iters = 100

    for i in range(max_iters):
        loss = w ** 2
        loss.backward()  # Compute w.grad

        # Optimize / Update the parameter using gradient descent
        w.data -= learning_rate * w.grad

        # Reset gradient for the next iteration, w.grad == 0
        w.zerograd()

        print('Iteration: {} Loss: {}'.format(i, loss.data))

In each iteration, the parameter is updated towards the negative gradient to lower the loss; we are performing gradient descent. Note that the gradient is scaled by a learning rate before the parameter is updated in order to stabilize the optimization. In fact, if the learning rate were removed from this example, the loss would be stuck at 16, since the derivative of the loss function is 2 * w, which would cause w to simply jump back and forth between 4 and -4. With the learning rate, we see that the loss decreases with each iteration. loss.data, as seen in the output below, is an array with one element, since it has the same dimensions as w.data.

    Iteration: 0 Loss: [ 16.]
    Iteration: 1 Loss: [ 10.24000072]
    Iteration: 2 Loss: [ 6.55359983]
    Iteration: 3 Loss: [ 4.19430351]
    Iteration: 4 Loss: [ 2.68435407]
    Iteration: 5 Loss: [ 1.71798646]
    Iteration: 6 Loss: [ 1.09951138]
    Iteration: 7 Loss: [ 0.70368725]
    Iteration: 8 Loss: [ 0.45035988]
    Iteration: 9 Loss: [ 0.2882303]
    ...

Summary

This was a very brief introduction to the framework and its fundamental chainer.Variable class, and to how variable differentiation using computational graphs forms the core concept of the framework. In the second and third parts of this series, we will implement a complete training algorithm using Chainer.

About the Author

Hiroyuki Vincent Yamazaki is a graduate student at KTH, Royal Institute of Technology in Sweden, currently conducting research in convolutional neural networks at Keio University in Tokyo, partially using Chainer as part of a double-degree programme.

Functions in Swift

Packt
30 Sep 2016
15 min read
In this article by Dr. Fatih Nayebi, the author of the book Swift 3 Functional Programming, we dive deeper into functions, the fundamental building blocks of functional programming, and explain all the aspects related to defining and using functions in functional Swift, with coding examples. This article will cover the following topics:

  • The general syntax of functions
  • Defining and using function parameters
  • Setting internal and external parameters
  • Setting default parameter values
  • Defining and using variadic functions
  • Returning values from functions
  • Defining and using nested functions

What is a function?

Object-oriented programming (OOP) looks very natural to most developers, as it simulates a real-life situation of classes or, in other words, blueprints and their instances, but it brought a lot of complexities and problems such as instance and memory management, complex multithreading, and concurrency programming.

Before OOP became mainstream, we used to develop in procedural languages. In the C programming language, we did not have objects and classes; we would use structs and function pointers. So now we are talking about functional programming, which relies mostly on functions, just as procedural languages relied on procedures. We are able to develop very powerful programs in C without classes; in fact, most operating systems are developed in C. There are other multipurpose programming languages, such as Go by Google, that are not object-oriented and are getting very popular because of their performance and simplicity.

So, are we going to be able to write very complex applications without classes in Swift? We might wonder why we should do this. Generally, we should not, but attempting it will introduce us to the capabilities of functional programming.

A function is a block of code that executes a specific task, can be stored, can persist data, and can be passed around. We define them in standalone Swift files as global functions, or inside other building blocks such as classes, structs, enums, and protocols as methods. They are called methods if they are defined in classes, but in terms of definition there is no difference between a function and a method in Swift. Defining them in other building blocks enables methods to use the scope of the parent and to change it. They can access the scope of their parent and they have their own scope: any variable that is defined inside a function is not accessible outside of it, and the variables defined inside functions, along with the corresponding allocated memory, go away when the function terminates.

Functions are very powerful in Swift. We can compose a program with only functions, as functions can receive and return functions, capture variables that exist in the context in which they were declared, and persist data inside themselves. To understand the functional programming paradigms, we need to understand the capability of functions in detail. We need to consider whether we can avoid classes and use only functions, so we will cover all the details related to functions in the upcoming sections of this article.

The general syntax of functions and methods

We can define functions or methods as follows:

    accessControl func functionName(parameter: ParameterType) throws -> ReturnType { }
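As a concrete illustration of that general syntax, here is a small hedged sketch in Swift 3 syntax; the type, function, and error names are invented for this example and are not from the book:

    enum GreetingError: Error {
        case emptyName
    }

    // access control, func keyword, name, parameter list, throws, return type, body
    public func makeGreeting(name: String) throws -> String {
        guard !name.isEmpty else { throw GreetingError.emptyName }
        return "Hello, \(name)"
    }

    do {
        let greeting = try makeGreeting(name: "Fatih")
        print(greeting)  // Hello, Fatih
    } catch {
        print("Could not build a greeting: \(error)")
    }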
As we know already, when functions are defined in objects, they become methods. The first step in defining a method is to tell the compiler from where it can be accessed. This concept is called access control in Swift, and there are three levels of access control. We are going to explain them for methods as follows:

  • Public access: Any entity can access a method that is defined as public if it is in the same module. If an entity is not in the same module, we will need to import the module to be able to call the method. We need to mark our methods and objects as public when we develop frameworks in order to enable other modules to use them.
  • Internal access: Any method that is defined as internal can be accessed from other entities in the same module, but cannot be accessed from other modules.
  • Private access: Any method that is defined as private can be accessed only from the same source file.

By default, if we do not provide the access modifier, a variable or function becomes internal. Using these access modifiers, we can structure our code properly; for instance, we can hide details from other modules if we define an entity as internal. We can even hide the details of a method from other files if we define it as private.

Before Swift 2.0, we had to define everything as public or add all source files to the testing target. Swift 2.0 introduced the @testable import syntax, which enables us to define internal or private methods that can be accessed from testing modules.

Methods can generally come in three forms:

  • Instance methods: We need to obtain an instance of an object (in this article we will refer to classes, structs, and enums as objects) in order to be able to call the method defined in it, and then we will be able to access the scope and data of the object.
  • Static methods: Swift also names them type methods. They do not need any instance of an object and they cannot access instance data. They are called by putting a dot after the name of the object type (for example, Person.sayHi()). Static methods cannot be overridden by the subclasses of the object that they reside in.
  • Class methods: Class methods are like static methods, but they can be overridden by subclasses.
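The following hedged sketch illustrates the three method forms side by side; the Person type and most of its members are invented for this illustration (only sayHi() is mentioned in the text above):

    class Person {
        // Instance method: requires an instance and can use its data
        func sayHi() { print("Hi") }

        // Static (type) method: called on the type, cannot be overridden
        static func species() -> String { return "Homo sapiens" }

        // Class method: called on the type, but subclasses may override it
        class func defaultName() -> String { return "Unknown" }
    }

    let person = Person()
    person.sayHi()                 // Hi
    print(Person.species())        // Homo sapiens
    print(Person.defaultName())    // Unknown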
We have covered the keywords that are required for method definitions; now we will concentrate on the syntax that is shared between functions and methods. There are other concepts related to methods that are out of the scope of this article, as we will concentrate on functional programming in Swift.

Continuing with the function definition, next comes the func keyword, which is mandatory and is used to tell the compiler that it is going to deal with a function. Then comes the function name, which is mandatory and is recommended to be camel-cased with the first letter in lowercase. The function name should state what the function does, and is recommended to be a verb when we define our methods in objects. Basically, our classes will be named with nouns and methods will be verbs in the form of orders to the class. In pure functional programming, as the function does not reside in other objects, functions can be named by their functionality.

Parameters follow the function name. They are defined in parentheses, which are used to pass arguments to the function. Parentheses are mandatory even if we do not have any parameters. We will cover all aspects of parameters in an upcoming section of this article.

Then comes throws, which is not mandatory. A function or method that is marked with the throws keyword may or may not throw errors. At this point, it is enough to recognize them when we see them in a function or method signature.

The next entity in a function type declaration is the return type. If a function is not void, the return type comes after the -> sign. The return type indicates the type of entity that is going to be returned from a function. We will cover return types in detail in an upcoming section of this article, so now we can move on to the last piece of the function, which is present in most programming languages: our beloved { }. We defined functions as blocks of functionality, and {} defines the borders of the block, so the function body is declared and execution happens in there. We will write the functionality inside {}.

Best practices in function definition

There are proven best practices for function and method definition provided by great software engineering resources, such as Clean Code, Code Complete, and Coding Horror, which we can summarize as follows:

  • Try not to exceed 8-10 lines of code in each function, as shorter functions or methods are easier to read, understand, and maintain.
  • Keep the number of parameters minimal, because the more parameters a function has, the more complex it is.
  • Functions should have at least one parameter and one return value.
  • Avoid using type names in function names, as that is going to be redundant.
  • Aim for one and only one functionality in a function.
  • Name a function or method in a way that describes its functionality properly and is easy to understand.
  • Name functions and methods consistently. If we have a connect function, we can have a disconnect one.
  • Write functions to solve the current problem and generalize them when needed. Try to avoid what-if scenarios, as probably you aren't going to need it (YAGNI).

Calling functions

We have covered the general syntax for defining a function, and a method if it resides in an object. Now it is time to talk about how we call our defined functions and methods. To call a function, we use its name and provide its required parameters. There are complexities with providing parameters that we will cover in the upcoming section. For now, we are going to cover the most basic way of providing parameters, as follows:

    funcName(paramName, secondParam: secondParamName)

This type of function calling should be familiar to Objective-C developers, as the first parameter is not named and the rest are named. To call a method, we need to use the dot notation provided by Swift. The following examples are for class instance methods and static class methods:

    let someClassInstance = SomeClass()
    someClassInstance.funcName(paramName, secondParam: secondParamName)
    StaticClass.funcName(paramName, secondParam: secondParamName)

Defining and using function parameters

In a function definition, parameters follow the function name, and they are constants by default, so we will not be able to alter them inside the function body if we do not mark them with var. In functional programming, we avoid mutability; therefore, we would never use mutable parameters in functions. Parameters should be inside parentheses. If we do not have any parameters, we simply put open and close parentheses without any characters between them:

    func functionName() { }

In functional programming, it is important to have functions that have at least one parameter. We will explain why this is important in upcoming sections. We can have multiple parameters separated by commas.
In Swift, parameters are named, so we need to provide the parameter name and type after a colon, as shown in the following example:

    func functionName(parameter: ParameterType, secondParameter: ParameterType) { }

    // To call:
    functionName(parameter, secondParameter: secondParam)

ParameterType can also be an optional type, so the function becomes the following if our parameters need to be optionals:

    func functionName(parameter: ParameterType?, secondParameter: ParameterType?) { }

Swift enables us to provide external parameter names that will be used when functions are called. The following example presents the syntax:

    func functionName(externalParamName localParamName: ParameterType)

    // To call:
    functionName(externalParamName: parameter)

Only the local parameter name is usable in the function body. It is possible to omit parameter names with the _ syntax; for instance, if we do not want to provide any parameter name when the function is called, we can use _ as externalParamName for the second or subsequent parameters. If we want to have a parameter name for the first parameter in function calls, we can simply provide the local parameter name as the external one as well. In this article, we are going to use the default function parameter definition.

Parameters can have default values as follows:

    func functionName(parameter: Int = 3) {
        print("\(parameter) is provided.")
    }

    functionName(5) // prints "5 is provided."
    functionName()  // prints "3 is provided."

Parameters can be defined as inout to enable function callers to obtain parameters that are going to be changed in the body of a function. As we can use tuples for function returns, it is not recommended to use inout parameters unless we really need them.

We can define function parameters as tuples. For instance, the following example function accepts a tuple of the (Int, Int) type:

    func functionWithTupleParam(tupleParam: (Int, Int)) {}

As, under the hood, variables are represented by tuples in Swift, the parameters to a function can also be tuples. For instance, let's have a simple convert function that takes an array of Int and a multiplier and converts it to a different structure. Let's not worry about the implementation of this function for now:

    let numbers = [3, 5, 9, 10]

    func convert(numbers: [Int], multiplier: Int) -> [String] {
        let convertedValues = numbers.enumerate().map { (index, element) in
            return "\(index): \(element * multiplier)"
        }
        return convertedValues
    }

If we use this function as convert(numbers, multiplier: 3), the result is going to be ["0: 9", "1: 15", "2: 27", "3: 30"]. We can call our function with a tuple. Let's create a tuple and pass it to our function:

    let parameters = (numbers, multiplier: 3)
    convert(parameters)

The result is identical to our previous function call. However, passing tuples in function calls is deprecated and will be removed in Swift 3.0, so it is not recommended to use them.

We can define higher-order functions that can receive functions as parameters. In the following example, we define funcParam as a function type of (Int, Int) -> Int:

    func functionWithFunctionParam(funcParam: (Int, Int) -> Int)

In Swift, parameters can be of a generic type. The following example presents a function that has two generic parameters. In this syntax, any type (for example, T or V) that we put inside <> should be used in the parameter definition:

    func functionWithGenerics<T, V>(firstParam: T, secondParam: V)
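To show how the two declarations above might be used, here is a hedged sketch; the bodies and call sites are invented for this illustration (the original article only shows the signatures), and the calls use Swift 3 labeling:

    // Higher-order function: applies the passed-in function to fixed arguments.
    func apply(funcParam: (Int, Int) -> Int) -> Int {
        return funcParam(3, 4)
    }

    print(apply(funcParam: +))             // 7
    print(apply(funcParam: { $0 * $1 }))   // 12

    // Generic function: both T and V appear in the parameter list.
    func firstOfPair<T, V>(firstParam: T, secondParam: V) -> T {
        return firstParam
    }

    print(firstOfPair(firstParam: "Swift", secondParam: 3))   // Swift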
Defining and using variadic functions

Swift enables us to define functions with variadic parameters. A variadic parameter accepts zero or more values of a specified type. Variadic parameters are similar to array parameters, but they are more readable and can only be used as the last parameter in multiparameter functions. As variadic parameters can accept zero values, we will need to check whether they are empty. The following example presents a function with a variadic parameter of the String type:

    func greet(names: String...) {
        for name in names {
            print("Greetings, \(name)")
        }
    }

    // To call this function
    greet("Steve", "Craig")          // prints twice
    greet("Steve", "Craig", "Johny") // prints three times

Returning values from functions

If we need our function to return a value, tuple, or another function, we can specify it by providing ReturnType after ->. For instance, the following example returns String:

    func functionName() -> String { }

Any function that has a ReturnType in its definition should have a return keyword with the matching type in its body. Return types can be optionals in Swift, so the function becomes the following if the return needs to be optional:

    func functionName() -> String? { }

Tuples can be used to provide multiple return values. For instance, the following function returns a tuple of the (Int, String) type:

    func functionName() -> (code: Int, status: String) { }

As we are using parentheses for tuples, we should avoid using parentheses for single return value functions. Tuple return types can be optional too, so the syntax becomes as follows:

    func functionName() -> (code: Int, status: String)? { }

This syntax makes the entire tuple optional; if we want to make only status optional, we can define the function as follows:

    func functionName() -> (code: Int, status: String?) { }

In Swift, functions can return functions. The following example presents a function with the return type of a function that takes two Int values and returns Int:

    func funcName() -> (Int, Int) -> Int {}

If we do not expect a function to return any value, tuple, or function, we simply do not provide a ReturnType:

    func functionName() { }

We could also explicitly declare it with the Void keyword:

    func functionName() -> Void { }

In functional programming, it is important to have return types in functions. In other words, it is good practice to avoid functions that have Void as a return type. A function with a Void return type is typically a function that changes another entity in the code; otherwise, why would we need the function? OK, we might have wanted to log an expression to the console or a log file, or write data to a database, or write a file to a filesystem. In these cases, it is also preferable to have a return or feedback related to the success of the operation. As we try to avoid mutability and stateful programming in functional programming, we can assume that our functions will have returns in different forms. This requirement is in line with the mathematical underpinnings of functional programming. In mathematics, a simple function is defined as follows:

    y = f(x) or f(x) -> y

Here, f is a function that takes x and returns y. Therefore, a function receives at least one parameter and returns at least one value. In functional programming, following the same paradigm makes reasoning easier, makes function composition possible, and makes code more readable.

Summary

This article explained function definition and usage in detail, giving examples of parameter and return types.

Learning How to Manage Records in Visualforce

Packt
29 Sep 2016
7 min read
In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface: redrawing the form to change available picklist options, or capturing different information based on the user's selections.

Styling fields as required

Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied.

In the scenario where one or more inputs are required and there are additional validation rules, for example when one of either the Email or Phone fields must be defined for a contact, this can lead to a drip feed of error messages to the user, because the user makes repeated unsuccessful attempts to submit the form, each time getting slightly further in the process.

Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user.

Getting ready

This topic makes use of a controller extension, so this must be created before the Visualforce page.

How to do it…

1. Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredStylingExt.cls Apex class from the downloaded code into the Apex Class area.
4. Click on the Save button.
5. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field.
8. Paste the contents of the RequiredStyling.page file from the downloaded code into the Visualforce Markup area and click on the Save button.
9. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
10. Locate the entry for the RequiredStyling page and click on the Security link.
11. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.
Clicking on the Save button without populating any of the fields results in the save failing with a number of errors. The Last Name field is constructed from a label and a text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form:

    <apex:pageBlockSectionItem >
        <apex:outputLabel value="Last Name"/>
        <apex:outputPanel id="detailrequiredpanel" layout="block"
                          styleClass="requiredInput">
            <apex:outputPanel layout="block" styleClass="requiredBlock" />
            <apex:inputText value="{!Contact.LastName}"/>
        </apex:outputPanel>
    </apex:pageBlockSectionItem>

The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes, to ensure that if Salesforce changes the names of its style classes, this does not break the page.

The controller extension's save action method carries out validation of all fields and attaches error messages to the page for all validation failures:

    if (String.IsBlank(cont.name)) {
        ApexPages.addMessage(new ApexPages.Message(
            ApexPages.Severity.ERROR, 'Please enter the contact name'));
        error=true;
    }
    if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) {
        ApexPages.addMessage(new ApexPages.Message(
            ApexPages.Severity.ERROR, 'Please supply the email address or phone number'));
        error=true;
    }
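The full RequiredStylingExt class is not reproduced in this extract. For orientation only, a minimal controller extension skeleton consistent with the snippets above might look like the following; the structure is an assumption, not the book's actual listing:

    public with sharing class RequiredStylingExt {
        private ApexPages.StandardController stdCtrl;
        private Contact cont;

        public RequiredStylingExt(ApexPages.StandardController std) {
            stdCtrl = std;
            cont = (Contact) std.getRecord();
        }

        public PageReference save() {
            Boolean error = false;
            // ... the validation shown above sets error and adds page messages ...
            if (error) {
                return null;           // redisplay the page with the messages
            }
            return stdCtrl.save();     // delegate to the standard controller
        }
    }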
Styling table columns as required

When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs.

Getting ready

This topic makes use of a custom controller, so this will need to be created before the Visualforce page.

How to do it…

1. First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredColumnController.cls Apex class from the downloaded code into the Apex Class area.
4. Click on the Save button.
5. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field.
8. Paste the contents of the RequiredColumn.page file from the downloaded code into the Visualforce Markup area and click on the Save button.
9. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
10. Locate the entry for the RequiredColumn page and click on the Security link.
11. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field.

Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for that particular row.

The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component:

    <apex:column >
        <apex:facet name="header">
            <apex:outputText styleclass="requiredHeader"
                value="{!$ObjectType.Contact.fields.LastName.label}" />
        </apex:facet>
        <apex:inputField value="{!contact.LastName}" required="false"/>
    </apex:column>

The Visualforce page's custom controller Save method checks whether any of the fields in the row are populated, and if so, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete:

    if ( (!String.IsBlank(cont.FirstName)) || (!String.IsBlank(cont.LastName)) ) {
        // a field is defined - check for last name
        if (String.IsBlank(cont.LastName)) {
            error=true;
            cont.LastName.addError('Please enter a value');
        }
    }

String.IsBlank() is used as it carries out three checks at once: that the supplied string is not null, that it is not empty, and that it does not only contain whitespace.

Summary

In this article, we covered the techniques needed to style fields and table columns to match custom requirements.

QT Style Sheets

Packt
29 Sep 2016
26 min read
In this article by Lee Zhi Eng, author of the book Qt5 C++ GUI Programming Cookbook, we will see how Qt allows us to easily design our program's user interface through a method most people are familiar with. Qt not only provides us with a powerful user interface toolkit called Qt Designer, which enables us to design our user interface without writing a single line of code, but it also allows advanced users to customize their user interface components through a simple scripting language called Qt Style Sheets.

In this article, we will cover the following recipes:

  • Using style sheets with Qt Designer
  • Basic style sheet customization
  • Creating a login screen using style sheets

Use style sheets with Qt Designer

In this example, we will learn how to change the look and feel of our program and make it look more professional by using style sheets and resources. Qt allows you to decorate your GUIs (Graphical User Interfaces) using a style sheet language called Qt Style Sheets, which is very similar to CSS (Cascading Style Sheets), used by web designers to decorate their websites.

How to do it…

The first thing we need to do is open up Qt Creator and create a new project. If this is the first time you have used Qt Creator, you can either click the big button that says New Project with a + sign, or simply go to File | New File or Project. Then, select Application under the Project window and select Qt Widgets Application. After that, click the Choose button at the bottom. A window will then pop up and ask you to insert the project name and its location. Once you're done with that, click Next several times and click the Finish button to create the project. We will just stick to all the default settings for now.

Once the project is created, the first thing you will see is the panel with big icons on the left side of the window, which is called the Mode Selector panel; we will discuss this more later in the How it works… section. You will also see all your source files listed on the Side Bar panel, which is located right next to the Mode Selector panel. This is where you can select which file you want to edit, which, in this case, is mainwindow.ui, because we are about to start designing the program's UI!

Double click mainwindow.ui and you will see an entirely different interface appear. Qt Creator switched you from the script editor to the UI editor (Qt Designer) because it detected the .ui extension on the file you're trying to open. You will also notice that the highlighted button on the Mode Selector panel has changed from the Edit button to the Design button. You can switch back to the script editor or change to any other tool by clicking one of the buttons located in the upper half of the Mode Selector panel.

Let's go back to Qt Designer and look at the mainwindow.ui file. This is basically the main window of our program (as the file name implies), and it's empty by default, without any widgets on it. You can try to compile and run the program by pressing the Run button (the green arrow button) at the bottom of the Mode Selector panel, and you will see an empty window pop up once the compilation is complete.

Now, let's add a push button to our program's UI by clicking on the Push Button item in the widget box (under the Buttons category) and dragging it to the main window in the form editor.
Then, keep the push button selected and you will see all the properties of this button inside the property editor on the right side of your window. Scroll down to somewhere around the middle and look for a property called styleSheet. This is where you apply styles to your widget, which may or may not be inherited by its children or grandchildren recursively, depending on how you set your style sheet. Alternatively, you can also right click on any widget in your UI in the form editor and select Change Style Sheet from the pop-up menu.

You can click on the input field of the styleSheet property to directly write the style sheet code, or click on the … button beside the input field to open up the Edit Style Sheet window, which has a bigger space for writing longer style sheet code. At the top of the window you can find several buttons, such as Add Resource, Add Gradient, Add Color, and Add Font, that can help you kick-start your coding if you don't remember the property names.

Let's try some simple styling with the Edit Style Sheet window. Click Add Color and choose color. Pick a random color from the color picker window, let's say a pure red. Then click OK. Now you will see that a line of code has been added to the text field of the Edit Style Sheet window, which in my case is as follows:

    color: rgb(255, 0, 0);

Click the OK button and you will see that the text on your push button has changed to red.

How it works

Let's take a bit of time to familiarize ourselves with Qt Designer's interface before we start learning how to design our own UI:

  • Menu bar: The menu bar houses application-specific menus which provide easy access to essential functions such as creating new projects, saving files, undo, redo, copy, paste, and so on. It also allows you to access development tools that come with Qt Creator, such as the compiler, debugger, and profiler.
  • Widget box: This is where you can find all the different types of widgets provided by Qt Designer. You can add a widget to your program's UI by clicking one of the widgets in the widget box and dragging it to the form editor.
  • Mode selector: The mode selector is a side panel that places shortcut buttons for easy access to different tools. You can quickly switch between the script editor and the form editor by clicking the Edit or Design buttons on the mode selector panel, which is very useful for multitasking. You can also easily navigate to the debugger and profiler tools in the same manner.
  • Build shortcuts: The build shortcuts are located at the bottom of the mode selector panel. You can build, run, and debug your project easily by pressing the shortcut buttons here.
  • Form editor: The form editor is where you edit your program's UI. You can add different widgets to your program by selecting a widget from the widget box and dragging it to the form editor.
  • Form toolbar: From here, you can quickly select a different form to edit: click the drop down box located above the widget box and select the file you want to open with Qt Designer. Beside the drop down box are buttons for switching between different modes for the form editor, and also buttons for changing the layout of your UI.
  • Object inspector: The object inspector lists all the widgets within your current .ui file. All the widgets are arranged according to their parent-child relationship in the hierarchy. You can select a widget from the object inspector to display its properties in the property editor.
  • Property editor: The property editor displays all the properties of the widget you selected, either from the object inspector window or from the form editor window.
  • Action Editor and Signals & Slots Editor: This window contains two editors, the Action Editor and the Signals & Slots Editor, which can be accessed from the tabs below the window. The action editor is where you create actions that can be added to a menu bar or toolbar in your program's UI.
  • Output panes: Output panes consist of several different windows that display information and output messages related to script compilation and debugging. You can switch between different output panes by pressing the buttons that carry a number before them, such as 1-Issues, 2-Search Results, 3-Application Output, and so on.

There's more…

In the previous section, we discussed how to apply style sheets to Qt widgets through C++ coding. Although that method works really well, most of the time the person in charge of designing the program's UI is not the programmer himself, but a UI designer who specializes in designing user-friendly UIs. In this case, it's better to let the UI designer design the program's layout and style sheet with a different tool and not mess around with the code. Qt provides an all-in-one editor called Qt Creator. Qt Creator consists of several different tools, such as the script editor, compiler, debugger, profiler, and UI editor. The UI editor, which is also called Qt Designer, is the perfect tool for designers to design their program's UI without writing any code. This is because Qt Designer adopts the What-You-See-Is-What-You-Get approach, providing an accurate visual representation of the final result, which means whatever you design with Qt Designer will turn out exactly the same when the program is compiled and run.

The similarities between Qt Style Sheets and CSS are as follows:

    CSS:             h1        { color: red; background-color: white; }
    Qt Style Sheets: QLineEdit { color: red; background-color: white; }

As you can see, both of them contain a selector and a declaration block. Each declaration contains a property and a value, separated by a colon.

In Qt, a style sheet can be applied to a single widget by calling the QObject::setStyleSheet() function in C++ code. For example:

    myPushButton->setStyleSheet("color : blue");

The preceding code will turn the text of a button with the variable name myPushButton to a blue color. You can achieve the same result by writing the declaration in the styleSheet property field in Qt Designer. We will discuss Qt Designer more in the next section.

Qt Style Sheets also supports all the different types of selectors defined in the CSS2 standard, including the universal selector, type selector, class selector, and ID selector, which allows us to apply styling to a very specific individual widget or group of widgets. For instance, if we want to change the background color of a specific line edit widget with the object name usernameEdit, we can do so by using an ID selector to refer to it:

    QLineEdit#usernameEdit { background-color: blue }

To learn about all the selectors available in CSS2 (which are also supported by Qt Style Sheets), please refer to this document: http://www.w3.org/TR/REC-CSS2/selector.html.
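To tie the C++ side together, here is a small hedged sketch of applying a style sheet from code so that it cascades from a parent widget down to its children, using the ID selector shown above; the window layout and widget names are invented for this illustration:

    #include <QApplication>
    #include <QLineEdit>
    #include <QPushButton>
    #include <QWidget>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QWidget window;
        QLineEdit *usernameEdit = new QLineEdit(&window);
        usernameEdit->setObjectName("usernameEdit");   // matched by #usernameEdit below
        QPushButton *loginButton = new QPushButton("Login", &window);
        loginButton->move(0, 40);

        // Set on the parent; the rules cascade down to matching children.
        window.setStyleSheet("QLineEdit#usernameEdit { background-color: blue }"
                             "QPushButton { color: red }");

        window.show();
        return app.exec();
    }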
Basic style sheet customization

In the previous example, you learned how to apply a style sheet to a widget with Qt Designer. Let's push things further by creating a few other types of widgets and changing their style properties to something bizarre for the sake of learning. This time, however, we will not apply the style to every single widget one by one; instead, we will learn to apply the style sheet to the main window and let it be inherited down the hierarchy by all the other widgets, so that the style sheet is easier to manage and maintain in the long run.

How to do it…

First of all, let's remove the style sheet from the push button by selecting it and clicking the small arrow button beside the styleSheet property. This button reverts the property to its default value, which in this case is an empty style sheet. Then, add a few more widgets to the UI by dragging them one by one from the widget box to the form editor. I've added a line edit, a combo box, a horizontal slider, a radio button, and a check box.

For the sake of simplicity, delete the menu bar, main toolbar, and status bar from your UI by selecting them in the object inspector, right clicking, and choosing Remove.

Select the main window either from the form editor or the object inspector, then right click and choose Change Stylesheet to open up the Edit Style Sheet window, and enter the following:

    border: 2px solid gray;
    border-radius: 10px;
    padding: 0 8px;
    background: yellow;

Now what you will see is a completely bizarre-looking UI, with everything covered in yellow with a thick border. This is because the preceding style sheet does not have any selector, which means the style applies to the children widgets of the main window all the way down the hierarchy. To change that, let's try something different:

    QPushButton {
        border: 2px solid gray;
        border-radius: 10px;
        padding: 0 8px;
        background: yellow;
    }

This time, only the push button gets the style described in the preceding code, and all other widgets return to the default styling. You can try to add a few more push buttons to your UI and they will all look the same. This happens because we specifically tell the selector to apply the style to all the widgets of the class called QPushButton. We can also apply the style to just one of the push buttons by mentioning its name in the style sheet, like so:

    QPushButton#pushButton_3 {
        border: 2px solid gray;
        border-radius: 10px;
        padding: 0 8px;
        background: yellow;
    }

Once you understand this method, we can add the following code to the style sheet:

    QPushButton {
        color: red;
        border: 0px;
        padding: 0 8px;
        background: white;
    }

    QPushButton#pushButton_2 {
        border: 1px solid red;
        border-radius: 10px;
    }

    QPushButton#pushButton_3 {
        border: 2px solid gray;
        border-radius: 10px;
        padding: 0 8px;
        background: yellow;
    }

What this does is basically change the style of all the push buttons, as well as some properties of a specific button named pushButton_2; we keep the style sheet of pushButton_3 as it is. The first set of rules changes all widgets of the QPushButton type to a white rectangular button with no border and red text. The second set of rules changes only the border of the specific QPushButton widget named pushButton_2. Notice that the background color and text color of pushButton_2 remain white and red respectively, because we didn't override them in the second set of rules; hence it falls back to the style described in the first set, since that applies to all QPushButton type widgets. Also notice that the text of the third button has changed to red, because we didn't describe the color property in the third set of rules.
After that, create another set of style using the universal selector, like so: * { background: qradialgradient(cx: 0.3, cy: -0.4, fx: 0.3, fy: -0.4, radius: 1.35, stop: 0 #fff, stop: 1 #888); color: rgb(255, 255, 255); border: 1px solid #ffffff; } The universal selector will affect all the widgets regardless of their type. Therefore, the preceding style sheet will apply a nice gradient color to all the widgets' background as well as setting their text as white color and giving them a one-pixel solid outline which is also in white color. Instead of writing the name of the color (that is, white), we can also use the rgb function (rgb(255, 255, 255)) or hex code (#ffffff) to describe the color value. Just as before, the preceding style sheet will not affect the push buttons because we have already given them their own styles which will override the general style described in the universal selector. Just remember that in Qt, the style which is more specific will ultimately be used when there is more than one style having influence on a widget. This is how the UI will look like now: How it works If you are ever involved in web development using HTML and CSS, Qt's style sheet works exactly the same way as CSS. Style Sheet provides the definitions for describing the presentation of the widgets – what the colors are for each element in the widget group, how thick the border should be, and so on and so forth. If you specify the name of the widget to the style sheet, it will change the style of a particular push button widget with the name you provide. All the other widgets will not be affected and will remain as the default style. To change the name of a widget, select the widget either from the form editor or the object inspector and change the property called objectName in the property window. If you have used the ID selector previously to change the style of the widget, changing its object name will break the style sheet and lose the style. To fix this problem, simply change the object name in the style sheet as well. Creating a login screen using style sheet Next, we will learn how to put all the knowledge we learned in the previous example together and create a fake graphical login screen for an imaginary operating system. Style sheet is not the only thing you need to master in order to design a good UI. You will also need to learn how to arrange the widgets neatly using the layout system in Qt Designer. How to do it… The first thing we need to do is design the layout of the graphical login screen before we start doing anything. Planning is very important in order to produce good software. The following is a sample layout design I made to show you how I imagine the login screen will look. Just a simple line drawing like this is sufficient as long as it conveys the message clearly: Now that we know exactly how the login screen should look, let's go back to Qt Designer again. We will be placing the widgets at the top panel first, then the logo and the login form below it. Select the main window and change its width and height from 400 and 300 to 800 and 600 respectively because we'll need a bigger space in which to place all the widgets in a moment. Click and drag a label under theDisplay Widgets category from the widget box to the form editor. Change the objectName property of the label to currentDateTime and change its Text property to the current date and time just for display purposes, such as Monday, 25-10-2015 3:14 PM. 
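Before building the login screen, here is a compact, self-contained illustration of the specificity rule just described. It is my own sketch rather than the book's code, and the widget choices and colours are arbitrary: the universal selector styles every widget, while the more specific QPushButton type selector wins for the button.

#include <QApplication>
#include <QLabel>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    QVBoxLayout *layout = new QVBoxLayout(&window);
    layout->addWidget(new QLabel("Styled by the universal selector"));
    layout->addWidget(new QPushButton("Styled by the QPushButton selector"));

    // The universal selector applies to every widget; the type selector is
    // more specific, so it takes precedence wherever both rules match.
    app.setStyleSheet(
        "* { color: white; background: #205060; border: 1px solid #ffffff; }"
        "QPushButton { color: red; background: white; border: 0px; }");

    window.show();
    return app.exec();
}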
Click and drag a push button under the Buttons category to the form editor. Repeat this process one more time because we have two buttons on the top panel. Rename the two buttons to restartButton and shutdownButton respectively. Next, select the main window and click the small icon button on the form toolbar that says Lay Out Vertically when you mouse-over it. Now you will see the widgets are being automatically arranged on the main window, but it's not exactly what we want yet. Click and drag a horizontal layout widget under the Layouts category to the main window. Click and drag the two push buttons and the text label into the horizontal layout. Now you will see the three widgets being arranged in a horizontal row, but vertically they are located in the middle of the screen. The horizontal arrangement is almost correct, but the vertical position is totally off. Click and drag a vertical spacer from the Spacers category and place it below the horizontal layout we just created previously (below the red rectangular outline). Now you will see all the widgets are being pushed to the top by the spacer. Now, place a horizontal spacer between the text label and the two buttons to keep them apart. This will make the text label always stick to the left and the buttons align to the right. Set both the Horizontal Policy and Vertical Policy properties of the two buttons to Fixed and set the minimumSize property to 55x55. Then, set the text property of the buttons to empty as we will be using icons instead of text. We will learn how to place an icon in the button widgets in the following section: Now your UI should look similar to this: Next, we will be adding the logo by using the following steps: Add a horizontal layout between the top panel and the vertical spacer to serve as a container for the logo. After adding the horizontal layout, you will find the layout is way too thin in height to be able to add any widget to it. This is because the layout is empty and it's being pushed by the vertical spacer below it into zero height. To solve this problem, we can set its vertical margin (either layoutTopMargin or layoutBottomMargin) to be temporarily bigger until a widget is added to the layout. Next, add a label to the horizontal layout that you just created and rename it to logo. We will learn more about how to insert an image into the label to use it as logo in the next section. For now, just empty out the text property and set both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 150x150. Set the vertical margin of the layout back to zero if you haven't done so. The logo now looks invisible, so we will just place a temporary style sheet to make it visible until we add an image to it in the next section. The style sheet is really simple: border: 1px solid; Now your UI should look something similar to this: Now let's create the login form by using the following steps: Add a horizontal layout between the logo's layout and the vertical spacer. Just as we did previously, set the layoutTopMargin property to a bigger number (that is,100) so that you can add a widget to it more easily. After that, add a vertical layout inside the horizontal layout you just created. This layout will be used as a container for the login form. Set its layoutTopMargin to a number lower than that of the horizontal layout (that is, 20) so that we can place widgets in it. Next, right click the vertical layout you just created and choose Morph into -> QWidget. 
The vertical layout is now being converted into an empty widget. This step is essential because we will be adjusting the width and height of the container for the login form. A layout widget does not contain any properties for width and height, but only margins, due to the fact that a layout will expand toward the empty space surrounding it, which does make sense, considering that it does not have any size properties. After you have converted the layout to a QWidget object, it will automatically inherit all the properties from the widget class and so we are now able to adjust its size to suit our needs. Rename the QWidget object which we just converted from the layout to loginForm and change both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 350x200. Since we already placed the loginForm widget inside the horizontal layout, we can now set its layoutTopMargin property back to zero. Add the same style sheet as the logo to the loginForm widget to make it visible temporarily, except this time we need to add an ID selector in front so that it will only apply the style to loginForm and not its children widgets: #loginForm { border: 1px solid; } Now your UI should look something like this: We are not done with the login form yet. Now that we have created the container for the login form, it's time to put more widgets into the form: Place two horizontal layouts into the login form container. We need two layouts as one for the username field and another for the password field. Add a label and a line edit to each of the layouts you just added. Change the text property of the upper label to Username: and the one below as Password:. Then, rename the two line edits as username and password respectively. Add a push button below the password layout and change its text property to Login. After that, rename it as loginButton. You can add a vertical spacer between the password layout and the login button to distance them slightly. After the vertical spacer has been placed, change its sizeType property to Fixed and change the Height to 5. Now, select the loginForm container and set all its margins to 35. This is to make the login form look better by adding some space to all its sides. You can also set the Height property of the username, password, and loginButton widgets to 25 so that they don't look so cramped. Now your UI should look something like this: We're not done yet! As you can see the login form and the logo are both sticking to the top of the main window due to the vertical spacer below them. The logo and the login form should be placed at the center of the main window instead of the top. To fix this problem use the following steps: Add another vertical spacer between the top panel and the logo's layout. This way it will counter the spacer at the bottom which balances out the alignment. If you think that the logo is sticking too close to the login form, you can also add a vertical spacer between the logo's layout and the login form's layout. Set its sizeType property to Fixed and the Height property to 10. Right click the top panel's layout and choose Morph into -> QWidget. Then, rename it topPanel. The reason why the layout has to be converted into QWidget because we cannot apply style sheets to a layout, as it doesn't have any properties other than margins. Currently you can see there is a little bit of margin around the edges of the main window – we don't want that. 
To remove the margins, select the centralWidget object from the object inspector window, which is right under the MainWindow panel, and set all the margin values to zero. At this point, you can run the project by clicking the Run button (with the green arrow icon) to see what your program looks like now. If everything went well, you should see something like this: After we've done the layout, it's time for us to add some fanciness to the UI using style sheets! Since all the important widgets have been given an object name, it's easy for us to apply the style sheets to them from the main window: we only write the style sheets on the main window and let them inherit down the hierarchy tree. Right click on MainWindow from the object inspector window and choose Change Stylesheet. Add the following code to the style sheet:

#centralWidget { background: rgba(32, 80, 96, 100); }

Now you will see that the background of the main window changes its color. We will learn how to use an image for the background in the next section, so the color is just temporary. In Qt, if you want to apply styles to the main window itself, you must apply them to its central widget instead of the main window itself, because the window is just a container. Then, we will add a nice gradient color to the top panel:

#topPanel { background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0, stop:0 rgba(91, 204, 233, 100), stop:1 rgba(32, 80, 96, 100)); }

After that, we will give the login form a black, semi-transparent background and make the corners of its container slightly rounded by setting the border-radius property:

#loginForm { background: rgba(0, 0, 0, 80); border-radius: 8px; }

After we're done applying styles to the specific widgets, we will apply styles to the general types of widgets instead:

QLabel { color: white; }
QLineEdit { border-radius: 3px; }

The preceding style sheets change all the labels' text to a white color, which includes the text on the widgets as well because, internally, Qt uses the same type of label on the widgets that have text on them. We also made the corners of the line edit widgets slightly rounded. Next, we will apply style sheets to all the push buttons on our UI:

QPushButton { color: white; background-color: #27a9e3; border-width: 0px; border-radius: 3px; }

The preceding style sheet changes the text of all the buttons to a white color, sets their background color to blue, and makes their corners slightly rounded as well. To push things even further, we will change the color of the push buttons when we mouse over them, using the hover keyword:

QPushButton:hover { background-color: #66c011; }

The preceding style sheet changes the background color of the push buttons to green on mouse-over. We will talk more about this in the following section. You can further adjust the size and margins of the widgets to make them look even better. Remember to remove the border line of the login form by removing the style sheet which we applied directly to it earlier. Now your login screen should look something like this: How it works This example focuses more on the layout system of Qt. The Qt layout system provides a simple and powerful way of automatically arranging child widgets within a widget to ensure that they make good use of the available space. The spacer items used in the preceding example help to push the widgets contained in a layout outward to create spacing along the width of the spacer item. A minimal hand-written sketch of the same spacer technique follows.
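The sketch is my own illustration in plain C++ (Qt Designer generates equivalent arrangements for you): addStretch() plays the role of the spacer items, pushing the top-panel buttons to the right and centring the login button both horizontally and vertically.

#include <QApplication>
#include <QHBoxLayout>
#include <QLabel>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    QVBoxLayout *outer = new QVBoxLayout(&window);

    // Top panel: the stretch between the label and the buttons acts like the
    // horizontal spacer in Designer, pushing the buttons to the right.
    QHBoxLayout *topPanel = new QHBoxLayout;
    topPanel->addWidget(new QLabel("Monday, 25-10-2015 3:14 PM"));
    topPanel->addStretch();
    topPanel->addWidget(new QPushButton("Restart"));
    topPanel->addWidget(new QPushButton("Shutdown"));
    outer->addLayout(topPanel);

    // Two stretches around a widget centre it within the row, and two
    // stretches around the row centre it vertically in the window.
    QHBoxLayout *centreRow = new QHBoxLayout;
    centreRow->addStretch();
    centreRow->addWidget(new QPushButton("Login"));
    centreRow->addStretch();

    outer->addStretch();
    outer->addLayout(centreRow);
    outer->addStretch();

    window.resize(800, 600);
    window.show();
    return app.exec();
}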
To locate a widget to the middle of the layout, put two spacer items to the layout, one on the left side of the widget and another on the right side of the widget. The widget will then be pushed to the middle of the layout by the two spacers. Summary So in this article we saw how Qt allows us to easily design our program's user interface through a method which most people are familiar with. We also covered the toolkit, Qt Designer, which enables us to design our user interface without writing a single line of code. Finally, we saw how to create a login screen. For more information on Qt5 and C++ you can check other books by Packt, mentioned as follows: Qt 5 Blueprints: https://www.packtpub.com/application-development/qt-5-blueprints Boost C++ Application Development Cookbook: https://www.packtpub.com/application-development/boost-c-application-development-cookbook Learning Boost C++ Libraries: https://www.packtpub.com/application-development/learning-boost-c-libraries Resources for Article: Further resources on this subject: OpenCart Themes: Styling Effects of jQuery Plugins [article] Responsive Web Design [article] Gearing Up for Bootstrap 4 [article]

Deep Learning with Torch

Preetham Sreenivas
29 Sep 2016
10 min read
Torch is a scientific computing framework built on top of Lua[JIT]. The nn package and the ecosystem around it provide a very powerful framework for building deep learning models, striking a perfect balance between speed and flexibility. It is used at Facebook AI Research (FAIR), Twitter Cortex, DeepMind, Yann LeCun's group at NYU, Fei-Fei Li's at Stanford, and many more industrial and academic labs. If you are like me, and don't like writing equations for backpropagation every time you want to try a simple model, Torch is a great solution. With Torch, you can also do pretty much anything you can imagine, whether that is writing custom loss functions, dreaming up an arbitrary acyclic graph network, using multiple GPUs, or loading ImageNet pre-trained models from the Caffe model zoo (yes, you can load models trained in Caffe with a single line). Without further ado, let's jump right into the awesome world of deep learning. Prerequisites Some knowledge of deep learning: A Primer, Bengio's deep learning book, Hinton's Coursera course. A bit of Lua. Its syntax is very C-like and can be picked up fairly quickly if you know Python or JavaScript: Learn Lua in 15 minutes, Torch For Numpy Users. A machine with Torch installed, since this is intended to be hands-on. On Ubuntu 12+ and Mac OS X, installing Torch looks like this:

# in a terminal, run the commands WITHOUT sudo
$ git clone https://github.com/torch/distro.git ~/torch --recursive
$ cd ~/torch; bash install-deps;
$ ./install.sh
# On Linux with bash
$ source ~/.bashrc
# On OSX or in Linux with no bash.
$ source ~/.profile

Once you've installed Torch, you can run a Torch script using:

$ th script.lua
# alternatively you can fire up a terminal torch interpreter using th -i
$ th -i
# and run multiple scripts one by one, the variables will be accessible to other scripts
> dofile 'script1.lua'
> dofile 'script2.lua'
> print(variable) -- variable from either of these scripts.

The sections below are very code intensive, but you can run these commands from Torch's terminal interpreter:

$ th -i

Building a Model: The Basics A module is the basic building block of any Torch model. It has forward and backward methods for the forward and backward passes of backpropagation. You can combine modules using containers, and of course, calling forward and backward on containers propagates inputs and gradients correctly.

-- A simple mlp model with sigmoids
require 'nn'
linear1 = nn.Linear(100,10) -- A linear layer Module
linear2 = nn.Linear(10,2)

-- You can combine modules using containers, sequential is the most used one
model = nn.Sequential() -- A container
model:add(linear1)
model:add(nn.Sigmoid())
model:add(linear2)
model:add(nn.Sigmoid())

-- the forward step
input = torch.rand(100)
target = torch.rand(2)
output = model:forward(input)

Now we need a criterion to measure how well our model is performing, in other words, a loss function. nn.Criterion is the abstract class that all loss functions inherit from. It provides forward and backward methods, computing the loss and the gradients respectively. Torch provides most of the commonly used criterions out of the box, and it isn't much of an effort to write your own either, as the sketch below shows.
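To make that last point concrete, here is a small hand-rolled criterion (mean absolute error). It is purely an illustrative sketch of mine, not code from the article, and nn already ships an equivalent nn.AbsCriterion, so in practice you would rarely need to write this yourself.

-- A sketch of a custom criterion (mean absolute error), only to show the
-- shape of the API that nn.Criterion subclasses implement.
require 'nn'

local L1Criterion, parent = torch.class('nn.L1Criterion', 'nn.Criterion')

function L1Criterion:__init()
   parent.__init(self)
end

function L1Criterion:updateOutput(input, target)
   -- loss = mean(|input - target|)
   self.output = torch.abs(input - target):mean()
   return self.output
end

function L1Criterion:updateGradInput(input, target)
   -- dloss/dinput = sign(input - target) / n
   self.gradInput = torch.sign(input - target):div(input:nElement())
   return self.gradInput
end

-- Usage mirrors the built-in criterions:
-- criterion = nn.L1Criterion()
-- loss = criterion:forward(output, target)
-- grads = criterion:backward(output, target)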
criterion = nn.MSECriterion() -- mean squared error criterion
loss = criterion:forward(output,target)
gradientsAtOutput = criterion:backward(output,target)

-- To perform the backprop step, we need to pass these gradients to the backward
-- method of the model
gradAtInput = model:backward(input,gradientsAtOutput)
lr = 0.1 -- learning rate for our model
model:updateParameters(lr) -- updates the parameters using the lr parameter.

The updateParameters method simply subtracts the gradients, scaled by the learning rate, from the model parameters. This is vanilla stochastic gradient descent. Typically, the updates we do are more complex. For example, if we want to use momentum, we need to keep track of the updates we did in the previous epoch. There are a lot more fancy optimization schemes, such as RMSProp, Adam, Adagrad, and L-BFGS, that do more complex things like adapting the learning rate, momentum factor, and so on. The optim package provides these optimization routines out of the box. Dataset We'll use the German Traffic Sign Recognition Benchmark (GTSRB) dataset. This dataset has 43 classes of traffic signs of varying sizes, illuminations, and occlusions. There are 39,000 training images and 12,000 test images. The traffic signs in each of the images are not centered, and they have a 10% border around them. I have included a shell script for downloading the data along with the code for this tutorial in this github repo.[1]

git clone https://github.com/preethamsp/tutorial.gtsrb.torch.git
cd tutorial.gtsrb.torch/datasets
bash download_gtsrb.sh

Model Let's build a downsized VGG-style model with what we've learned. (The code in the repo is much more polished than the snippets in this tutorial; it is modular and allows you to change the model and/or datasets easily.)

function createModel()
   require 'nn'
   nbClasses = 43
   local net = nn.Sequential()

   --[[building block: adds a convolution layer, batch norm layer and a relu activation to the net]]--
   function ConvBNReLU(nInputPlane, nOutputPlane)
      -- kernel size = (3,3), stride = (1,1), padding = (1,1)
      net:add(nn.SpatialConvolution(nInputPlane, nOutputPlane, 3,3, 1,1, 1,1))
      net:add(nn.SpatialBatchNormalization(nOutputPlane,1e-3))
      net:add(nn.ReLU(true))
   end

   ConvBNReLU(3,32)
   ConvBNReLU(32,32)
   net:add(nn.SpatialMaxPooling(2,2,2,2))
   net:add(nn.Dropout(0.2))

   ConvBNReLU(32,64)
   ConvBNReLU(64,64)
   net:add(nn.SpatialMaxPooling(2,2,2,2))
   net:add(nn.Dropout(0.2))

   ConvBNReLU(64,128)
   ConvBNReLU(128,128)
   net:add(nn.SpatialMaxPooling(2,2,2,2))
   net:add(nn.Dropout(0.2))

   net:add(nn.View(128*6*6))
   net:add(nn.Dropout(0.5))
   net:add(nn.Linear(128*6*6,512))
   net:add(nn.BatchNormalization(512))
   net:add(nn.ReLU(true))
   net:add(nn.Linear(512,nbClasses))
   net:add(nn.LogSoftMax())

   return net
end

The first layer contains three input channels because we're going to pass RGB images (three channels). For grayscale images, the first layer has one input channel. I encourage you to play around and modify the network.[2] There are a bunch of new modules that need some elaboration. The Dropout module randomly deactivates a neuron with some probability. It is known to help generalization by preventing co-adaptation between neurons; that is, a neuron should now depend less on its peers, forcing it to learn a bit more. BatchNormalization is a very recent development. It is known to speed up convergence by normalizing the outputs of a layer to a unit Gaussian using the statistics of a batch. Let's use this model and train it. In the interest of brevity, I'll use these constructs directly; before wiring up the data, you can sanity-check the architecture with a random batch, as sketched below.
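The following snippet is my own addition rather than part of the original walkthrough. It pushes a random batch through the freshly built network to confirm that the shapes line up; the 48x48 crop size is inferred from the nn.View(128*6*6) line (three 2x2 poolings reduce 48 to 24, 12, and then 6), so adjust it if your preprocessing differs.

-- Quick shape check with a dummy batch of four 3x48x48 images.
local net = createModel()
local dummy = torch.rand(4, 3, 48, 48)
local out = net:forward(dummy)
print(out:size())              -- expect 4x43: batch size x number of classes
print(torch.exp(out[1]):sum()) -- LogSoftMax outputs log-probabilities, so this is ~1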
The code describing these constructs is in datasets/gtsrb.lua. DataGen:trainGenerator(batchSize) DataGen:valGenerator(batchSize) These provide iterators over batches of train and test data respectively. You'll find that the model code (models/vgg_small.lua) in the repo is different. It is designed to allow you to experiment quickly. Using optim to train the model Using a stochastic gradient descent (sgd) from the optim package to minimize a function f looks like this: optim.sgd(feval, params, optimState) Where: feval: A user-defined function that respects the API: f, df/params = feval(params) params: The current parameter vector (a 1D torch.Tensor) optimState: A table of parameters, and state variables, dependent upon the algorithm Since we are optimizing the loss of the neural network, parameters should be the weights and other parameters of the network. We get these as a flattened 1D tensor using model:getParameters. It also returns a tensor containing the gradients of these parameters. This is useful in creating the feval function above. model = createModel() criterion = nn.ClassNLLCriterion() -- criterion we are optimizing: negative log loss params, gradParams = model:getParameters() local function feval() -- criterion.output stores the latest output of criterion return criterion.output, gradParams end We need to create an optimState table and initialize it with a configuration of our optimizer like learning rate and momentum: optimState = { learningRate = 0.01, momentum = 0.9, dampening = 0.0, nesterov = true, } Now, an update to the model should do the following: Compute the output of the model using model:forward(). Compute the loss and the gradients at output layer using criterion:forward() and criterion:backward() respectively. Update the gradients of the model parameters using model:backward(). Update the model using optim.sgd. -- Forward pass output = model:forward(input) loss = criterion:forward(output, target) -- Backward pass critGrad = criterion:backward(output, target) model:backward(input, critGrad) -- Updates optim.sgd(feval, params, optimState) Note: The order above should be respected, as backward assumes forward was run just before it. Changing this order might result in gradients not being computed correctly. Putting it all together Let's put it all together and write a function that trains the model for an epoch. We'll create a loop that iterates over the train data in batches and updates the model. 
model = createModel() criterion = nn.ClassNLLCriterion() dataGen = DataGen('datasets/GTSRB/') -- Data generator params, gradParams = model:getParameters() batchSize = 32 optimState = { learningRate = 0.01, momentum = 0.9, dampening = 0.0, nesterov = true, } function train() -- Dropout and BN behave differently during training and testing -- So, switch to training mode model:training() local function feval() return criterion.output, gradParams end for input, target in dataGen:trainGenerator(batchSize) do -- Forward pass local output = model:forward(input) local loss = criterion:forward(output, target) -- Backward pass model:zeroGradParameters() -- clear grads from previous update local critGrad = criterion:backward(output, target) model:backward(input, critGrad) -- Updates optim.sgd(feval, params, optimState) end end The test function is extremely similar, except that we don't need to update the parameters: confusion = optim.ConfusionMatrix(nbClasses) -- to calculate accuracies function test() model:evaluate() -- switch to evaluate mode confusion:zero() -- clear confusion matrix for input, target in dataGen:valGenerator(batchSize) do local output = model:forward(input) confusion:batchAdd(output, target) end confusion:updateValids() local test_acc = confusion.totalValid * 100 print(('Test accuracy: %.2f'):format(test_acc)) end Now that everything is set, you can train your network and print the test accuracies: max_epoch = 20 for i = 1,20 do train() test() end An epoch takes around 30 seconds on a TitanX and gives about 97.7% accuracy after 20 epochs. This is a very basic model and honestly I haven't tried optimizing the parameters much. There are a lot of things that can be done to crank up the accuracies. Try different processing procedures. Experiment with the net structure. Different weight initializations, and learning rate schedules. An Ensemble of different models; for example, train multiple models and take a majority vote. You can have a look at the state of the art on this dataset here. They achieve upwards of 99.5% accuracy using a clever method to boost the geometric variation of CNNs. Conclusion We looked at how to build a basic mlp in Torch. We then moved on to building a Convolutional Neural Network and trained it to solve a real-world problem of traffic sign recognition. For a beginner, Torch/LUA might not be as easy. But once you get a hang of it, you have access to a deep learning framework which is very flexible yet fast. You will be able to easily reproduce latest research or try new stuff unlike in rigid frameworks like keras or nolearn. I encourage you to give it a fair try if you are going anywhere near deep learning. Resources Torch Cheat Sheet Awesome Torch Torch Blog Facebook's Resnet Code Oxford's ML Course Practicals Learn torch from Github repos About the author Preetham Sreenivas is a data scientist at Fractal Analytics. Prior to that, he was a software engineer at Directi.

Directory Services

Packt
29 Sep 2016
11 min read
In this article by Gregory Boyce, the author of Linux Networking Cookbook, we will focus on getting you started by configuring Samba as an Active Directory compatible directory service and joining a Linux box to the domain. (For more resources related to this topic, see here.) If you have worked in corporate environments, then you are probably familiar with a directory service such as Active Directory. What you may not realize is that Samba, originally created as an open source implementation of Windows file sharing (SMB/CIFS), can now operate as an Active Directory compatible directory service. It can even act as a Backup Domain Controller (BDC) in an Active Directory domain. In this article, we will configure Samba to centralize authentication for your network services. We will also configure a Linux client to leverage it for authentication and set up a RADIUS server, which uses the directory server for authentication. Configuring Samba as an Active Directory compatible directory service As of Samba 4.0, Samba has the ability to act as a primary domain controller (PDC) in a manner that is compatible with Active Directory. How to do it… Installing on Ubuntu 14.04: Configure your system with a static IP address and update /etc/hosts to point to that IP address rather than localhost. Make sure that your time is kept up to date by installing an NTP client:

sudo apt-get install ntp

Pre-emptively disable smbd/nmbd from running automatically:

sudo bash -c 'echo "manual" > /etc/init/nmbd.override'
sudo bash -c 'echo "manual" > /etc/init/smbd.override'

Install Samba and smbclient:

sudo apt-get install samba smbclient

Remove the stock smb.conf:

sudo rm /etc/samba/smb.conf

Provision the domain:

sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ

Save the randomly generated admin password. Symlink the AD krb5.conf to /etc:

sudo ln -sf /var/lib/samba/private/krb5.conf /etc/krb5.conf

Edit /etc/bind/named.conf.local to allow Samba to publish data:

dlz "AD DNS Zone" {
    # For BIND 9.9.0
    database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so";
};

Edit /etc/bind/named.conf.options to use the Kerberos keytab within the options stanza:

tkey-gssapi-keytab "/var/lib/samba/private/dns.keytab";

Modify your zone record to allow updates from Samba:

zone "example.org" {
    type master;
    notify no;
    file "/var/lib/bind/example.org.db";
    update-policy {
        grant AD.EXAMPLE.ORG ms-self * A AAAA;
        grant Administrator@AD.EXAMPLE.ORG wildcard * A AAAA SRV CNAME;
        grant SERVER$@ad.EXAMPLE.ORG wildcard * A AAAA SRV CNAME;
        grant DDNS wildcard * A AAAA SRV CNAME;
    };
};

Modify /etc/apparmor.d/usr.sbin.named to allow bind9 access to a few additional resources within the /usr/sbin/named stanza:

/var/lib/samba/private/dns/** rw,
/var/lib/samba/private/named.conf r,
/var/lib/samba/private/named.conf.update r,
/var/lib/samba/private/dns.keytab rk,
/var/lib/samba/private/krb5.conf r,
/var/tmp/* rw,
/dev/urandom rw,

Reload the apparmor configuration:

sudo service apparmor restart

Restart bind9:

sudo service bind9 restart

Restart the samba service:

sudo service samba-ad-dc restart

Installing on CentOS 7: Unfortunately, setting up a domain controller on CentOS 7 is not possible using the default packages provided by the distribution. This is because Samba uses the Heimdal implementation of Kerberos, while Red Hat, CentOS, and Fedora use the MIT Kerberos 5 implementation. Before moving on to how it works, it is worth smoke-testing the new domain controller; a few quick checks are sketched below.
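The checks below are the usual smoke tests for a freshly provisioned Samba AD DC; they are my own suggestion rather than part of the original recipe, and they assume the realm and domain names used above. The Administrator password is the one samba-tool printed during provisioning, and the host name in the last command should be replaced with your server's actual name.

# The DC should publish the netlogon and sysvol shares
smbclient -L localhost -U Administrator

# Kerberos authentication should work (the realm must be written in upper case)
kinit Administrator@AD.EXAMPLE.ORG
klist

# The SRV records that AD clients rely on should be served by bind9
host -t SRV _ldap._tcp.ad.example.org
host -t SRV _kerberos._udp.ad.example.org
host -t A yourdcname.ad.example.org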
How it works… The process for provisioning Samba to act as an Active Directory compatible domain is deceptively easy given all that is happening on the backend. Let us look at some of the expectation and see how we are going to meet them as well as what is happening behind the scenes. Active Directory requirements Successfully running an Active Directory Forest has a number of requirements need to be in place: Synchronized time: AD uses Kerberos for authentication, which can be very sensitive to time skews. In our case, we are going to use ntpd, but other options including openntpd or chrony are also available. The ability to manage DNS records: AD automatically generates a number of DNS records, including SRV records that tell clients of the domain how to locate the domain controller itself. A static IP address: Due to a number of pieces of the AD functionality being very dependent on the specific IP address of your domain controller, it is recommended that you use a static IP address. A static DHCP lease may work as long as you are certain the IP address will not change. A rogue DHCP server on the network for example may cause difficulties. Selecting a realm and domain name The Samba team has published some very useful information regarding the proper naming of your realm and your domain along with a link to Microsoft's best practices on the subject. It may be found on: https://wiki.samba.org/index.php/Active_Directory_Naming_FAQ. The short version is that your domain should be globally unique while the realm should be unique within the layer 2 broadcast domain of your network. Preferably, the domain should be a subdomain of a registered domain owned by you. This ensures that you can buy SSL certificates if necessary and you will not experience conflicts with outside resources. Samba-tool will default to using the first part of the domain you specified as the realm, ad from ad.example.org. The Samba group instead recommends using the second part, example in our case, as it is more likely to be locally unique. Using a subdomain of your purchased domain rather than a domain itself makes life easier when splitting internal DNS records, which are managed by your AD instance from the more publicly accessible external names. Using Samba-Tool Samba-tool can work in an automated fashion with command line options, or it can operate in interactive mode. We are going to specify the options that we want to use on the command line: sudo samba-tool domain provision --realm ad.example.org --domain example --use-rfc2307 --option="interfaces=lo eth1" --option="bind interfaces only=yes" --dns-backend BIND9_DLZ The realm and domain options here specify the name for your domain as described above. Since we are going to be supporting Linux systems, we are going to want the AD schema to support RFC2307 settings, which allow for definitions for UID, GID, shell, home directory, and other settings, which Unix systems will require. The pair of options specified on our command-line is used for restricting what interfaces Samba will bind to. While not strictly required, it is a good practice to keep your Samba services bound to the internal interfaces. Finally, samba wants to be able to manage your DNS in order to add systems to the zone automatically. This is handled by a variety of available DNS backends. These include: SAMBA_INTERNAL: This is a built-in method where a samba process acts as a DNS service. This is a good quick option for small networks. 
BIND9_DLZ: This option allows you to tie your local named/bind9 instance in with your Samba server. It introduces a named plugin for bind versions 9.8.x/9.9.x to support reading host information directly out of the Samba data stores. BIND_FLATFILE: This option is largely deprecated in favor of BIND9_DLZ, but it is still an option if you are running an older version of bind. It causes the Samba services to write out zone files periodically, which Bind may use. Bind configuration Now that Samba is set up to support BIND9_DLZ, we need to configure named to leverage it. There are a few pieces to this support: tkey-gssapi-keytab: This settings in your named options section defines the Kerberos key tab file to use for DNS updates. This allows the Samba server to communicate with the Bind server in order to let it know about zone file changes. dlz setting: This tells bind to load the dynamic module which Samba provides in order to have it read from Samba's data files. Zone updating: In order to be able to update the zone file, you need to switch from an allow-update definition to update-policy, which allows more complex definitions including Kerberos based updates. Apparmor rules changes: Ubuntu uses a Linux Security Module called Apparmor, which allows you to define the allowed actions of a particular executable. Apparmor contains rules restricting the access rights of the named process, but these existing rules do not account for integration with Samba. We need to adjust the existing rules to allow named to access some additional required resources. Joining a Linux box to the domain In order to participate in an AD style domain, you must have the machine joined to the domain using Administrator credentials. This will create the machine's account within the database, and provide credentials to the system for querying the ldap server. How to do it… Install Samba, heimdal-clients, and winbind: sudo apt-get install winbind Populate /etc/samba/smb.conf: [global] workgroup = EXAMPLE realm = ad.example.org security = ads idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = yes winbind enum groups = yes template homedir = /home/%U template shell = /bin/bash winbind use default domain = yes Join the system to the domain: sudo net ads join -U Administrator Configure the system to use winbind for account information in /etc/nsswitch.conf: passwd: compat winbind group: compat winbind How it works… Joining a Linux box to an AD domain, you need to utilize winbind that provides a PAM interface for interacting with Windows RPC calls for user authentication. Winbind requires that you set up your smb.conf file, and then join the domain before it functions. Nsswitch.conf controls how glibc attempts to look up particular types of information. In our case, we are modifying them to talk to winbind for user and group information. Most of the actual logic is in the smb.conf file itself, so let us look: Define the AD Domain we're working with, including both the workgroup/domain and the realm: workgroup = EXAMPLE realm = ad.example.org Now we tell Samba to use Active Directory Services (ADS) security mode: security = ads AD domains use Windows Security IDs (SID) for providing unique user and group identifiers. In order to be compatible with Linux systems, we need to map those SIDs to UIDs and GIDs. 
Since we're only dealing with a single client for now, we're going to let the local Samba instance map the SIDs to UIDs and GIDs from a range which we provide:

idmap uid = 10000-20000
idmap gid = 10000-20000

Some Unix utilities such as finger depend on the ability to loop through all of the user/group instances. On a large AD domain, this can be far too many entries, so winbind suppresses this capability by default. For now, we're going to want to enable it:

winbind enum users = yes
winbind enum groups = yes

Unless you go through specific steps to populate your AD domain with per-user home directory and shell information, Winbind will use templates for home directories and shells. We'll want to define these templates in order to avoid the defaults of /home/%D/%U (/home/EXAMPLE/user) and /bin/false:

template homedir = /home/%U
template shell = /bin/bash

The default winbind configuration takes users in the form of username@example.org rather than the more Unix-style plain username. Let's override that setting:

winbind use default domain = yes

Joining a Windows box to the domain While not a Linux configuration topic, the most common use for an Active Directory domain is to manage a network of Windows systems. While the overarching topic of managing Windows via an AD domain is too large and out of scope for this article, let's look at how we can join a Windows system to our new domain. How to do it… Click on Start and go to Settings. Click on System. Select About. Select Join a Domain. Type in the name of your domain; ad.example.org in our case. Enter your administrator credentials for the domain. Select a user who will own the system. How it works… When you tell your Windows system to join an AD domain, it first attempts to find the domain by looking up a series of SRV records for the domain, including _ldap._tcp.dc._msdcs.ad.example.org, in order to determine which hosts to connect to within the domain for authentication purposes. From there, a connection is established. Resources for Article: Further resources on this subject: OpenStack Networking in a Nutshell [article] Zabbix Configuration [article] Supporting hypervisors by OpenNebula [article]

How to Apply Themes to Sails Applications, Part 1

Luis Lobo
29 Sep 2016
8 min read
The Sails Framework is a popular MVC framework that is designed for building practical, production-ready Node.js apps. Themes customize the look and feel of your app, but Sails does not come with a configuration or setting for handling themes by itself. This two-part post shows one of the ways you can set up theming for your Sails application, thus making use of some of Sails’ capabilities. You may have an application that needs to handle theming for different reasons, like custom branding, licensing, dynamic theme configuration, and so on. You can adjust the theming of your application, based on external factors, like patterns in the domain of the site you are browsing. Imagine you have an application that handles deliveries that you customize per client. So, your app renders the default theme when browsed as http://www.smartdelivery.com, but when logged in as a customer, let's say, "Burrito", it changes the domain name as http://burrito.smartdelivery.com. In this series we make use of Less as our language to define our CSS. Sails already handles Less right out of the box. The default Less file is located in /assets/styles/importer.less. We will also use Bootstrap as our base CSS Framework, importing its Less file into our importer.less file. The technique showed here consists of having a base CSS, and a theme CSS that varies according to the host name. Step 1 - Adding Bootstrap to Sails We use Bower to add Bootstrap to our project. First, install it by issuing the following command: npm install bower --save-dev Then, initialize the Bower configuration file. node_modules/bower/bin/bower init This command allows us to configure our bower.json file. Answer the questions asked by bower. ? name sails-themed-application ? description Sails Themed Application ? main file app.js ? keywords ? authors lobo ? license MIT ? homepage ? set currently installed components as dependencies? Yes ? add commonly ignored files to ignore list? Yes ? would you like to mark this package as private which prevents it from being accidentally published to the registry? No { name: 'sails-themed-application', description: 'Sails Themed Application', main: 'app.js', authors: [ 'lobo' ], license: 'MIT', homepage: '', ignore: [ '**/.*', 'node_modules', 'bower_components', 'assets/vendor', 'test', 'tests' ] } This generates a bower.json file in the root of your project. Now we need to tell bower to install everything in a specific directory. Create a file named .bowerrc and put this configuration into it: {"directory" : "assets/vendor"} Finally, install Bootstrap: node_modules/bower/bin/bower install bootstrap --save --production This action creates a folder in assets named vendor, with boostrap inside of it. 
Since Bootstrap uses JQuery, you also have a jquery folder: ├── api │ ├── controllers │ ├── models │ ├── policies │ ├── responses │ └── services ├── assets │ ├── images │ ├── js │ │ └── dependencies │ ├── styles │ ├── templates │ ├── themes │ └── vendor │ ├── bootstrap │ │ ├── dist │ │ │ ├── css │ │ │ ├── fonts │ │ │ └── js │ │ ├── fonts │ │ ├── grunt │ │ ├── js │ │ ├── less │ │ │ └── mixins │ │ └── nuget │ └── jquery │ ├── dist │ ├── external │ │ └── sizzle │ │ └── dist │ └── src │ ├── ajax │ │ └── var │ ├── attributes │ ├── core │ │ └── var │ ├── css │ │ └── var │ ├── data │ │ └── var │ ├── effects │ ├── event │ ├── exports │ ├── manipulation │ │ └── var │ ├── queue │ ├── traversing │ │ └── var │ └── var ├── config │ ├── env │ └── locales ├── tasks │ ├── config │ └── register └── views We need now to add Bootstrap into our importer. Edit /assets/styles/importer.less and add this instruction at the end of it: @import "../vendor/bootstrap/less/bootstrap.less"; Now you need to tell Sails where to import Bootstrap and JQuery JavaScript files from. Edit /tasks/pipeline.js and add the following code after it loads the sails.io.js file: // Load sails.io before everything else 'js/dependencies/sails.io.js', // <ADD THESE LINES> // JQuery JS 'vendor/jquery/dist/jquery.min.js', // Bootstrap JS 'vendor/bootstrap/dist/js/bootstrap.min.js', // </ADD THESE LINES> Now you have to edit your views layout and pages to use the Bootstrap style. In this series I created an application from scratch, so I have the default views and layouts. In your layout, insert the following line after your tag: <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css"> This loads a second CSS file, which defaults to /themes/default.css, into your views. As a sample, here are the /views/layout.ejs and /views/homepage.ejs I changed (the text under the headings is random text): /views/layout.ejs <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags --> <title><%= typeof title == 'undefined' ? 'Sails Themed Application' : title %></title> <!--STYLES--> <link rel="stylesheet" href="/styles/importer.css"> <!--STYLES END--> <!-- THIS IS WHERE THE THEME CSS IS LOADED --> <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css"> </head> <body> <%- body %> <!--TEMPLATES--> <!--TEMPLATES END--> <!--SCRIPTS--> <script src="/js/dependencies/sails.io.js"></script> <script src="/vendor/jquery/dist/jquery.min.js"></script> <script src="/vendor/bootstrap/dist/js/bootstrap.min.js"></script> <!--SCRIPTS END--> </body> </html> Notice the lines after the <!--STYLES END--> tag. 
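The layout above pulls in /themes/<theme>.css based on a theme view local, and Part 2 of this series builds the proper theme hook that populates it. Purely to make the idea concrete, here is one possible sketch of my own (not the article's code) that derives the local from the request's host name; the customMiddleware option and the fallback rules are assumptions on my part, so treat it as illustrative only.

// config/http.js (illustrative sketch; the real theme hook is built in Part 2)
module.exports.http = {
  customMiddleware: function (app) {
    app.use(function (req, res, next) {
      var host = req.headers.host || '';   // e.g. "burrito.smartdelivery.com:1337"
      var subdomain = host.split('.')[0];
      // Use the sub-domain as the theme name, falling back to "default"
      // for "www" or local development (the bare-domain check is simplified).
      if (!subdomain || subdomain === 'www' || subdomain.indexOf('localhost') === 0) {
        res.locals.theme = 'default';
      } else {
        res.locals.theme = subdomain;
      }
      next();
    });
  }
};

With something like this in place, browsing http://burrito.smartdelivery.com would render the layout with /themes/burrito.css, while plain localhost keeps /themes/default.css.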
/views/homepage.ejs <nav class="navbar navbar-inverse navbar-fixed-top"> <div class="container"> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#">Project name</a> </div> <div id="navbar" class="navbar-collapse collapse"> <form class="navbar-form navbar-right"> <div class="form-group"> <input type="text" placeholder="Email" class="form-control"> </div> <div class="form-group"> <input type="password" placeholder="Password" class="form-control"> </div> <button type="submit" class="btn btn-success">Sign in</button> </form> </div><!--/.navbar-collapse --> </div> </nav> <!-- Main jumbotron for a primary marketing message or call to action --> <div class="jumbotron"> <div class="container"> <h1>Hello, world!</h1> <p>This is a template for a simple marketing or informational website. It includes a large callout called a jumbotron and three supporting pieces of content. Use it as a starting point to create something more unique.</p> <p><a class="btn btn-primary btn-lg" href="#" role="button">Learn more &raquo;</a></p> </div> </div> <div class="container"> <!-- Example row of columns --> <div class="row"> <div class="col-md-4"> <h2>Heading</h2> <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> <div class="col-md-4"> <h2>Heading</h2> <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> <div class="col-md-4"> <h2>Heading</h2> <p>Donec sed odio dui. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Vestibulum id ligula porta felis euismod semper. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> </div> <hr> <footer> <p>&copy; 2015 Company, Inc.</p> </footer> </div> <!-- /container --> You can now lift Sails and see your Bootstrapped Sails application. Now that we have our Bootstrapped Sails app set up, in Part 2 we will compile our theme’s CSS and the necessary Less files, and we will set bup the theme Sails hook to complete our application. About the author Luis Lobo Borobia is the CTO at FictionCity.NET, is a mentor and advisor, independent software engineer consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing Software products and solutions, frameworks, and platforms for several kinds of industries. In the last years he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

Getting Started with KeystoneJS

Jake Stockwin
29 Sep 2016
5 min read
KeystoneJS is a content management framework for Node.js. It is an easy-to-use system that does all the hard work of making a website for you. This article works through a simple example to get you started with KeystoneJS. Initial Setup KeystoneJS comes paired with a generator to make setup simple. You'll need to have Node.js and MongoDB installed before you begin. To generate your site, all you need to do is run npm install -g generator-keystone and then yo keystone. You'll be asked a few questions, and after a while your site is ready. Running node keystone, you'll find a site with a ready-made blog, gallery, and contact form, but the main feature of KeystoneJS is the admin UI. Navigate to localhost:3000/keystone and sign in with the default credentials, and you'll be able to manage all the content on your site from a user-friendly interface. Take a look around your site and the code so that you're familiar with it; it's also worth having a read through the documentation. Keystone Models You now have a site up and running, but what if you need more than just a blog and a gallery? Perhaps you would like a page to display upcoming events. No problem: to achieve this, we create a model. Open the models folder in your file browser and you will be able to see the existing models, User.js for example. We're going to add our own model; create a new file called Event.js in the models folder. Say our event should have both a start and an end time, a name, and a description. Then our model will look like this:

var keystone = require('keystone');
var Types = keystone.Field.Types;

var Event = new keystone.List('Event');

Event.add({
    name: { type: Types.Name, required: true, index: true },
    description: { type: Types.Textarea },
    start: { type: Types.Datetime },
    end: { type: Types.Datetime }
});

Event.register();

Now restart your app. Under the hood, KeystoneJS is managing all the database schemas for you, and if you sign back in to the admin UI, you'll see that there is now a page to manage your events. All that was required was to create a new model, and Keystone wrote the entire backend for you; this shows the power of Keystone. You don't have to spend your time writing the backend for your site, and are free to focus on the client-facing side of things. Routes and Templates We have created our model and are now able to log in to the admin UI and manage our events. However, we still need to display these events to our website viewers. This is done in two parts: a route obtains the data from the database and makes it available to the template, which displays it. First, create the route. Open a new file, routes/views/events.js, and enter the following code:

var keystone = require('keystone');

exports = module.exports = function(req, res) {
    var view = new keystone.View(req, res);
    var locals = res.locals;

    // Set locals
    locals.section = 'events';
    locals.data = { events: [] };

    // Load the events
    view.on('init', function(next) {
        var q = keystone.list('Event').model.find();
        q.exec(function(err, results) {
            locals.data.events = results;
            next(err);
        });
    });

    // Render the view
    view.render('events');
};

You can now create your template. The events will be available to the template as data.events, because we have set locals.data.events in the route. KeystoneJS gives you the option of which template engine to use. The default is Jade, so we will use this as the example here, but you can easily adapt the code to any other engine; if you get stuck, a good place to start is the blog post template.
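The query built in view.on('init', ...) is a plain Mongoose query, so it can be refined before exec is called. The tweak below is an optional addition of mine, not part of the original walkthrough; it assumes you only want events that haven't started yet, sorted by start date and capped at 20 results.

// Inside view.on('init', ...): fetch only upcoming events, earliest first.
view.on('init', function(next) {
    var q = keystone.list('Event').model.find()
        .where('start').gte(new Date())   // hide past events
        .sort('start')                    // soonest first
        .limit(20);                       // keep the page small

    q.exec(function(err, results) {
        locals.data.events = results;
        next(err);
    });
});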
Templates are stored in templates/views, so create templates/views/events.jade with the following code:

extends ../layouts/default

mixin event(event)
    h2= event.name.full
    p
        if event.start
            | start: #{event._.start.format('MMMM Do, YYYY')}
    p
        if event.end
            | end: #{event._.end.format('MMMM Do, YYYY')}
    p
        if event.description
            | details: #{event.description}

block content
    .container: .row
        .events
            each event in data.events
                +event(event)

This is by no means a well-designed page, but it will do for this example. We're almost done, but if you go to /events in your web browser, you'll get a 404 error. That's because we haven't told our route controllers about the new page yet. This is done in routes/index.js, and you just need to add the line app.get('/events', routes.views.events);. This tells your app to send any GET requests for /events to your new route, which in turn renders the new template. You can also add your new events page to your header by simply adding { label: 'Events', key: 'events', href: '/events' }, to routes/middleware.js. The key in this should match res.locals.section in the route we created. Conclusion By simply running yo keystone and adding just over 50 lines of code, we've created an events page to display our events. You can log in to the admin UI and create, update, and delete events, and your website will update automatically. This really highlights what Keystone does. We don't have to spend our time configuring all the node modules and writing the backend of our server; Keystone has done all the work for us. This means we can dedicate all our time to making our client-facing website look as good as possible. About the Author Jake Stockwin is a third-year mathematics and statistics undergraduate at the University of Oxford, and a novice full-stack developer. He has a keen interest in programming, both in his academic studies and in his spare time. Next year, he plans to write his dissertation on reinforcement learning, an area of machine learning. Over the past few months, he has designed websites for various clients and has begun developing in Node.js.

Google Forms for Multiple Choice and Fill-in-the-blank Assignments

Packt
28 Sep 2016
14 min read
In this article by Michael Zhang, the author of the book Teaching with Google Classroom, we will see how to create multiple choice and fill-in-the-blank assignments using Google Forms. (For more resources related to this topic, see here.) The third-party app, Flubaroo will help grade multiple choice, fill-in-the-blank, and numeric questions. Before you can use Flubaroo, you will need to create the assignment and deploy it on Google Classroom. The Form app of Google within the Google Apps for Education (GAFE) allows you to create online surveys, which you can use as assignments. Google Forms then outputs the values of the form into a Google Sheet, where the Google Sheet add-on, Flubaroo grades the assignment. After using Google Forms and Flubaroo for assignments, you may decide to also use it for exams. However, while Google Forms provides a means for creating the assessment and Google Classroom allows you to easily distribute it to your students, there is no method to maintain security of the assessment. Therefore, if you choose to use this tool for summative assessment, you will need to determine an appropriate level of security. (Often, there is nothing that prevents students from opening a new tab and searching for an answer or from messaging classmates.) For example, in my classroom, I adjusted the desks so that there was room at the back of the classroom to pace during a summative assessment. Additionally, some school labs include a teacher's desktop that has software to monitor student desktops. Whatever method you choose, take precautions to ensure the authenticity of student results when assessing students online. Google Forms is a vast Google App that requires its own book to fully explore its functionality. Therefore, the various features you will explore in this article will focus on the scope of creating and assessing multiple choice and fill-in-the-blank assignments. However, once you are familiar with Google Forms, you will find additional applications. For example, in my school, I work with the administration to create forms to collect survey data from stakeholders such as staff, students, and parents. Recently, for our school's annual Open House, I created a form to record the number of student volunteers so that enough food for the volunteers would be ordered. Also, during our school's major fundraiser, I developed a Google Form for students to record donations so that reports could be generated from the information more quickly than ever before. The possibilities of using Google Forms within a school environment are endless! In this article, you will explore the following topics: Creating an assignment with Google Forms Installing the Flubaroo Google Sheets add-on Assessing an assignment with Flubaroo Creating a Google Form Since Google Forms is not as well known as apps such as Gmail or Google Calendar, it may not be immediately visible in the App Launcher. To create a Google Form, follow these instructions: In the App Launcher, click on the More section at the bottom: Click on the Google Forms icon: If there is still no Google Forms app icon, open a new tab and type forms.google.com into the address bar. Click on the Blank template to create a new Google Form: Google Forms has a recent update to Google's Material Design interface. This article will use screenshots from the new Google Forms look. 
Therefore, if you see a banner with Try the new Google Forms, click on the banner to launch the new Google Forms App: To name the Google Form, click on Untitled form in the top-left corner and type in the name. This will also change the name of the form. If necessary, you can click on the form title to change the title afterwards: Optionally, you can add a description to the Google Form directly below the form title: Often, I use the description to provide further instructions or information such as time limit, whether dictionaries or other reference books are permissible or even website addresses to where they can find information related to the assignment. Adding questions to a Google form By default, each new Google Form will already have a multiple choice card inserted into the form. In order to access the options, click on anywhere along the white area beside Untitled Question: The question will expand to form a question card where you can make changes to the question: Type the question stem in the Untitled Question line. Then, click on Option 1 to create a field to change it to a selection: To add additional selectors, click on the Add option text below the current selector or simply press the Enter key on the keyboard to begin the next selector. Because of the large number of options in a question card, the following screenshot provides a brief description of these options: A: Question title B: Question options C: Move the option indicator. Hovering your mouse over an option will show this indicator that you can click and drag to reorder your options. D: Move the question indicator. Clicking and dragging this indicator will allow you to reorder your questions within the assignment. E: Question type drop-down menu. There are several types of questions you can choose from. However, not all will work with the Flubaroo grading add-on. The following screenshot displays all question types available: F: Remove option icon. G: Duplicate question button. Google Forms will make a copy of the current question. H: Delete question button. I: Required question switch. By enabling this option, students must answer this question in order to complete the assignment. J: More options menu. Depending on the type of question, this section will provide options to enable a hint field below the question title field, create nonlinear multiple choice assignments, and validate data entered into a specific field. Flubaroo grades the assignment from the Google Sheet that Google Forms creates. It matches the responses of the students with an answer key. While there is tolerance for case sensitivity and a range of number values, it cannot effectively grade answers in the sentence or paragraph form. Therefore, use only short answers for the fill-in-the-blank or numerical response type questions and avoid using paragraph questions altogether for Flubaroo graded assignments. Once you have completed editing your question, you can use the side menu to add additional questions to your assignment. You can also add section headings, images, YouTube videos, and additional sections to your assignment. The following screenshot provides a brief legend for the icons:   To create a fill-in-the-blank question, use the short answer question type. When writing the question stem, use underscores to indicate where the blank is in the question. You may need to adjust the wording of your fill-in-the-blank questions when using Google Forms. 
Following is an example of a fill-in-the-blank question: Identify your students Be sure to include fields for your students name and e-mail address. The e-mail address is required so that Flubaroo can e-mail your student their responses when complete. Google Forms within GAFE also has an Automatically collect the respondent's username option in the Google Form's settings, found in the gear icon. If you use the automatic username collection, you do not need to include the name and e-mail fields. Changing the theme of a Google form Once you have all the questions in your Google Form, you can change the look and feel of the Google Form. To change the theme of your assignment, use the following steps: Click on the paint pallet icon in the top-right corner of the Google Form: For colors, select the desired color from the options available. If you want to use an image theme, click on the image icon at the bottom right of the menu: Choose a theme image. You can narrow the type of theme visible by clicking on the appropriate category in the left sidebar: Another option is to upload your own image as the theme. Click on the Upload photos option in the sidebar or select one image from your Google Photos using the Your Albums option. The application for Google Forms within the classroom is vast. With the preceding features, you can add images and videos to your Google Form. Furthermore, in conjunction with the Google Classroom assignments, you can add both a Google Doc and a Google Form to the same assignment. An example of an application is to create an assignment in Google Classroom where students must first watch the attached YouTube video and then answer the questions in the Google Form. Then Flubaroo will grade the assignment and you can e-mail the students their results. Assigning the Google Form in Google classroom Before you assign your Google Form to your students, preview the form and create a key for the assignment by filling out the form first. By doing this first, you will catch any errors before sending the assignment to your students, and it will be easier to find when you have to grade the assignment later. Click on the eye shaped preview icon in the top-right corner of the Google form to go to the live form: Fill out the form with all the correct answers. To find this entry later, I usually enter KEY in the name field and my own e-mail address for the e-mail field. Now the Google Form is ready to be assigned in Google Classroom. In Google Classroom, once students have submitted a Google Form, Google Classroom will automatically mark the assignment as turned in. Therefore, if you are adding multiple files to an assignment, add the Google Form last and avoid adding multiple Google Forms to a single assignment. To add a Google Form to an assignment, follow these steps: In the Google Classroom assignment, click on the Google Drive icon: Select the Google Form and click on the Add button: Add any additional information and assign the assignment. Installing Flubaroo Flubaroo, like Goobric and Doctopus, is a third-party app that provides additional features that help save time grading assignments. Flubaroo requires a one-time installation into Google Sheets before it can grade Google Form responses. While we can install the add-on in any Google Sheet, the following steps will use the Google Sheet created by Google Forms: In the Google Form, click on the RESPONSES tab at the top of the form: Click on the Google Sheets icon: A pop-up will appear. 
The default selection is to create a new Google Sheet. Click on the CREATE button: A new tab will appear with a Google Sheet with the Form's responses. Click on the Add-ons menu and select Get add-ons…: Flubaroo is a popular add-on and may be visible in the first few apps to click on. If not, search for the app with the search field and then click on it in the search results. Click on the FREE button: The permissions pop-up will appear. Scroll to the bottom and click on the Allow button to activate Flubaroo: A pop-up and sidebar will appear in Google Sheets to provide announcements and additional instructions to get started: Assessing using Flubaroo When your students have submitted their Google Form assignment, you can grade them with Flubaroo. There are two different settings for grading with it—manual and automatic. Manual grading will only grade responses when you initiate the grading; whereas, automatic grading will grade responses as they are submitted. Manual grading To assess a Google Form assignment with Flubaroo, follow these steps: If you have been following along from the beginning of the article, select Grade Assignment in the Flubaroo submenu of the Add-ons menu: If you have installed Flubaroo in a Google Sheet that is not the form responses, you will need to first select Enable Flubaroo in this sheet in the Flubaroo submenu before you will be able to grade the assignment: A pop-up will guide you through the various settings of Flubaroo. The first page is to confirm the columns in the Google Sheet. Flubaroo will guess whether the information in a column identifies the student or is graded normally. Under the Grading Options drop-down menu, you can also select Skip Grading or Grade by Hand. If the question is undergoing normal grading, you can choose how many points each question is worth. Click on the Continue button when all changes are complete: In my experience, Flubaroo accurately guesses which fields identify the student. Therefore, I usually do not need to make changes to this screen unless I am skipping questions or grading certain ones by hand. The next page shows all the submissions to the form. Click on the radio button beside the submission that is the key and then click on the Continue button: Flubaroo will show a spinning circle to indicate that it is grading the assignment. It will finish when you see the following pop-up: When you close the pop-up, you will see a new sheet created in the Google Sheet summarizing the results. You will see the class average, the grades of individual students as well as the individual questions each student answered correctly: Once Flubaroo grades the assignment, you can e-mail students the results. In the Add-ons menu, select Share Grades under the Flubaroo submenu: A new pop-up will appear. It will have options to select the appropriate column for the e-mail of each submission, the method to share grades with the students, whether to list the questions so that students know which questions they got right and which they got wrong, whether to include an answer key, and a message to the students. The methods to share grades include e-mail, a file in Google Drive, or both. Once you have chosen your selections, click on the Continue button: A pop-up will confirm that the grades have successfully been e-mailed. Google Apps has a daily quota of 2000 sent e-mails (including those sent in Gmail or any other Google App). While normally not an issue. 
If you are using Flubaroo on a large scale, such as a district-wide Google Form, this limit may prevent you from e-mailing results to students. In this case, use the Google Drive option instead. If needed, you can regrade submissions. By selecting this option in the Flubaroo submenu, you will be able to change settings, such as using a different key, before Flubaroo will regrade all the submissions. Automatic grading Automatic grading provides students with immediate feedback once they submit their assignments. You can enable automatic grading after first setting up manual grading so that any late assignments get graded. Or you can enable automatic grading before assigning the assignment. To enable automatic grading on a Google Sheet that has already been manually graded, select Enable Autograde from the Advanced submenu of Flubaroo, as shown in the following screenshot: A pop-up will appear allowing you to update the grading or e-mailing settings that were set during the manual grading. If you select no, then you will be taken through all the pop-up pages from the Manual grading section so that you can make necessary changes. If you have not graded the assignment manually, when you select Enable Autograde, you will be prompted by a pop-up to set up grading and e-mailing settings, as shown in the following screenshot. Clicking on the Ok button will take you through the setting pages shown in the preceding Manual grading section: Summary In this article, you learned how to create a Google Form, assign it in Google Classroom, and grade it with the Google Sheet's Flubaroo add-on. Using all these apps to enhance Google Classroom shows how the apps in the GAFE suite interact with each other to provide a powerful tool for you. Resources for Article: Further resources on this subject: Mapping Requirements for a Modular Web Shop App [article] Using Spring JMX within Java Applications [article] Fine Tune Your Web Application by Profiling and Automation [article]

Unsupervised Learning

Packt
28 Sep 2016
11 min read
In this article by Bastiaan Sjardin, Luca Massaron, and Alberto Boschetti, the authors of the book Large Scale Machine Learning with Python, we will try to create new features and variables at scale in the observation matrix. We will introduce the unsupervised methods and illustrate principal component analysis (PCA)—an effective way to reduce the number of features. (For more resources related to this topic, see here.)

Unsupervised methods

Unsupervised learning is a branch of machine learning whose algorithms reveal inferences from data without an explicit label (unlabeled data). The goal of such techniques is to extract hidden patterns and group similar data. In these algorithms, the unknown parameters of interest of each observation (the group membership and topic composition, for instance) are often modeled as latent variables (or a series of hidden variables), hidden in the system of observed variables that cannot be observed directly, but only deduced from the past and present outputs of the system. Typically, the output of the system contains noise, which makes this operation harder.

In common problems, unsupervised methods are used in two main situations:

With labeled datasets, to extract additional features to be processed by the classifier/regressor down the processing chain. Enhanced by additional features, these models may perform better.
With labeled or unlabeled datasets, to extract some information about the structure of the data. This class of algorithms is commonly used during the Exploratory Data Analysis (EDA) phase of the modeling.

First of all, before starting with our illustration, let's import the modules that will be necessary throughout the article in our notebook:

In: import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import pylab
%matplotlib inline
import matplotlib.cm as cm
import copy
import tempfile
import os

Feature decomposition – PCA

PCA is an algorithm commonly used to decompose the dimensions of an input signal and keep just the principal ones. From a mathematical perspective, PCA performs an orthogonal transformation of the observation matrix, outputting a set of linearly uncorrelated variables, named principal components. The output variables form a basis set, where each component is orthonormal to the others. Also, it's possible to rank the output components (in order to use just the principal ones): the first component is the one containing the largest possible variance of the input dataset, the second is orthogonal to the first (by definition) and contains the largest possible variance of the residual signal, the third is orthogonal to the first two and contains the largest possible variance of the residual signal, and so on.

A generic transformation with PCA can be expressed as a projection to a space. If just the principal components are taken from the transformation basis, the output space will have a smaller dimensionality than the input one. Mathematically, it can be expressed as follows:

Y = X · T

Here, X is a generic point of the training set of dimension N, T is the transformation matrix coming from PCA, and Y is the output vector. Note that the · symbol indicates a dot product in this matrix equation. From a practical perspective, also note that all the features of X must be zero-centered before doing this operation. Let's now start with a practical example; later, we will explain the math behind PCA in more depth.
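Before the blob example that follows, here is a tiny numeric sketch of that projection step. The values of the transformation matrix T below are invented for illustration (they form an orthonormal basis, but PCA would estimate T from the data itself):

import numpy as np

# A toy observation matrix: three points, two features each.
X_toy = np.array([[1.0, 2.0],
                  [3.0, 0.0],
                  [5.0, 4.0]])

# Step 1: zero-center every feature (column).
X_centered = X_toy - X_toy.mean(axis=0)

# Step 2: project onto an orthonormal basis T (values made up for the sketch;
# PCA would compute this matrix from the data).
T = np.array([[0.8, 0.6],
              [-0.6, 0.8]])
Y = X_centered.dot(T)
print(Y)

Each row of Y is the corresponding zero-centered point re-expressed in the new basis. Now, on to the real example.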
In this example, we will create a dummy dataset composed of two blobs of points—one centered in (-5, 0) and the other one in (5, 5). Let's use PCA to transform the dataset and plot the output compared to the input. In this simple example, we will use all the features, that is, we will not perform feature reduction:

In: from sklearn.datasets.samples_generator import make_blobs
from sklearn.decomposition import PCA
X, y = make_blobs(n_samples=1000, random_state=101, centers=[[-5, 0], [5, 5]])
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
pca_comp = pca.components_.T
test_point = np.matrix([5, -2])
test_point_pca = pca.transform(test_point)
plt.subplot(1, 2, 1)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='none')
plt.quiver(0, 0, pca_comp[:,0], pca_comp[:,1], width=0.02, scale=5, color='orange')
plt.plot(test_point[0, 0], test_point[0, 1], 'o')
plt.title('Input dataset')
plt.subplot(1, 2, 2)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, edgecolors='none')
plt.plot(test_point_pca[0, 0], test_point_pca[0, 1], 'o')
plt.title('After "lossless" PCA')
plt.show()

As you can see, the output is more organized than the original features' space and, if the next task is a classification, it would require just one feature of the dataset, saving almost 50% of the space and computation needed. In the image, you can clearly see the core of PCA: it's just a projection of the input dataset to the transformation basis drawn in the image on the left in orange. Are you unsure about this? Let's test it:

In: print "The blue point is in", test_point[0, :]
print "After the transformation is in", test_point_pca[0, :]
print "Since (X-MEAN) * PCA_MATRIX = ", np.dot(test_point - pca.mean_, pca_comp)
Out: The blue point is in [[ 5 -2]]
After the transformation is in [-2.34969911 -6.2575445 ]
Since (X-MEAN) * PCA_MATRIX = [[-2.34969911 -6.2575445 ]]

Now, let's dig into the core problem: how is it possible to generate T from the training set? It should contain orthonormal vectors, and the vectors should be ranked according to the quantity of variance (that is, the energy or information carried by the observation matrix) that they can explain. Many solutions have been implemented, but the most common implementation is based on Singular Value Decomposition (SVD).

SVD is a technique that decomposes any matrix M into three matrices (U, Σ, and W) with special properties and whose multiplication gives back M again:

M = U Σ W^T

Specifically, given M, a matrix of m rows and n columns, the resulting elements of the equivalence are as follows:

U is a matrix m x m (square matrix), it's unitary, and its columns form an orthonormal basis. Also, they're named left singular vectors, or input singular vectors, and they're the eigenvectors of the matrix product M M^T.
Σ is a matrix m x n, which has non-zero elements only on its diagonal. These values are named singular values, they are all non-negative, and their squares are the eigenvalues of both M^T M and M M^T.
W is a unitary matrix n x n (square matrix), its columns form an orthonormal basis, and they're named right (or output) singular vectors. Also, they are the eigenvectors of the matrix product M^T M.

Why is this needed? The solution is pretty easy: the goal of PCA is to try and estimate the directions where the variance of the input dataset is larger. For this, we first need to remove the mean from each feature and then operate on the covariance matrix X^T X of the zero-centered observation matrix X.

Given that, by decomposing the matrix X with SVD, we have the columns of the matrix W that are the principal components of the covariance (that is, the matrix T we are looking for), the diagonal of Σ that contains the variance explained by the principal components, and the columns of U, scaled by Σ, that give the projections of the observations onto those components. Here's why PCA is always done with SVD. Let's see it now on a real example. Let's test it on the Iris dataset, extracting the first two principal components (that is, passing from a dataset composed of four features to one composed of two):

In: from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
print "Iris dataset contains", X.shape[1], "features"
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print "After PCA, it contains", X_pca.shape[1], "features"
print "The variance is [% of original]:", sum(pca.explained_variance_ratio_)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, edgecolors='none')
plt.title('First 2 principal components of Iris dataset')
plt.show()
Out: Iris dataset contains 4 features
After PCA, it contains 2 features
The variance is [% of original]: 0.977631775025

This is the analysis of the outputs of the process:

The explained variance is almost 98% of the original variance of the input.
The number of features has been halved, but only 2% of the information is not in the output, hopefully just noise.
From a visual inspection, it seems that the different classes composing the Iris dataset are separated from each other. This means that a classifier working on such a reduced set will have comparable performance in terms of accuracy, but will be faster to train and run prediction.

As a proof of the second point, let's now try to train and test two classifiers, one using the original dataset and another using the reduced set, and print their accuracy:

In: from sklearn.linear_model import SGDClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score

def test_classification_accuracy(X_in, y_in):
    X_train, X_test, y_train, y_test = train_test_split(X_in, y_in, random_state=101, train_size=0.50)
    clf = SGDClassifier('log', random_state=101)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

print "SGDClassifier accuracy on Iris set:", test_classification_accuracy(X, y)
print "SGDClassifier accuracy on Iris set after PCA (2 components):", test_classification_accuracy(X_pca, y)
Out: SGDClassifier accuracy on Iris set: 0.586666666667
SGDClassifier accuracy on Iris set after PCA (2 components): 0.72

As you can see, this technique not only reduces the complexity and space of the learner down the chain, but also helps achieve generalization (exactly like Ridge or Lasso regularization). Now, if you are unsure how many components should be in the output, a typical rule of thumb is to choose the minimum number that is able to explain at least 90% (or 95%) of the input variance. Empirically, such a choice usually ensures that only the noise is cut off.

So far, everything seems perfect: we found a great solution to reduce the number of features, building some with very high predictive power, and we also have a rule of thumb to guess the right number of them. Let's now check how scalable this solution is: we're investigating how it scales when the number of observations and features increases.
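As a small sketch of that rule of thumb (reusing the Iris X loaded above; the variable names here are just illustrative), you can read the number of components directly off the cumulative explained variance of a full decomposition:

In: import numpy as np
from sklearn.decomposition import PCA

# Fit a full (lossless) decomposition, then keep the smallest number of
# components whose cumulative explained variance reaches 95%.
pca_full = PCA(n_components=None)
pca_full.fit(X)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)
n_needed = np.argmax(cumulative >= 0.95) + 1   # assumes the threshold is reachable
print "Cumulative explained variance:", cumulative
print "Components needed for 95%:", n_needed

On the Iris set this picks two components, matching the roughly 97.8% of variance we saw above. With that settled, back to the scalability question.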
The first thing to note is that the SVD algorithm, the core piece of PCA, is not stochastic; therefore, it needs the whole matrix in order to be able to extract its principal components. Now, let's see how scalable PCA is in practice on some synthetic datasets with an increasing number of features and observations. We will perform a full (lossless) decomposition (the argument passed while instantiating the PCA object is None), as asking for a lower number of features doesn't impact the performance (it's just a matter of slicing the output matrices of SVD). In the following code, we first create matrices with 10 thousand points and 20, 50, 100, 250, 500, 1,000, and 2,500 features to be processed by PCA. Then, we create matrices with 100 features and 1, 5, 10, 25, 50, and 100 thousand observations to be processed with PCA:

In: import time

def check_scalability(test_pca):
    pylab.rcParams['figure.figsize'] = (10, 4)

    # FEATURES
    n_points = 10000
    n_features = [20, 50, 100, 250, 500, 1000, 2500]
    time_results = []
    for n_feature in n_features:
        X, _ = make_blobs(n_points, n_features=n_feature, random_state=101)
        pca = copy.deepcopy(test_pca)
        tik = time.time()
        pca.fit(X)
        time_results.append(time.time()-tik)
    plt.subplot(1, 2, 1)
    plt.plot(n_features, time_results, 'o--')
    plt.title('Feature scalability')
    plt.xlabel('Num. of features')
    plt.ylabel('Training time [s]')

    # OBSERVATIONS
    n_features = 100
    n_observations = [1000, 5000, 10000, 25000, 50000, 100000]
    time_results = []
    for n_points in n_observations:
        X, _ = make_blobs(n_points, n_features=n_features, random_state=101)
        pca = copy.deepcopy(test_pca)
        tik = time.time()
        pca.fit(X)
        time_results.append(time.time()-tik)
    plt.subplot(1, 2, 2)
    plt.plot(n_observations, time_results, 'o--')
    plt.title('Observations scalability')
    plt.xlabel('Num. of training observations')
    plt.ylabel('Training time [s]')
    plt.show()

check_scalability(PCA(None))
Out: (two plots: training time versus number of features, and training time versus number of training observations)

As you can clearly see, PCA based on SVD is not scalable: if the number of features increases linearly, the time needed to train the algorithm increases exponentially. Also, the time needed to process a matrix with a few hundred thousand observations becomes too high and (not shown in the image) the memory consumption makes the problem unfeasible for a domestic computer (with 16 GB of RAM or less). It seems clear that a PCA based on SVD is not the solution for big data: fortunately, in recent years, many workarounds have been introduced.

Summary

In this article, we've introduced a popular unsupervised learner able to scale to cope with big data. PCA is able to reduce the number of features by creating new ones that contain the majority of the variance (that is, the principal components). You can also refer to the following books on similar topics:

R Machine Learning Essentials: https://www.packtpub.com/big-data-and-business-intelligence/r-machine-learning-essentials
R Machine Learning By Example: https://www.packtpub.com/big-data-and-business-intelligence/r-machine-learning-example
Machine Learning with R - Second Edition: https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-r-second-edition

Resources for Article: Further resources on this subject: Machine Learning Tasks [article] Introduction to Clustering and Unsupervised Learning [article] Clustering and Other Unsupervised Learning Methods [article]


Extra, Extra Collection, and Closure Changes that Rock!

Packt
28 Sep 2016
19 min read
In this article by Keith Elliott, author of, Swift 3 New Features , we are focusing on collection and closure changes in Swift 3. There are several nice additions that will make working with collections even more fun. We will also explore some of the confusing side effects of creating closures in Swift 2.2 and how those have been fixed in Swift 3. (For more resources related to this topic, see here.) Collection and sequence type changes Let’s begin our discussion with Swift 3 changes to Collection and Sequence types. Some of the changes are subtle and others are bound to require a decent amount of refactoring to your custom implementations. Swift provides three main collection types for warehousing your values: arrays, dictionaries, and sets. Arrays allow you to store values in an ordered list. Dictionaries provide unordered the key-value storage for your collections. Finally, sets provide an unordered list of unique values (that is, no duplicates allowed). Lazy FlatMap for sequence of optionals [SE-0008] Arrays, sets, and dictionaries are implemented as generic types in Swift. They each implement the new Collection protocol, which implements the Sequence protocol. Along this path from top-level type to Sequence protocol, you will find various other protocols that are also implemented in this inheritance chain. For our discussion on flatMap and lazy flatMap changes, I want to focus in on Sequences. Sequences contain a group of values that allow the user to visit each value one at a time. In Swift, you might consider using a for-in loop to iterate through your collection. The Sequence protocol provides implementations of many operations that you might want to perform on a list using sequential access, all of which you could override when you adopt the protocol in your custom collections. One such operation is the flatMap function, which returns an array containing the flattened (or rather concatenated) values, resulting from a transforming operation applied to each element of the sequence. let scores = [0, 5, 6, 8, 9] .flatMap{ [$0, $0 * 2] } print(scores) // [0, 0, 5, 10, 6, 12, 8, 16, 9, 18]   In our preceding example, we take a list of scores and call flatMap with our transforming closure. Each value is converted into a sequence containing the original value and a doubled value. Once the transforming operations are complete, the flatMap method flattens the intermediate sequences into a single sequence. We can also use the flatMap method with Sequences that contain optional values to accomplish a similar outcome. This time we are omitting values from the sequence we flatten by return nil on the transformation. let oddSquared = [1, 2, 3, 4, 5, 10].flatMap { n in n % 2 == 1 ? n*n : nil } print(oddSquared) // [1, 9, 25] The previous two examples were fairly basic transformations on small sets of values. In a more complex situation, the collections that you need to work with might be very large with expensive transformation operations. Under those parameters, you would not want to perform the flatMap operation or any other costly operation until it was absolutely needed. Luckily, in Swift, we have lazy operations for this very use case. Sequences contain a lazy property that returns a LazySequence that can perform lazy operations on Sequence methods. Using our first example, we can obtain a lazy sequence and call flatMap to get a lazy implementation. Only in the lazy scenario, the operation isn’t completed until scores is used sometime later in code. 
let scores = [0, 5, 6, 8, 9] .lazy .flatMap{ [$0, $0 * 2] } // lazy assignment has not executed for score in scores{ print(score) } The lazy operation works, as we would expect in our preceding test. However, when we use the lazy form of flatMap with our second example that contains optionals, our flatMap executes immediately in Swift 2. While we expected oddSquared variable to hold a ready to run flatMap, delayed until we need it, we instead received an implementation that was identical to the non-lazy version. let oddSquared = [1, 2, 3, 4, 5, 10] .lazy // lazy assignment but has not executed .flatMap { n in n % 2 == 1 ? n*n : nil } for odd in oddSquared{ print(odd) } Essentially, this was a bug in Swift that has been fixed in Swift 3. You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0008-lazy-flatmap-for-optionals.md Adding the first(where:) method to sequence A common task for working with collections is to find the first element that matches a condition. An example would be to ask for the first student in an array of students whose test scores contain a 100. You can accomplish this using a predicate to return the filtered sequence that matched the criteria and then just give back the first student in the sequence. However, it would be much easier to just call a single method that could return the item without the two-step approach. This functionality was missing in Swift 2, but was voted in by the community and has been added for this release. In Swift 3, there is a now an extension method on the Sequence protocol to implement first(where:). ["Jack", "Roger", "Rachel", "Joey"].first { (name) -> Bool in name.contains("Ro") } // =>returns Roger This first(where:) extension is a nice addition to the language because it ensures that a simple and common task is actually easy to perform in Swift. You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0032-sequencetype-find.md. Add sequence(first: next:) and sequence(state: next:) public func sequence<T>(first: T, next: @escaping (T) -> T?) -> UnfoldSequence<T, (T?, Bool)> public func sequence<T, State>(state: State, next: @escaping (inout State) -> T?) -> UnfoldSequence<T, State> public struct UnfoldSequence<Element, State> : Sequence, IteratorProtocol These two functions were added as replacements to the C-style for loops that were removed in Swift 3 and to serve as a compliment to the global reduce function that already exists in Swift 2. What’s interesting about the additions is that each function has the capability of generating and working with infinite sized sequences. Let’s examine the first sequence function to get a better understanding of how it works. /// - Parameter first: The first element to be returned from the sequence. /// - Parameter next: A closure that accepts the previous sequence element and /// returns the next element. /// - Returns: A sequence that starts with `first` and continues with every /// value returned by passing the previous element to `next`. /// func sequence<T>(first: T, next: @escaping (T) -> T?) -> UnfoldSequence<T, (T?, Bool)> The first sequence method returns a sequence that is created from repeated invocations of the next parameter, which holds a closure that will be lazily executed. The return value is an UnfoldSequence that contains the first parameter passed to the sequence method plus the result of applying the next closure on the previous value. 
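As a quick sketch of that laziness (an illustrative example of mine, not code from the proposal), you can describe an endless sequence and only pay for the elements you actually pull out:

// An endless sequence of powers of two; nothing is computed until iteration.
let powersOfTwo = sequence(first: 1) { $0 * 2 }

// prefix(6) pulls out just the first six elements, so the loop terminates.
for value in powersOfTwo.prefix(6) {
    print(value)
}
// Prints 1, 2, 4, 8, 16, 32

Whether a sequence built this way ever terminates is decided entirely by the next closure, which is exactly what the following examples demonstrate.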
The sequence is finite if next eventually returns nil and is infinite if next never returns nil. let mysequence = sequence(first: 1.1) { $0 < 2 ? $0 + 0.1 : nil } for x in mysequence{ print (x) } // 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 In the preceding example, we create and assign our sequence using the trailing closure form of sequence(first: next:). Our finite sequence will begin with 1.1 and will call next repeatedly until our next result is greater than 2 at which case next will return nil. We could easily convert this to an infinite sequence by removing our condition that our previous value must not be greater than 2. /// - Parameter state: The initial state that will be passed to the closure. /// - Parameter next: A closure that accepts an `inout` state and returns the /// next element of the sequence. /// - Returns: A sequence that yields each successive value from `next`. /// public func sequence<T, State>(state: State, next: (inout State) -> T?) -> UnfoldSequence<T, State> The second sequence function maintains mutable state that is passed to all lazy calls of next to create and return a sequence. This version of the sequence function uses a passed in closure that allows you to update the mutable state each time the next called. As was the case with our first sequence function, a finite sequence ends when next returns a nil. You can turn an finite sequence into an infinite one by never returning nil when next is called. Let’s create an example of how this version of the sequence method might be used. Traversing a hierarchy of views with nested views or any list of nested types is a perfect task for using the second version of the sequence function. Let’s create a an Item class that has two properties. A name property and an optional parent property to keep track of the item’s owner. The ultimate owner will not have a parent, meaning the parent property will be nil. class Item{ var parent: Item? var name: String = "" } Next, we create a parent and two nested children items. Child1 parent will be the parent item and child2 parent will be child1. let parent = Item() parent.name = "parent" let child1 = Item() child1.name = "child1" child1.parent = parent let child2 = Item() child2.name = "child2" child2.parent = child1 Now, it’s time to create our sequence. The sequence needs two parameters from us: a state parameter and a next closure. I made the state an Item with an initial value of child2. The reason for this is because I want to start at the lowest leaf of my tree and traverse to the ultimate parent. Our example only has three levels, but you could have lots of levels in a more complex example. As for the next parameter, I’m using a closure expression that expects a mutable Item as its state. My closure will also return an optional Item. In the body of our closure, I use our current Item (mutable state parameter) to access the item’s parent. I also updated the state and return the parent. let itemSeq = sequence(state: child2, next: { (next: inout Item)->Item? in let parent = next.parent next = parent != nil ? parent! : next return parent }) for item in itemSeq{ print("name: (item.name)") } There are some gotchas here that I want to address so that you will better understand how to define your own next closure for this sequence method. The state parameter could really be anything you want it to be. It’s for your benefit in helping you determine the next element of the sequence and to give you relevant information about where you are in the sequence. 
One idea to improve our example above would be to track how many levels of nesting we have. We could have made our state a tuple that contained an integer counter for the nesting level along with the current item. The next closure needs to be expanded to show the signature. Because of Swift’s expressiveness and conciseness, when it comes to closures, you might be tempted to convert the next closure into a shorter form and omit the signature. Do not do this unless your next closure is extremely simple and you are positive that the compiler will be able to infer your types. Your code will be harder to maintain when you use the short closure format, and you won’t get extra points for style when someone else inherits it. Don’t forget to update your state parameter in the body of your closure. This really is your best chance to know where you are in your sequence. Forgetting to update the state will probably cause you to get unexpected results when you try to step through your sequence. Make a clear decision ahead of the time about whether you are creating a finite or infinite sequence. This decision is evident in how you return from your next closure. An infinite sequence is not bad to have when you are expecting it. However, if you iterate over this sequence using a for-in loop, you could get more than you bargained for, provided you were assuming this loop would end. A new Model for Collections and Indices [SE-0065]Swift 3 introduces a new model for collections that moves the responsibility of the index traversal from the index to the collection itself. To make this a reality for collections, the Swift team introduced four areas of change: The Index property of a collection can be any type that implements the Comparable protocol Swift removes any distinction between intervals and ranges, leaving just ranges Private index traversal methods are now public Changes to ranges make closed ranges work without the potential for errors You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0065-collections-move-indices.md. Introducing the collection protocol In Swift 3, Foundation collection types such as Arrays, Sets, and Dictionaries are generic types that implement the newly created Collection protocol. This change was needed in order to support traversal on the collection. If you want to create custom collections of your own, you will need to understand the Collection protocol and where it lives in the collection protocol hierarchy. We are going to cover the important aspects to the new collection model to ease you transition and to get you ready to create custom collection types of your own. The Collection protocol builds on the Sequence protocol to provide methods for accessing specific elements when using a collection. For example, you can use a collection’s index(_:offsetBy:) method to return an index that is a specified distance away from the reference index. let numbers = [10, 20, 30, 40, 50, 60] let twoAheadIndex = numbers.index(numbers.startIndex, offsetBy: 2) print(numbers[twoAheadIndex]) //=> 30 In the preceding example, we create the twoAheadIndex constant to hold the position in our numbers collection that is two positions away from our starting index. We simply use this index to retrieve the value from our collection using subscript notation. 
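The Collection protocol gives you a few related traversal helpers for free as well. Here is a small illustrative sketch on the same kind of array (the names are my own):

let values = [10, 20, 30, 40, 50, 60]
let third = values.index(values.startIndex, offsetBy: 2)

// distance(from:to:) counts the index steps between two positions.
print(values.distance(from: values.startIndex, to: third)) // 2

// index(_:offsetBy:limitedBy:) returns nil instead of trapping when the
// offset would move past the supplied limit.
if let tooFar = values.index(values.startIndex, offsetBy: 10, limitedBy: values.endIndex) {
    print(values[tooFar])
} else {
    print("offset is out of bounds") // this branch runs
}

Like index(_:offsetBy:), these come from default implementations, which matters for the custom collections we are about to build.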
Conforming to the Collection Protocol If you would like to create your own custom collections, you need to adopt the Collection protocol by declaring startIndex and endIndex properties, a subscript to support access to your elements, and the index(after: ) method to facilitate traversing your collection’s indices. When we are migrating existing types over to Swift 3, the migrator has some known issues with converting custom collections. It’s likely that you can easily resolve the compiler issues by checking the imported types for conformance to the Collection protocol. Additionally, you need to conform to the Sequence and IndexableBase protocols as the Collection protocol adopts them both. public protocol Collection : Indexable, Sequence { … } A simple custom collection could look like the following example. Note that I have defined my Index type to be an Int. In Swift 3, you define the Index to be any type that implements the Comparable protocol. struct MyCollection<T>: Collection{ typealias Index = Int var startIndex: Index var endIndex: Index var _collection: [T] subscript(position: Index) -> T{ return _collection[position] } func index(after i: Index) -> Index { return i + 1 } init(){ startIndex = 0 endIndex = 0 _collection = [] } mutating func add(item: T){ _collection.append(item) } } var myCollection: MyCollection<String> = MyCollection() myCollection.add(item: "Harry") myCollection.add(item: "William") myCollection[0] The Collection protocol has default implementations for most of its methods, the Sequence protocols methods, and the IndexableBase protocols methods. This means you are only required to provide a few things of your own. You can, however, implement as many of the other methods as make sense for your collection. New Range and Associated Indices Types Swift 2’s Range<T>, ClosedInterval<T>, and OpenInterval<T> are going away in Swift 3. These types are being replaced with four new types. Two of the new range types support general ranges with bounds that implement the Comparable protocol: Range<T> and ClosedRange<T>. The other two range types conform to RandomAccessCollection. These types support ranges whose bounds implement the Strideable protocol. Last, ranges are no longer iterable since ranges are now represented as a pair of indices. To keep legacy code working, the Swift team introduced an associated indices type, which is iterable. In addition, three generic types were created to provide a default indices type for each type of collection traversal category. The generics are DefaultIndices<C>, DefaultBidirectionalIndices<C>, and DefaultRandomAccessIndices<C>; each stores its underlying collection for traversal. Quick Takeaways I covered a lot of stuff in a just a few pages on collection types in Swift 3. Here are the highlights to keep in mind about the collections and indices. Collections types (built-in and custom) implement the Collection protocol. Iterating over collections has moved to the collection—the index no longer has that ability. You can create your own collections by adopting the Collection protocol. You need to implement: startIndex and endIndex properties The subscript method to support access to your elements The index(after: ) method to facilitate traversing your collection’s indices Closure changes for Swift 3 A closure in Swift is a block of code that can be used in a function call as a parameter or assigned to a variable to execute their functionality at a later time. 
Closures are a core feature to Swift and are familiar to developers that are new to Swift as they remind you of lambda functions in other programming languages. For Swift 3, there were two notable changes that I will highlight in this section. The first change deals with inout captures. The second is a change that makes non-escaping closures the default. Limiting inout Capture of @noescape Closures]In Swift 2, capturing inout parameters in an escaping closure was difficult for developers to understand. Closures are used everywhere in Swift especially in the standard library and with collections. Some closures are assigned to variables and then passed to functions as arguments. If the function that contains the closure parameter returns from its call and the passed in closure is used later, then you have an escaping closure. On the other hand, if the closure is only used within the function to which it is passed and not used later, then you have a nonescaping closure. The distinction is important here because of the mutating nature of inout parameters. When we pass an inout parameter to a closure, there is a possibility that we will not get the result we expect due to how the inout parameter is stored. The inout parameter is captured as a shadow copy and is only written back to the original if the value changes. This works fine most of the time. However, when the closure is called at a later time (that is, when it escapes), we don’t get the result we expect. Our shadow copy can’t write back to the original. Let’s look at an example. var seed = 10 let simpleAdderClosure = { (inout seed: Int)->Int in seed += 1 return seed * 10 } var result = simpleAdderClosure(&seed) //=> 110 print(seed) // => 11 In the preceding example, we get what we expect. We created a closure to increment our passed in inout parameter and then return the new parameter multiplied by 10. When we check the value of seed after the closure is called, we see that the value has increased to 11. In our second example, we modify our closure to return a function instead of just an Int value. We move our logic to the closure that we are defining as our return value. let modifiedClosure = { (inout seed: Int)-> (Int)->Int in return { (Int)-> Int in seed += 1 return seed * 10 } } print(seed) //=> 11 var resultFn = modifiedClosure(&seed) var result = resultFn(1) print(seed) // => 11 This time when we execute the modifiedClosure with our seed value, we get a function as the result. After executing this intermediate function, we check our seed value and see that the value is unchanged; even though, we are still incrementing the seed value. These two slight differences in syntax when using inout parameters generate different results. Without knowledge of how shadow copy works, it would be hard understand the difference in results. Ultimately, this is just another situation where you receive more harm than good by allowing this feature to remain in the language. You can read the proposal at the following link https://github.com/apple/swift-evolution/blob/master/proposals/0035-limit-inout-capture.md. Resolution In Swift 3, the compiler now limits inout parameter usage with closures to non-escaping (@noescape). You will receive an error if the compiler detects that your closure escapes when it contains inout parameters. Making non-escaping closures the default [SE-0103] You can read the proposal at https://github.com/apple/swift-evolution/blob/master/proposals/0103-make-noescape-default.md. 
In previous versions of Swift, the default behavior of function parameters whose type was a closure was to allow escaping. This made sense as most of the Objective-C blocks (closures in Swift) imported into Swift were escaping. The delegation pattern in Objective-C, as implemented as blocks, was composed of delegate blocks that escaped. So, why would the Swift team want to change the default to non-escaping as the default? The Swift team believes you can write better functional algorithms with non-escaping closures. An additional supporting factor is the change to require non-escaping closures when using inout parameters with the closure [SE-0035]. All things considered, this change will likely have little impact on your code. When the compiler detects that you are attempting to create an escaping closure, you will get an error warning that you are possibly creating an escaping closure. You can easily correct the error by adding @escaping or via the fixit that accompanies the error. In Swift 2.2: var callbacks:[String : ()->String] = [:] func myEscapingFunction(name:String, callback:()->String){ callbacks[name] = callback } myEscapingFunction("cb1", callback: {"just another cb"}) for cb in callbacks{ print("name: (cb.0) value: (cb.1())") } In Swift 3: var callbacks:[String : ()->String] = [:] func myEscapingFunction(name:String, callback: @escaping ()->String){ callbacks[name] = callback } myEscapingFunction(name:"cb1", callback: {"just another cb"}) for cb in callbacks{ print("name: (cb.0) value: (cb.1())") } Summary In this article, we covered changes to collections and closures. You learned about the new Collection protocol that forms the base of the new collection model and how to adopt the protocol in our own custom collections. The new collection model made a significant change in moving collection traversal from the index to the collection itself. The new collection model changes are necessary in order to support Objective-C interactivity and to provide a mechanism to iterate over the collections items using the collection itself. As for closures, we also explored the motivation for the language moving to non-escaping closures as the default. You also learned how to properly use inout parameters with closures in Swift 3. Resources for Article: Further resources on this subject: Introducing the Swift Programming Language [article] Concurrency and Parallelism with Swift 2 [article] Exploring Swift [article]


Fake Swift Enums and User-Friendly Frameworks

Daniel Leping
28 Sep 2016
7 min read
Swift Enums are a great and useful hybrid of classic enums with tuples. One of the most attractive points of consideration here is their short .something API when you want to pass one as an argument to a function. Still, they are quite limited in certain situations. OK, enough about enums themselves, because this post is about fake enums (don't take my word literally here). Why? Everybody is talking about usability, UX, and how to make the app user-friendly. This all makes sense, because at the end of the day we all are humans and perceive things with feelings. So the app just leaves the impression footprint. There are a lot of users who would prefer a more friendly app to the one providing a richer functionality. All these talks are proven by Apple and their products. But what about a developer, and what tools and bricks are we using? Right now I'm not talking about the IDEs, but rather about the frameworks and the APIs. Being an open source framework developer myself (right now I'm working on Swift Express, a web application framework in Swift, and the foundation around it), I'm concerned about the looks of the APIs we are providing. It matches one-to-one to the looks of the app, so it has the same importance. Call it Framework's UX if you would like. If the developer is pleased with the API the framework provides, he is 90% already your user. This post was particularly inspired by the Event sub-micro-framework, a part of the Reactive Swift foundation I'm creating right now to make Express solid. We took the basic idea from node.js EvenEmitter, which is very easy to use in my opinion. Though, instead of using the String Event ID approach provided by node, we wanted to use the .something approach (read above about what I think of nice APIs) and we are hoping that enums would work great, we encountered limitations with it. The hardest thing was to create a possibility to use different arguments for closures of different event types. It's all very simple with dynamic languages like JavaScript, but well, here we are type-safe. You know... The problem Usually, when creating a new framework, I try to first write an ideal API, or the one I would like to see at the very end. And here is what I had: eventEmitter.on(.stringevent) { string in print("string:", string) } eventEmitter.on(.intevent) { i in print("int:", i) } eventEmitter.on(.complexevent) { (s, i) in print("complex: string:", s, "int:", i) } This seems easy enough for the final user. What I like the most here is that different EventEmitters can have different sets of events with specific data payloads and still provide the .something enum style notation. This is easier to say than to do. With enums, you cannot have an associated type bound to a specific case. It must be bound to the entire enum, or nothing. So there is no possibility to provide a specific data payload for a particular case. But I was very clear that I want everything type-safe, so there can be no dynamic arguments. The research First of all, I began investigating if there is a possibility to use the .something notation without using enums. The first thing I recalled was the OptionSetType that is mainly used to combine the flags for C APIs. And it allows the .somthing notation. You might want to investigate this protocol as it's just cool and useful in many situations where enums are not enough. After a bit of experimenting, I found out that any class or struct having a static member of Self type can mimic an enum. 
Pretty much like this: struct TestFakeEnum { private init() { } static let one:TestFakeEnum = TestFakeEnum() static let two:TestFakeEnum = TestFakeEnum() static let three:TestFakeEnum = TestFakeEnum() } func funWithEnum(arg:TestFakeEnum) { } func testFakeEnum() { funWithEnum(.one) funWithEnum(.two) funWithEnum(.three) } This code will compile and run correctly. These are the basics of any fake enum. Even though the example above does not provide any particular benefit over built-in enums, it demonstrates the fundamental possibility. Getting generic Let's move on. First of all, to make our events have a specific data payload, we've added an EventProtocol (just keep in mind; it will be important later): //do not pay attention to Hashable, it's used for internal event routing mechanism. Not a subject here public protocol EventProtocol : Hashable { associatedtype Payload } To make our emitter even better we should not limit it to a restricted set of events, but rather allow the user to extend it. To achieve this, I've added a notion of EventGroup. It's not a particular type but rather an informal concept, so every event group should follow. Here is an example of an EventGroup: struct TestEventGroup<E : EventProtocol> { internal let event:E private init(_ event:E) { self.event = event } static var string:TestEventGroup<TestEventString> { return TestEventGroup<TestEventString>(.event) } static var int:TestEventGroup<TestEventInt> { return TestEventGroup<TestEventInt>(.event) } static var complex:TestEventGroup<TestEventComplex> { return TestEventGroup<TestEventComplex>(.event) } } Here is what TestEventString, TestEventInt and TestEventComplex are (real enums are used here only to have conformance with Hashable and to be a case singleton, so don't bother): //Notice, that every event here has its own Payload type enum TestEventString : EventProtocol { typealias Payload = String case event } enum TestEventInt : EventProtocol { typealias Payload = Int case event } enum TestEventComplex : EventProtocol { typealias Payload = (String, Int) case event } So to get a generic with .something notation, you have to create a generic class or struct having static members of the owner type with a particular generic param applied. Now, how can you use it? How can you discover what generic type is associated with a specific option? For that, I used the following generic function: // notice, that Payload is type-safety extracted from the associated event here with E.Payload func on<E : EventProtocol>(groupedEvent: TestEventGroup<E>, handler:E.Payload->Void) -> Listener { //implementation here is not of the subject of the article } Does this thing work? Yes. You can use the API exactly like it was outlined in the very beginning of this post: let eventEmitter = EventEmitterTest() eventEmitter.on(.string) { s in print("string:", s) } eventEmitter.on(.int) { i in print("int:", i) } eventEmitter.on(.complex) { (s, i) in print("complex: string:", s, "int:", i) } All the type inferences work. It's type safe. It's user friendly. And it is all thanks to a possibility to associate the type with the .something enum-like member. Conclusion Pity that this functionality is not available out of the box with built-in enums. For all the experiments to make this happen, I had to spend several hours. Maybe in one of the upcoming versions of Swift (3? 4.0?), Apple will let us get the type of associated values of an enum or something. But… okay. Those are dreams and out of the scope of this post. 
For now, we have what we have, and I'm really glad that we are able to have an associatedtype on an enum-like entity, even if it's not straightforward. The examples were taken from the Event project. The complete code can be found here; it was tested with Swift 2.2 (Xcode 7.3), which is the latest at the time of writing. Thanks for reading. Use user-friendly frameworks only and enjoy your day!

About the Author

Daniel Leping is the CEO of Crossroad Labs. He has been working with Swift since the early beta releases and continues to do so at the Swift Express project. His main interests are reactive and functional programming with Swift, Swift-based web technologies, and bringing the best of modern techniques to the Swift world. He can be found on GitHub.

Modern Natural Language Processing – Part 1

Brian McMahan
28 Sep 2016
10 min read
In this three-part blog post series, I will be covering the basics of language modeling. The goal of language modeling is to capture the variability in observed linguistic data. In its simplest form, this is a matter of predicting the next word given all previous words. I am going to adopt this simple viewpoint to make explaining the basics of language modeling experiments clearer.

In this series I am going to first introduce the basics of data munging: converting raw data into a processed form amenable to machine learning tasks. Then, I will cover the basics of prepping the data for a learning algorithm, including constructing a customized embedding matrix from the current state-of-the-art embeddings (and if you don't know what embeddings are, I will cover that too). I will be going over a useful way of structuring the various components (data manager, training model, driver, and utilities) that simultaneously allows for fast implementation and flexibility for future modifications to the experiment. And finally, I will cover an instance of a training model, showing how it connects up to the infrastructure outlined here, and how it is subsequently trained on the data, evaluated for performance, and used for tasks like sampling sentences.

At its core, though, predicting the answer at time t+1 given time t revolves around distinguishing two things: the underlying signal, and the noise that makes the observed data deviate from that signal. In language data, the underlying signal is the intended meaning, and the noise is the many different ways people can say what they mean and the many different contexts those meanings can be embedded in. Again, I am going to simplify everything and assume a basic hypothesis: the signal can be inferred by looking at the local history, and the noise can be captured as a probability distribution over potential vocabulary items. This is the central theme of the experiment I describe in this blog post series. Despite its simplicity, though, there are many bumps in the road you can hit. My goal with this post is to outline the basics of rolling your own infrastructure so you can deal with those bumps. Additionally, I will try to convey the various decision points and the things you should pay attention to along the way.

Opinionated Implementation Philosophy

Learning experiments in general are greatly facilitated by having infrastructure that allows you to experiment with different parameters, different data, different models, and any other variation that may arise. In the same vein, it's best to just get something working before you try to optimize the infrastructure for flexibility. The following division of labor is a nice middle ground I've found that allows for modularity and sanity while being fast to implement:

driver.py: entry point for your experiment and any command-line interface you'd like to have with it.
igor.py: loads parameters from the config file, loads data from disk, serves data to the model, and acts as the interface for the model.
model.py: implements the training model and expects igor to have everything it needs.
utils.py: storage for all the messy functions.

I will discuss these implementations below.

Requisite software

The following packages are either highly recommended or required (in addition to their prerequisite packages): keras, spacy, sqlitedict, theano, yaml, tqdm, numpy.

Preprocessing Raw Data

Finding data is not a difficult task, but converting it into the form you need to train models and run experiments can be tedious.
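To make the target of all this preprocessing concrete before diving in, here is a toy illustration of the shape the data will end up in: raw text becomes a list of sentences, each sentence a list of tokens, and finally each token an integer id. The sentences and ids below are made up purely for illustration; the real mapping is built later with a Vocabulary object.

# Raw text, as downloaded:
raw = "Four score and seven years ago. Our fathers brought forth a new nation."

# After tokenization: a list of sentences, each a list of word tokens.
tokenized = [["Four", "score", "and", "seven", "years", "ago", "."],
             ["Our", "fathers", "brought", "forth", "a", "new", "nation", "."]]

# After vocabulary mapping: the same structure, with each token replaced by an
# integer id (ids here are invented; 0 is typically reserved for a mask token).
as_ids = [[2, 3, 4, 5, 6, 7, 8],
          [9, 10, 11, 12, 13, 14, 15, 8]]

Everything that follows is about getting from the first form to the last in a reproducible way.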
In this section, I will cover how to go from a raw text dataset to a form which is more amenable to language modeling. There are many datasets that have already done this for you, and there are many datasets that are unbelievably messy. I will work with something a bit in the middle: a dataset of presidential speeches that is available as raw text.

### utils.py
import json
from keras.utils.data_utils import get_file

path = get_file('speech.json', origin='https://github.com/Vocativ-data/presidents_readability/raw/master/The%20original%20speeches.json')

with open(path) as fp:
    raw_data = json.load(fp)

print("This is one of the speeches by {}:".format(raw_data['objects'][0]['President']))
print(raw_data['objects'][0]['Text'])

This dataset is a JSON file that contains about 600 presidential speeches. To usefully process text data, we have to get it into bite-sized chunks so we can identify what things are. Our goal for preprocessing is to get the individual words so that we can map them to integers and use them in machine learning algorithms. I will be using the spacy library, a state-of-the-art NLP library for Python, mainly implemented in Cython. The following code will take one of the speeches and tokenize it for us. spacy does a lot of other things as well, but I won't be covering those in this post. Instead, I'll just be using the library for its tokenizing ability.

from spacy.en import English
nlp = English(parser=True)

### nlp is a callable object---it implements most of spacy's API
data1 = nlp(raw_data['objects'][0]['Text'])

The variable data1 stores the speech in a format that allows spacy to do a bunch of things. We just want the tokens, so let's pull them out:

data2 = map(list, data1.sents)

The code is a bit dense, but data1.sents is an iterator over the sentences in data1. Applying the list function over the iterator creates a list of sentences, each of which is a list of words.

Splitting the data

We will also be splitting our data into train, dev, and test sets. The end goal is to have a model that generalizes. So, we train on our training data and evaluate on held-out data to see how well we generalize. However, there are hyperparameters, such as learning rate and model size, which affect how well the model does. If we were to evaluate on only one set of held-out data, we would begin to select hyperparameters to fit that data, leaving us with no way of knowing how well our model does in practice. The test data is for this purpose. It's there to give you a measuring stick for how well your model would do. You should never make modeling choices informed by performance on the test data.
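As a quick back-of-the-envelope check (illustrative only, not part of the pipeline), a 70/20/10 split over roughly 600 speeches works out as follows; the actual splitting function is shown next.

# rough arithmetic for a 70/20/10 split over ~600 speeches (illustrative only)
nb_data = 600
train_portion, dev_portion = 0.7, 0.2
num_train = int(train_portion * nb_data)    # 420
num_dev = int(dev_portion * nb_data)        # 120
num_test = nb_data - num_train - num_dev    # 60
print(num_train, num_dev, num_test)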
### utils.py
import numpy as np

def process_raw_data(raw_data):
    flat_raw = [datum['Text'] for datum in raw_data['objects']]
    nb_data = 10  ### using only a small test subset here; use len(flat_raw) for the full corpus
    all_indices = np.random.choice(np.arange(nb_data), nb_data, replace=False)

    train_portion, dev_portion = 0.7, 0.2
    num_train, num_dev = int(train_portion*nb_data), int(dev_portion*nb_data)
    train_indices = all_indices[:num_train]
    dev_indices = all_indices[num_train:num_train+num_dev]
    test_indices = all_indices[num_train+num_dev:]

    nlp = English(parser=True)

    raw_train_data = [nlp(flat_raw[i]).sents for i in train_indices]
    raw_train_data = [[str(word).strip() for word in sentence]
                      for speech in raw_train_data for sentence in speech]

    raw_dev_data = [nlp(flat_raw[i]).sents for i in dev_indices]
    raw_dev_data = [[str(word).strip() for word in sentence]
                    for speech in raw_dev_data for sentence in speech]

    raw_test_data = [nlp(flat_raw[i]).sents for i in test_indices]
    raw_test_data = [[str(word).strip() for word in sentence]
                     for speech in raw_test_data for sentence in speech]

    return (raw_train_data, raw_dev_data, raw_test_data), (train_indices, dev_indices, test_indices)

Vocabularies

In order for text data to interface with numeric algorithms, the tokens have to be converted to unique integers. There are many different ways of doing this, including using spacy, but we will be making our own using a Vocabulary class. Note that a mask token is added upon initialization. It is useful to reserve the integer 0 for a token that will never appear in your data; this allows machine learning algorithms to be clever about variable-sized inputs (such as sentences).

### utils.py
import pickle
from collections import defaultdict

class Vocabulary(object):
    def __init__(self, frozen=False):
        # each new word automatically gets the next available integer id
        self.word2id = defaultdict(lambda: len(self.word2id))
        self.id2word = {}
        self.frozen = frozen

        ### add mask and unknown-word tokens
        self.mask_token = "<mask>"
        self.unk_token = "<unk>"
        self.mask_id = self.add(self.mask_token)
        self.unk_id = self.add(self.unk_token)

    @classmethod
    def from_data(cls, data, return_converted=False):
        this = cls()
        out = []
        for datum in data:
            added = list(this.add_many(datum))
            if return_converted:
                out.append(added)
        if return_converted:
            return this, out
        else:
            return this

    def convert(self, data):
        out = []
        for datum in data:
            out.append(list(self.add_many(datum)))
        return out

    def add_many(self, tokens):
        for token in tokens:
            yield self.add(token)

    def add(self, token):
        token = str(token).strip()
        if self.frozen and token not in self.word2id:
            token = self.unk_token
        _id = self.word2id[token]
        self.id2word[_id] = token
        return _id

    def __getitem__(self, k):
        return self.add(k)

    def __setitem__(self, *args):
        import warnings
        warnings.warn("not an available function")

    def keys(self):
        return self.word2id.keys()

    def items(self):
        return self.word2id.items()

    def freeze(self):
        self.add(self.unk_token)  # just in case
        self.frozen = True

    def __contains__(self, token):
        return token in self.word2id

    def __len__(self):
        return len(self.word2id)

    @classmethod
    def load(cls, filepath):
        new_vocab = cls()
        with open(filepath) as fp:
            in_data = pickle.load(fp)
        new_vocab.word2id.update(in_data['word2id'])
        new_vocab.id2word.update(in_data['id2word'])
        new_vocab.frozen = in_data['frozen']
        return new_vocab

    def save(self, filepath):
        with open(filepath, 'w') as fp:
            pickle.dump({'word2id': dict(self.word2id),
                         'id2word': self.id2word,
                         'frozen': self.frozen}, fp)

The benefit of making your own Vocabulary class is that you get fine-grained control over how it behaves, and you intuitively understand how it's running.
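Here is a quick usage sketch of the Vocabulary class above (the tiny corpus is made up), showing how freezing causes unseen words to map to <unk>:

# a tiny, made-up corpus of tokenized sentences
toy_train = [["the", "economy", "is", "strong"],
             ["the", "union", "is", "strong"]]
toy_dev = [["the", "economy", "is", "fragile"]]

vocab, toy_train_ids = Vocabulary.from_data(toy_train, return_converted=True)
vocab.freeze()                     # after this, unseen words map to <unk>
toy_dev_ids = vocab.convert(toy_dev)

print(toy_train_ids)   # [[2, 3, 4, 5], [2, 6, 4, 5]]  (0 is <mask>, 1 is <unk>)
print(toy_dev_ids)     # "fragile" was never seen during training, so it becomes the <unk> id (1)
print(vocab.unk_id, len(vocab))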
When making your vocabulary, it's vital that you don't use words that appear in your dev and test sets but not in your training set. This is because you technically don't have any evidence for them, and using them would put you at a generalization disadvantage. In other words, you wouldn't be able to trust your model's performance as much. So, we will only make our vocabulary out of the training data, then freeze it, then use it to convert the rest of the data. The frozen implementation in the Vocabulary class is not as good as it could be, but it's the most illustrative.

### utils.py
def format_data(raw_train_data, raw_dev_data, raw_test_data):
    vocab, train_data = Vocabulary.from_data(raw_train_data, True)
    vocab.freeze()
    dev_data = vocab.convert(raw_dev_data)
    test_data = vocab.convert(raw_test_data)
    return (train_data, dev_data, test_data), vocab

Continue on in Part 2 to learn about igor, embeddings, serving data, and different sized sentences and masking.

About the author

Brian McMahan is in his final year of graduate school at Rutgers University, completing a PhD in computer science and an MS in cognitive psychology. He holds a BS in cognitive science from Minnesota State University, Mankato. At Rutgers, Brian investigates how natural language and computer vision can be brought closer together with the aim of developing interactive machines that can coordinate in the real world. His research uses machine learning models to derive flexible semantic representations of open-ended perceptual language.

Deep Learning and Image generation: Get Started with Generative Adversarial Networks

Mohammad Pezeshki
27 Sep 2016
5 min read
In machine learning, a generative model is one that captures the observable data distribution. The objective of deep neural generative models is to disentangle different factors of variation in data and be able to generate new or similar-looking samples of the data. For example, an ideal generative model on face images disentangles all the different factors of variation, such as illumination, pose, gender, skin color, and so on, and is also able to generate a new face by combining those factors in a very non-linear way. Figure 1 shows a trained generative model that has learned different factors, including pose and the degree of smiling. On the x-axis, as we go to the right, the pose changes, and on the y-axis, as we move upwards, smiles turn to frowns. Usually these factors are orthogonal to one another, meaning that changing one while keeping the others fixed leads to a single change in data space; e.g., in the first row of Figure 1, only the pose changes, with no change in the degree of smiling. The figure is adapted from here.

Based on the assumption that these underlying factors of variation have a very simple distribution (unlike the data itself), to generate a new face we can simply sample a random number from the assumed simple distribution (such as a Gaussian). In other words, if there are k different factors, we randomly sample from a k-dimensional Gaussian distribution (aka noise).

In this post, we will take a look at one of the recent models in the area of deep learning and generative models, called the generative adversarial network (GAN). This model can be seen as a game between two agents: the Generator and the Discriminator. The generator generates images from noise, and the discriminator discriminates between real images and the images generated by the generator. The objective is then to train the model such that while the discriminator tries to distinguish generated images from real images, the generator tries to fool the discriminator.

To train the model, we need to define a cost. In the case of the GAN, the errors made by the discriminator are considered as the cost. Consequently, the objective of the discriminator is to minimize the cost, while the objective of the generator is to fool the discriminator by maximizing the cost. A graphical illustration of the model is shown in Figure 2.

Formally, we define the discriminator as a binary classifier D : R^m → {0, 1} and the generator as the mapping G : R^k → R^m, in which k is the dimension of the latent space that represents all of the factors of variation. Denoting the data by x and a point in latent space by z, the model can be trained by playing the following minimax game:

min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 − D(G(z)))]

Note that the first term encourages the discriminator to discriminate between generated images and real ones, while the second term encourages the generator to come up with images that would fool the discriminator. In practice, the log in the second term can saturate, which would hurt the flow of the gradient. As a result, the generator's cost may be reformulated equivalently: instead of minimizing E_{z ~ p_z(z)}[log(1 − D(G(z)))], the generator is trained to maximize

E_{z ~ p_z(z)}[log D(G(z))]

At generation time, we can sample from a simple k-dimensional Gaussian distribution with zero mean and unit variance and pass it onto the generator. Among the different models that can be used as the discriminator and generator, we use deep neural networks with parameters θ_D and θ_G for the discriminator and generator, respectively.
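As a small numerical sketch of why this reformulation matters (this is illustrative only, not from the original post): when the discriminator confidently rejects a fake sample, D(G(z)) is close to 0. Writing D = sigmoid(s) for the discriminator's logit s, the gradient of the saturating objective log(1 − D) with respect to s has magnitude D, which vanishes, while the non-saturating objective log D has gradient magnitude 1 − D, which stays large. A few lines of Python make the contrast concrete:

import numpy as np

# Discriminator outputs on fake samples, from "easily rejected" to "fools D".
d_fake = np.array([0.001, 0.01, 0.1, 0.5, 0.9])

# With D = sigmoid(s) for the discriminator logit s:
#   saturating generator objective,  minimize log(1 - D):  |d/ds| = D
#   non-saturating reformulation,    maximize log D:       |d/ds| = 1 - D
grad_saturating = d_fake
grad_non_saturating = 1.0 - d_fake

for d, gs, gn in zip(d_fake, grad_saturating, grad_non_saturating):
    print("D(G(z))={:5.3f}  saturating grad={:5.3f}  non-saturating grad={:5.3f}".format(d, gs, gn))

Early in training the first regime dominates, which is exactly where the saturating loss gives the generator almost no learning signal.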
Since training boils down to updating the parameters using the backpropagation algorithm, the update rule amounts to gradient ascent on V(D, G) for the discriminator and gradient descent for the generator:

θ_D ← θ_D + η ∇_{θ_D} V(D, G)
θ_G ← θ_G − η ∇_{θ_G} V(D, G)

where η is the learning rate (with the reformulated cost, the generator instead ascends the gradient of E[log D(G(z))]).

If we use a convolutional network as the discriminator and another convolutional network with fractionally strided convolution layers as the generator, the model is called a DCGAN (Deep Convolutional Generative Adversarial Network). Some samples of bedroom image generation from this model are shown in Figure 3.

The generator can also be a sequential model, meaning that it can generate an image using a sequence of images with lower resolution or less detail. A few examples of the images generated using such a model are shown in Figure 4.

The GAN and later variants such as the DCGAN are currently considered to be among the best when it comes to the quality of the generated samples. The images look so realistic that you might assume that the model has simply memorized instances of the training set, but a quick KNN search reveals this not to be the case.

About the author

Mohammad Pezeshki is a master's student in the LISA lab of Université de Montréal, working under the supervision of Yoshua Bengio and Aaron Courville. He obtained his bachelor's in computer engineering from Amirkabir University of Technology (Tehran Polytechnic) in July 2014 and then started his master's in September 2014. His research interests lie in the fields of artificial intelligence, machine learning, probabilistic models, and specifically deep learning.