Façade Pattern – Being Adaptive with Façade

Packt
12 Jan 2016
11 min read
In this article by Chetan Giridhar, author of the book Learning Python Design Patterns - Second Edition, we will get introduced to the Façade design pattern and how it is used in software application development. We will work with a sample use case and implement it in Python v3.5. In brief, we will cover the following topics in this article:

- An understanding of the Façade design pattern with a UML diagram
- A real-world use case with the Python v3.5 code implementation
- The Façade pattern and the principle of least knowledge

Understanding the Façade design pattern

Façade generally refers to the face of a building, especially an attractive one. It can also refer to a behavior or appearance that gives a false idea of someone's true feelings or situation. When people walk past a façade, they can appreciate the exterior face but aren't aware of the complexities of the structure within. This is how the Façade pattern is used: it hides the complexities of the internal system and provides an interface through which the client can access the system in a very simplified way.

Consider the example of a storekeeper. When you, as a customer, visit a store to buy certain items, you're not aware of the layout of the store. You typically approach the storekeeper, who is well aware of the store system. Based on your requirements, the storekeeper picks up items and hands them over to you. Isn't this easy? The customer need not know how the store looks and gets the job done through a simple interface, the storekeeper.

The Façade design pattern essentially does the following:

- It provides a unified interface to a set of interfaces in a subsystem and defines a high-level interface that helps the client use the subsystem in an easy way.
- It represents a complex subsystem with a single interface object. It doesn't encapsulate the subsystem, but it does combine the underlying subsystems.
- It promotes decoupling of the subsystem implementation from its multiple clients.

A UML class diagram

We will now discuss the Façade pattern with the help of a UML diagram. There are three main participants in this pattern:

- Façade: The main responsibility of a façade is to wrap up a complex group of subsystems so that it can provide a pleasing look to the outside world.
- System: This represents a set of varied subsystems that make the whole system compound and difficult to view or work with.
- Client: The client interacts with the façade so that it can easily communicate with the subsystem and get the work completed. It doesn't have to bother about the complex nature of the system.

A minimal sketch of these three roles follows.
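Here is a minimal, purely illustrative sketch of these roles in Python (SubsystemA, SubsystemB, and Facade are placeholder names, not from the use case that follows):

```python
class SubsystemA:
    def operation_a(self):
        print("SubsystemA: doing its part")


class SubsystemB:
    def operation_b(self):
        print("SubsystemB: doing its part")


class Facade:
    """Unified, simplified interface over the subsystems."""
    def __init__(self):
        self._a = SubsystemA()
        self._b = SubsystemB()

    def do_work(self):
        # The facade delegates the client's request to the right subsystems.
        self._a.operation_a()
        self._b.operation_b()


# The client talks only to the Facade, never to the subsystems directly.
facade = Facade()
facade.do_work()
```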
You will now learn a little more about the three main participants.

Façade

The following points will give us a better idea of the façade:

- It is an interface that knows which subsystems are responsible for a request.
- It delegates the client's requests to the appropriate subsystem objects using composition.
- For example, if the client is looking for some work to be accomplished, it need not go to individual subsystems; it can simply contact the interface (the façade), which gets the work done.

System

In the Façade world, System is an entity that performs the following:

- It implements subsystem functionality and is represented by a class. Ideally, a System is represented by a group of classes that are responsible for different operations.
- It handles the work assigned by the Façade object, but has no knowledge of the façade and keeps no reference to it. For instance, when the client requests a certain service from the façade, the façade chooses the right subsystem that delivers the service based on the type of service.

Client

Here's how we can describe the client:

- The client is a class that instantiates the façade.
- It makes requests to the façade to get the work done from the subsystems.

Implementing the Façade pattern in the real world

To demonstrate the applications of the Façade pattern, let's take an example that many of us have experienced. Consider that there is a marriage in your family and you are in charge of all the arrangements. Whoa! That's a tough job on your hands. You have to book a hotel or venue for the marriage, talk to a caterer for food arrangements, organize a florist for all the decorations, and finally handle the musical arrangements expected for the event.

In years past, you'd have done all this by yourself: talking to the relevant folks, coordinating with them, and negotiating on the pricing. But life is simpler now. You go and talk to an event manager who handles this for you. S/he will make sure that they talk to the individual service providers and get the best deal for you.

From the Façade pattern perspective, we will have the following three main participants:

- Client: It's you, who needs all the marriage preparations to be completed in time before the wedding. They should be top class, and the guests should love the celebrations.
- Façade: The event manager, who's responsible for talking to all the folks who need to work on specific arrangements such as food and flower decorations, among others.
- Subsystems: They represent the systems that provide services such as catering, hotel management, and flower decorations.

Let's develop an application in Python v3.5 and implement this use case. We start with the client first. It's you! Remember, you're the one who has been given the responsibility to make sure that the marriage preparations are done and the event goes fine. However, you're being clever here and passing on the responsibility to the event manager, aren't you?

Let's now look at the You class. In this example, you create an object of the EventManager class so that the manager can work with the relevant folks on marriage preparations while you relax:

```python
class You(object):
    def __init__(self):
        print("You:: Whoa! Marriage Arrangements??!!!")

    def askEventManager(self):
        print("You:: Let's Contact the Event Manager\n\n")
        em = EventManager()
        em.arrange()

    def __del__(self):
        print("You:: Thanks to Event Manager, all preparations done! Phew!")
```

Let's now move ahead and talk about the Façade class. As discussed earlier, the Façade class simplifies the interface for the client. In this case, EventManager acts as a façade and simplifies the work for You. The façade talks to the subsystems and does all the booking and preparations for the marriage on your behalf.
Here is the Python code for the EventManager class:

```python
class EventManager(object):
    def __init__(self):
        print("Event Manager:: Let me talk to the folks\n")

    def arrange(self):
        self.hotelier = Hotelier()
        self.hotelier.bookHotel()

        self.florist = Florist()
        self.florist.setFlowerRequirements()

        self.caterer = Caterer()
        self.caterer.setCuisine()

        self.musician = Musician()
        self.musician.setMusicType()
```

Now that we're done with the façade and the client, let's dive into the subsystems. We have developed the following classes for this scenario:

- Hotelier is for the hotel bookings. It has a method to check whether the hotel is free on that day (__isAvailable) and a method to book it if it is (bookHotel).
- The Florist class is responsible for flower decorations. Florist has the setFlowerRequirements() method, which is used to set the expectations on the kind of flowers needed for the marriage decoration.
- The Caterer class is used to deal with the caterer and is responsible for the food arrangements. Caterer exposes the setCuisine() method to accept the type of cuisine to be served at the marriage.
- The Musician class is designed for the musical arrangements at the marriage. It uses the setMusicType() method to understand the music requirements for the event.

```python
class Hotelier(object):
    def __init__(self):
        print("Arranging the Hotel for Marriage? --")

    def __isAvailable(self):
        print("Is the Hotel free for the event on given day?")
        return True

    def bookHotel(self):
        if self.__isAvailable():
            print("Registered the Booking\n\n")


class Florist(object):
    def __init__(self):
        print("Flower Decorations for the Event? --")

    def setFlowerRequirements(self):
        print("Carnations, Roses and Lilies would be used for Decorations\n\n")


class Caterer(object):
    def __init__(self):
        print("Food Arrangements for the Event --")

    def setCuisine(self):
        print("Chinese & Continental Cuisine to be served\n\n")


class Musician(object):
    def __init__(self):
        print("Musical Arrangements for the Marriage --")

    def setMusicType(self):
        print("Jazz and Classical will be played\n\n")


you = You()
you.askEventManager()
```

Running the preceding code prints the conversation between you, the event manager, and each of the subsystems. In this code example:

- The EventManager class is the façade that simplifies the interface for You.
- EventManager uses composition to create objects of the subsystems, such as Hotelier, Caterer, and others.

The principle of least knowledge

As you learned in the initial parts of this article, the façade provides a unified interface that makes subsystems easy to use. It also decouples the client from the subsystem of components. The design principle employed behind the Façade pattern is the principle of least knowledge. The principle of least knowledge guides us to reduce the interactions between objects to just a few friends that are close enough to us. In real terms, it means the following:

- When designing a system, for every object created, we should look at the number of classes that it interacts with and the way in which the interaction happens.
- Following the principle, we should avoid situations where many classes are created tightly coupled to each other.
- If there are a lot of dependencies between classes, the system becomes hard to maintain. Any changes in one part of the system can lead to unintentional changes in other parts, which means that the system is exposed to regressions, and this should be avoided. The short sketch below illustrates the idea.
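As a small, purely illustrative example (the Customer and Wallet classes are hypothetical, not from the book's use case): the first function reaches through an object it shouldn't know about, while the second talks only to its immediate friend.

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance


class Customer:
    def __init__(self, balance):
        self._wallet = Wallet(balance)

    def pay(self, amount):
        # The customer manages its own wallet.
        self._wallet.balance -= amount
        return amount


# Violates the principle: the shop reaches into the customer's wallet.
def charge_bad(customer, amount):
    customer._wallet.balance -= amount


# Follows the principle: the shop talks only to the customer.
def charge_good(customer, amount):
    customer.pay(amount)
```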
Summary

We began the article by understanding the Façade design pattern and the context in which it's used. We understood the basis of the Façade pattern and how it is effectively used in software architecture. We looked at how the Façade design pattern creates a simplified interface for clients to use: it hides the complexity of the subsystems so that the client benefits. The façade doesn't encapsulate the subsystem, and the client is free to access the subsystems even without going through the façade. You also learned the pattern with a UML diagram and a sample code implementation in Python v3.5. We also understood the principle of least knowledge and how its philosophy governs the Façade design pattern.

Further resources on this subject:

- Asynchronous Programming with Python [article]
- Optimization in Python [article]
- The Essentials of Working with Python Collections [article]

Swift Power and Performance

Packt
12 Oct 2015
14 min read
In this article by Kostiantyn Koval, author of the book Swift High Performance, we will learn about Swift, its performance and optimization, and how to achieve high performance.

Swift speed

I would guess you are interested in Swift's speed and are probably wondering, "How fast can Swift be?" Before we even start learning Swift and discovering all the good things about it, let's answer this right here and right now. Let's take an array of 100,000 random numbers, sort it in Swift, Objective-C, and C by using a standard sort function from stdlib (sort for Swift, qsort for C, and compare for Objective-C), and measure how much time it takes. Sorting an array with 100,000 integer elements gives the following timings:

- Swift: 0.00600 sec
- C: 0.01396 sec
- Objective-C: 0.08705 sec

The winner is Swift! Swift is 14.5 times faster than Objective-C and 2.3 times faster than C. In other examples and experiments, C is usually faster than Objective-C, and Swift is faster still.

Comparing the speed of functions

You know how functions and methods are implemented and how they work. Let's compare the performance and speed of global functions and different method types. For our test, we will use a simple add function. Take a look at the following code snippet (the method bodies, elided in the original listing, all just add the two arguments):

```swift
func add(x: Int, y: Int) -> Int {
    return x + y
}

class NumOperation {
    func addI(x: Int, y: Int) -> Int { return x + y }
    class func addC(x: Int, y: Int) -> Int { return x + y }
    static func addS(x: Int, y: Int) -> Int { return x + y }
}

class BigNumOperation: NumOperation {
    override func addI(x: Int, y: Int) -> Int { return x + y }
    override class func addC(x: Int, y: Int) -> Int { return x + y }
}
```

For the measurement and code analysis, we use a simple loop in which we call those different methods (measure is a small timing helper):

```swift
measure("addC") {
    var result = 0
    for i in 0...2_000_000_000 {
        result += NumOperation.addC(i, y: i + 1) // swap in a different method to test it
    }
    print(result)
}
```

Here are the results. All the methods perform exactly the same. Even more, their assembly code looks exactly the same, except for the name of the function call:

- Global function: add(10, y: 11)
- Static: NumOperation.addS(10, y: 11)
- Class: NumOperation.addC(10, y: 11)
- Static subclass: BigNumOperation.addS(10, y: 11)
- Overridden subclass: BigNumOperation.addC(10, y: 11)

Even though the BigNumOperation addC class function overrides the NumOperation addC function, when you call it directly there is no need for a vtable lookup. The instance method call looks a bit different:

```swift
// Instance:
let num = NumOperation()
num.addI(10, y: 11)

// Subclass overridden instance:
let bigNum = BigNumOperation()
bigNum.addI(10, y: 11)
```

One difference is that we need to initialize a class and create an instance of the object. In our example, this is not an expensive operation because we do it outside the loop and it takes place only once. The loop that calls the instance method looks exactly the same. As you can see, there is almost no difference between the global function and the static and class methods. The instance method looks a bit different, but it doesn't have any major impact on performance. Although this holds for simple use cases, there is a difference between them in more complex examples. Let's take a look at the following code snippet:

```swift
let baseNumType = arc4random_uniform(2) == 1 ? BigNumOperation.self : NumOperation.self

for i in 0...loopCount {
    result += baseNumType.addC(i, y: i + 1)
}
print(result)
```

The only difference we incorporated here is that instead of specifying the NumOperation class type at compile time, we select it randomly at runtime.
Because of this, the Swift compiler doesn't know at compile time which method should be called, BigNumOperation.addC or NumOperation.addC. This small change has an impact on the generated assembly code and on performance.

A summary of the usage of functions and methods

- Global functions are the simplest and give the best performance. Too many global functions, however, make the code hard to read and reason about.
- Static type methods, which can't be overridden, have the same performance as global functions, but they also provide a namespace (their type name), so our code looks clearer with no adverse effect on performance.
- Class methods, which can be overridden, could lead to a decrease in performance. They should be used when you need class inheritance. In other cases, static methods are preferred.
- Instance methods operate on an instance of an object. Use instance methods when you need to operate on the data of that instance.
- Make methods final when you don't need to override them. This gives an extra hint to the compiler for optimization, and performance could increase because of it.

Intelligent code

Because Swift is a statically and strongly typed language, it can read, understand, and optimize code very well. It tries to avoid the execution of all unnecessary code. For a better explanation, let's take a look at this simple example:

```swift
class Object {
    func nothing() {
    }
}

let object = Object()
object.nothing()
object.nothing()
```

We create an instance of the Object class and call a nothing method. The nothing method is empty, and calling it does nothing. The Swift compiler understands this and removes those method calls. After this, we have only one line of code:

```swift
let object = Object()
```

The Swift compiler can also remove objects that are created but never used. This reduces memory usage and removes unnecessary function calls, which also reduces CPU usage. In our example, the object instance is not used after the nothing method calls are removed, so the creation of object can be removed as well. In this way, Swift removes all three lines of code, and we end up with no code to execute at all. Objective-C, in comparison, can't do this optimization. Because it has a dynamic runtime, the nothing method's implementation could be changed at runtime to do some work. That's why Objective-C can't remove empty method calls.

This optimization might not seem like a big win, but let's take a look at another, slightly more complex, example that uses more memory:

```swift
class Object {
    let x: Int
    let y: Int
    let z: Int

    init(x: Int) {
        self.x = x
        self.y = x * 2
        self.z = y * 2
    }

    func nothing() {
    }
}
```

We have added some Int data to our Object class to increase memory usage. Now, each Object instance uses at least 24 bytes (3 * the Int size; Int uses 8 bytes on a 64-bit architecture). Let's also try to increase the CPU usage by adding more instructions, using a loop:

```swift
for i in 0...1_000_000 {
    let object = Object(x: i)
    object.nothing()
    object.nothing()
}
print("Done")
```

Integer literals can use the underscore sign (_) to improve readability. So, 1_000_000 is the same as 1000000.

Now we have 3 million instructions, and we would use 24 million bytes (about 24 MB). This is quite a lot for an operation that actually doesn't do anything. As you can see, we don't use the result of the loop body. For the loop body, Swift does the same optimization as in the previous example, and we end up with an empty loop:

```swift
for i in 0...1_000_000 {
}
```

The empty loop can be skipped as well.
As a result, we have saved 24 MB of memory and 3 million method calls.

Dangerous functions

There are some functions and instructions that sometimes don't provide any value for the application, but the Swift compiler can't skip them, and so they can have a very negative impact on performance.

Console print

Printing a statement to the console is usually done for debugging purposes. The print and debugPrint instructions aren't removed from the application in release mode. Let's explore this code:

```swift
for i in 0...1_000_000 {
    print(i)
}
```

The Swift compiler treats print and debugPrint as valid and important instructions that can't be skipped. Even though this code does nothing useful, it can't be optimized away, because Swift doesn't remove the print statement. As a result, we have 1 million unnecessary instructions.

As you can see, even very simple code that uses the print statement can decrease an application's performance drastically. The loop with 1_000_000 print statements takes 5 seconds, and that's a lot. It's even worse if you run it in Xcode; there it takes up to 50 seconds. It gets worse still if you add a print instruction to the nothing method of the Object class from the previous example:

```swift
func nothing() {
    print(x + y + z)
}
```

In that case, the loop in which we create an instance of Object and call nothing can't be eliminated, because of the print instruction. Even though Swift can't eliminate the execution of that code completely, it does optimize it by removing the creation of the Object instance and the nothing method calls, turning it into a simple loop operation. The compiled code after optimization looks like this:

```swift
// Initial source code
for i in 0...1_000_000 {
    let object = Object(x: i)
    object.nothing()
    object.nothing()
}

// Optimized code
var x = 0, y = 0, z = 0
for i in 0...1_000_000 {
    x = i
    y = x * 2
    z = y * 2
    print(x + y + z)
    print(x + y + z)
}
```

As you can see, this code is far from perfect and has a lot of instructions that don't actually give us any value. There is a way to improve it, so that the Swift compiler can apply the same optimization as it does without print.

Removing print logs

To solve this performance problem, we have to remove the print statements from the code before compiling it. There are different ways of doing this.

Comment out

The first idea is to comment out all print statements of the code in release mode:

```swift
//print("A")
```

This will work, but the next time you want to enable logs, you will need to uncomment that code. This is a very bad and painful practice. There is a better solution. (Commented-out code is bad practice in general. You should be using a source code version control system, such as Git, instead. That way, you can safely remove the unnecessary code and find it in the history if you need it later.)

Using a build configuration

We can enable print only in debug mode. To do this, we will use a build configuration to conditionally exclude some code. First, we need to add a Swift compiler custom flag: select the project target and go to Build Settings | Other Swift Flags. In the Swift Compiler - Custom Flags section, add the -D DEBUG flag for debug mode.

After this, you can use the DEBUG configuration flag to enable code only in debug mode. We will define our own print function that generates a print statement only in debug mode.
In release mode, this function will be empty, and the Swift compiler will successfully eliminate it:

```swift
func D_print(items: Any..., separator: String = " ", terminator: String = "\n") {
    #if DEBUG
        print(items, separator: separator, terminator: terminator)
    #endif
}
```

Now, everywhere we would use print, we use D_print instead:

```swift
func nothing() {
    D_print(x + y + z)
}
```

You can also create a similar D_debugPrint function. Swift is very smart and does a lot of optimization, but we also have to make our code clear for people to read and for the compiler to optimize. Using a preprocessor adds complexity to your code. Use it wisely, and only in situations where normal if conditions won't work, as in our D_print example.

Improving speed

There are a few techniques that can simply improve code performance. Let's proceed directly to the first one.

final

You can mark a function or property declaration with the final attribute. Adding the final attribute makes it non-overridable: subclasses can't override that method or property. When a method is non-overridable, there is no need to store it in the vtable, and a call to that function can be performed directly, without any function address lookup in the vtable:

```swift
class Animal {
    final var name: String = ""
    final func feed() {
    }
}
```

As you have seen, final methods perform faster than non-final methods. Even such a small optimization can improve an application's performance. It not only improves performance but also makes the code more secure: you prevent a method from being overridden, and so prevent unexpected and incorrect behavior.

Enabling the Whole Module Optimization setting would achieve very similar optimization results, but it's better to mark function and property declarations explicitly as final, which reduces the compiler's work and speeds up compilation. The compilation time for big projects with Whole Module Optimization could be up to 5 minutes in Xcode 7.

Inline functions

As you have seen, Swift can optimize and inline some function calls, so that there is no performance penalty for calling a function. You can manually enable or disable inlining with the @inline attribute:

```swift
@inline(__always) func someFunc() {
}

@inline(never) func someOtherFunc() {
}
```

Even though you can control inlining manually, it's usually better to leave it to the Swift compiler. Depending on the optimization settings, the Swift compiler applies different inlining techniques. The use case for @inline(__always) would be very simple one-line functions that you always want to be inlined.

Value objects and reference objects

There are many benefits to using immutable value types. Value objects make code not only safer and clearer but also faster. They have better speed and performance than reference objects, and here is why.

Memory allocation

A value object can be allocated in stack memory instead of heap memory. Reference objects need to be allocated in heap memory because they can be shared between many owners. Because value objects have only one owner, they can be allocated safely on the stack, and stack memory is much faster than heap memory.

The second advantage is that value objects don't need reference counting for memory management. As they can have only one owner, there is no such thing as reference counting for value objects. With Automatic Reference Counting (ARC), we don't think much about memory management, and it mostly looks transparent to us.
However, even though code looks the same when using reference objects and value objects, ARC adds extra retain and release method calls for reference objects.

Avoiding Objective-C

In most cases, Objective-C, with its dynamic runtime, performs more slowly than Swift. The interoperability between Swift and Objective-C is done so seamlessly that sometimes we may use Objective-C types and its runtime in Swift code without knowing it. When you use Objective-C types in Swift code, Swift actually uses the Objective-C runtime for method dispatch. Because of that, Swift can't apply the same optimizations as it does for pure Swift types. Let's take a look at a simple example:

```swift
for _ in 0...100 {
    _ = NSObject()
}
```

Let's read this code and make some assumptions about how the Swift compiler would optimize it. The NSObject instance is never used in the loop body, so we could eliminate the creation of the object. After that, we would have an empty loop, which could be eliminated as well. So we would expect all of the code to be removed from execution, but actually no code gets eliminated. This happens because Objective-C types use dynamic runtime method dispatch, called message sending.

All the standard frameworks, such as Foundation and UIKit, are written in Objective-C, and all types such as NSDate, NSURL, UIView, and UITableView use the Objective-C runtime. They do not perform as fast as Swift types, but in exchange we get all of these frameworks available for use in Swift, and this is great. There is no way to remove the Objective-C dynamic runtime dispatch from Objective-C types in Swift, so the only thing we can do is learn to use them wisely.

Summary

In this article, we covered many powerful features of Swift related to performance and gave some tips on how to solve performance-related issues.

Further resources on this subject:

- Flappy Swift [article]
- Profiling an app [article]
- Network Development with Swift [article]

How to add Unit Tests to a Sails Framework Application

Luis Lobo
26 Sep 2016
8 min read
There are different ways to implement unit tests for a Node.js application. Most of them use Mocha as the test framework and Chai as the assertion library, and some add Istanbul for code coverage. We will be using those tools, not going into deep detail on how to use them, but rather on how to successfully configure and implement them for a Sails project.

1) Creating a new application from scratch (if you don't have one already)

First of all, let's create a Sails application from scratch. The Sails version in use for this article is 0.12.3. If you already have a Sails application, you can continue to step 2. Issuing the following command creates the new application:

```
$ sails new sails-test-article
```

Once we create it, we will have the following file structure:

```
./sails-test-article
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   └── templates
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views
```

2) Create a basic test structure

We want a folder structure that contains all our tests. For now we will only add unit tests. In this project, we want to test only services and controllers.

Add the necessary modules:

```
npm install --save-dev mocha chai istanbul supertest
```

Folder structure

Let's create the test folder structure that supports our tests:

```
mkdir -p test/fixtures test/helpers test/unit/controllers test/unit/services
```

After the creation of the folders, we will have this structure:

```
./sails-test-article
├── api [...]
├── test
│   ├── fixtures
│   ├── helpers
│   └── unit
│       ├── controllers
│       └── services
└── views
```

We now create a mocha.opts file inside the test folder. It contains mocha options, such as a timeout per test run, that will be passed by default to mocha every time it runs, one option per line, as described in the mocha opts documentation:

```
--require chai
--reporter spec
--recursive
--ui bdd
--globals sails
--timeout 5s
--slow 2000
```

Up to this point, we have all our tools set up. We can do a very basic test run:

```
mocha test
```

It prints out this:

```
0 passing (2ms)
```

Normally, Node.js applications define a test script in the package.json file. Edit it so that it now looks like this:

```json
"scripts": {
  "debug": "node debug app.js",
  "start": "node app.js",
  "test": "mocha test"
}
```

We are ready for the next step.

3) Bootstrap file

The bootstrap.js file is the one that defines the environment that all tests use. Inside it, we define before and after events. In them, we are starting and stopping (or 'lifting' and 'lowering' in Sails language) our Sails application. Since Sails makes models, controllers, and services globally available at runtime, we need to start the application here:

```js
var sails = require('sails');
var _ = require('lodash');
global.chai = require('chai');
global.should = chai.should();

before(function (done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(5000);

  sails.lift({
    log: { level: 'silent' },
    hooks: { grunt: false },
    models: {
      connection: 'unitTestConnection',
      migrate: 'drop'
    },
    connections: {
      unitTestConnection: {
        adapter: 'sails-disk'
      }
    }
  }, function (err, server) {
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function (done) {
  // here you can clear fixtures, etc.
  if (sails && _.isFunction(sails.lower)) {
    sails.lower(done);
  }
});
```

This file will be required in each of our tests. That way, each test can be run individually if needed, or as part of the whole suite.
4) Services tests

We now add two models and one service to show how to test services. Create a Comment model in /api/models/Comment.js:

```js
/**
 * Comment.js
 */
module.exports = {
  attributes: {
    comment: {type: 'string'},
    timestamp: {type: 'datetime'}
  }
};
```

Create a Post model in /api/models/Post.js:

```js
/**
 * Post.js
 */
module.exports = {
  attributes: {
    title: {type: 'string'},
    body: {type: 'string'},
    timestamp: {type: 'datetime'},
    comments: {model: 'Comment'}
  }
};
```

Create a Post service in /api/services/PostService.js:

```js
/**
 * PostService
 *
 * @description :: Service that handles posts
 */
module.exports = {
  getPostsWithComments: function () {
    return Post
      .find()
      .populate('comments');
  }
};
```

To test the Post service, we need to create a test for it in /test/unit/services/PostService.spec.js. In the case of services, we want to test business logic. So basically, you call your service methods and evaluate the results using an assertion library. In this case, we are using Chai's should:

```js
/* global PostService */

// Here is where we init our 'sails' environment and application
require('../../bootstrap');

// Here we have our tests
describe('The PostService', function () {
  before(function (done) {
    Post.create({})
      .then(function () { return Post.create({}); })
      .then(function () { return Post.create({}); })
      .then(function () { done(); })
      .catch(done);
  });

  it('should return all posts with their comments', function (done) {
    PostService
      .getPostsWithComments()
      .then(function (posts) {
        posts.should.be.an('array');
        posts.should.have.length(3);
        done();
      })
      .catch(done);
  });
});
```

We can now test our service by running:

```
npm test
```

The result should be similar to this one:

```
> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostService
    ✓ should return all posts with their comments

  1 passing (979ms)
```

5) Controllers tests

In the case of controllers, we want to validate that our requests are working and that they return the correct error codes and the correct data. Here, we make use of the SuperTest module, which provides HTTP assertions.
We now add a Post controller with this content in /api/controllers/PostController.js:

```js
/**
 * PostController
 */
module.exports = {
  getPostsWithComments: function (req, res) {
    PostService.getPostsWithComments()
      .then(function (posts) {
        res.ok(posts);
      })
      .catch(res.negotiate);
  }
};
```

And now we create a Post controller test in /test/unit/controllers/PostController.spec.js:

```js
// Here is where we init our 'sails' environment and application
var supertest = require('supertest');
require('../../bootstrap');

describe('The PostController', function () {
  var createdPostId = 0;

  it('should create a post', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .post('/post')
      .set('Accept', 'application/json')
      .send({"title": "a post", "body": "some body"})
      .expect('Content-Type', /json/)
      .expect(201)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('object');
          result.body.should.have.property('id');
          result.body.should.have.property('title', 'a post');
          result.body.should.have.property('body', 'some body');
          createdPostId = result.body.id;
          done();
        }
      });
  });

  it('should get posts with comments', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .get('/post/getPostsWithComments')
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('array');
          result.body.should.have.length(1);
          done();
        }
      });
  });

  it('should delete post created', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .delete('/post/' + createdPostId)
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          return done(err);
        } else {
          return done(null, result.text);
        }
      });
  });
});
```

After running the tests again:

```
npm test
```

We can see that now we have 4 tests:

```
> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)
```

6) Code coverage

Finally, we want to know whether our code is covered by our unit tests, with the help of Istanbul. To generate a report, we just need to run:

```
istanbul cover _mocha test
```

Once we run it, we will have a result similar to this one:

```
  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 26.95% ( 45/167 )
Branches     : 3.28% ( 4/122 )
Functions    : 35.29% ( 6/17 )
Lines        : 26.95% ( 45/167 )
================================================================================
```

In this case, we can see that the percentages are not very nice. We don't have to worry much about these, since most of the "not covered" code is in /api/policies and /api/responses. You can check the detailed result in a file that was created after istanbul ran, in ./coverage/lcov-report/index.html.
If you remove those folders and run it again, you will see the difference:

```
rm -rf api/policies api/responses
istanbul cover _mocha test
```

Now the result is much better: 100% coverage!

```
  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 100% ( 24/24 )
Branches     : 100% ( 0/0 )
Functions    : 100% ( 4/4 )
Lines        : 100% ( 24/24 )
================================================================================
```

If you check the HTML report again, you will see a different picture. You can get the source code for each of the steps here. I hope you enjoyed the post!

References

- Sails documentation on testing your code
- Recommendations from Sails author Mike McNeil
- Some extra material based on my own experience developing applications using the Sails framework

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, mentor and advisor, independent software engineer, consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

Reactive Programming and the Flux Architecture

Packt
18 Feb 2016
12 min read
Reactive programming, including functional reactive programming as will be discussed later, is a programming paradigm that can be used in multiparadigm languages such as JavaScript, Python, Scala, and many more. It is primarily distinguished from imperative programming, in which a statement does something by means of what the literature on functional and reactive programming calls side effects.

Please note, though, that side effects here are not what they are in common English, where all medications have some effects that are the point of taking the medication, and some other effects that are unwanted but tolerated for the main benefit. For example, Benadryl is taken for the express purpose of reducing symptoms of airborne allergies, and the fact that Benadryl, like some other allergy medicines, can also cause drowsiness is (or at least was; now it is also sold as a sleeping aid) a side effect: unwelcome, but tolerated as the lesser of two evils by people who would rather be somewhat tired and not bothered by allergies than be alert but bothered by frequent sneezing. For a programmer, by contrast, side effects are often the primary intended purpose and effect of a statement, usually implemented through changes in the stored state of a program.

Reactive programming has its roots in the observer pattern, as discussed in Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides's classic book Design Patterns: Elements of Reusable Object-Oriented Software (the authors of this book are commonly called the GoF, or Gang of Four). In the observer pattern, there is an observable subject. It has a list of listeners, and it notifies all of them when it has something to publish. This is somewhat simpler than the publisher/subscriber (PubSub) pattern, which normally adds potentially intricate filtering of which messages reach which subscriber. A minimal sketch of the observer pattern follows.
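The pattern itself is language-agnostic; here is a minimal, purely illustrative sketch in Python (the Subject class and the lambda listeners are hypothetical, not from the book):

```python
class Subject:
    """An observable subject that keeps a list of listeners."""
    def __init__(self):
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)

    def publish(self, message):
        # Notify every registered listener; no filtering is done,
        # which is what keeps this simpler than full PubSub.
        for listener in self._listeners:
            listener(message)


subject = Subject()
subject.subscribe(lambda msg: print("listener 1 got:", msg))
subject.subscribe(lambda msg: print("listener 2 got:", msg))
subject.publish("something to publish")
```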
Reactive programming has developed a life of its own, a bit like the MVC pattern-turned-buzzword, but it is best taken in connection with the broader context explored in GoF. Reactive programming, including the ReactJS framework (which is explored in this title), is intended to avoid shared mutable state and to be idempotent. This means that, as with RESTful web services, you will get the same result from a function whether you call it once or a hundred times. Pete Hunt, formerly of Facebook and perhaps the face of ReactJS as it now exists, has said that he would rather be predictable than right. If there is a bug in his code, Hunt would rather have the interface fail the same way every single time than go on elaborate hunts for heisenbugs. These are bugs that manifest only in some special and slippery edge cases, and they are explored later in this book.

ReactJS is called the V of MVC. That is, it is intended for user interface work and has little intention of offering other standard features. But just as the painter Paul Cézanne said about the impressionist painter Claude Monet, "Monet is only an eye, but what an eye!", we can say about MVC and ReactJS, "ReactJS is only a view, but what a view!"

In this chapter, we will be covering the following topics:

- Declarative programming
- The war on heisenbugs
- The Flux Architecture
- From the pit of despair to the pit of success
- A complete UI teardown and rebuild
- JavaScript as a Domain-specific Language (DSL)
- Big-Coffee Notation

ReactJS, the library explored in this book, was developed by Facebook and made open source in the not-too-distant past. It is shaped by some of Facebook's concerns about making a large-scale site that is safe to debug and work on, while allowing a large number of programmers to work on different components without having to store brain-bending levels of complexity in their heads. The quotation "Simplicity is the lack of interleaving," which can be found in the videos at http://facebook.github.io/react, is not about how much or how little stuff there is on an absolute scale, but about how many moving parts you need to juggle simultaneously to work on a system (see the section on Big-Coffee Notation for further reflections).

Declarative programming

Probably the biggest theoretical advantage of the ReactJS framework is that its programming is declarative rather than imperative. In imperative programming, you specify what steps need to be done; in declarative programming, you specify what needs to be accomplished without saying how it must be done. It may be difficult at first to shift from an imperative paradigm to a declarative paradigm, but once the shift has been made, it is well worth the effort involved to get there.

Familiar examples of declarative paradigms, as opposed to imperative ones, include both SQL and HTML. An SQL query would be much more verbose if you had to specify exactly how to find records and filter them appropriately, let alone say how indices are to be used, and HTML would be much more verbose if, instead of having an IMG tag, you had to specify how to render an image. Many libraries, for instance, are more declarative than rolling your own solution from scratch. With a library, you are more likely to specify only what needs to be done and not, in addition to this, how to do it. ReactJS is not in any sense the only library or framework that is intended to provide a more declarative JavaScript, but this is one of its selling points, along with other specifics it offers to help teams work together and be productive. And again, ReactJS has emerged from some of Facebook's efforts in managing bugs and cognitive load while enabling developers to contribute a lot to a large-scale project. The short comparison below makes the contrast between imperative and declarative concrete.
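As a small, purely illustrative comparison (in Python, with a hypothetical records list): the imperative version spells out the steps, while the declarative version states only the result wanted.

```python
records = [{"name": "Ada", "age": 36}, {"name": "Alan", "age": 41}]

# Imperative: say how to do it, step by step, with mutable state.
adults = []
for record in records:
    if record["age"] >= 18:
        adults.append(record["name"])

# Declarative: say what you want; the iteration is the language's concern.
adults = [record["name"] for record in records if record["age"] >= 18]
```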
The war on heisenbugs

In modern physics, Heisenberg's uncertainty principle loosely says that there is an absolute theoretical limit to how well a particle's position and velocity can be known. Regardless of how good a laboratory's measuring equipment gets, funny things will always happen when you try to pin things down too far. Heisenbugs, loosely speaking, are subtle, slippery bugs that can be very hard to pin down. They manifest only under very specific conditions and may even fail to manifest when one attempts to investigate them (note that this definition is slightly different from the jargon file's narrower and more specific definition at http://www.catb.org/jargon/html/H/heisenbug.html, which specifies that attempting to measure a heisenbug may suppress its manifestation). The motive of declaring war on heisenbugs stems from Facebook's own woes and experiences in working at scale and seeing heisenbugs keep popping up.

One thing that Pete Hunt mentioned, in not a flattering light at all, was a point where Facebook's advertisement system was understood well enough by only two engineers, who were comfortable with modifying it. This is an example of something to avoid. By contrast, Pete Hunt's remark that he would "rather be predictable than right" is a statement that if a defectively designed lamp can catch fire and burn, he would much, much rather have it catch fire and burn immediately, the same way, every single time, than have it burn only at just the wrong phase of the moon. In the first case, the lamp will fail while the manufacturer is testing; the problem will be noticed and addressed, and lamps will not be shipped out to the public until the defect has been properly addressed. The opposite, heisenbug-like case is one where the lamp will spark and catch fire only under just the wrong conditions, which means that the defect will not be caught until the lamps have shipped and started burning customers' homes down. "Predictable" means "fails the same way, every time, if it's going to fail at all." "Right" means "passes testing successfully, but we don't know whether the lamps are safe to use (probably they aren't)." Now, Hunt ultimately does, in fact, care about being right, but the choices that Facebook has made surrounding React stem from a realization that being predictable is a means to being right. It's not acceptable for a manufacturer to ship something that will always spark and catch fire when a consumer plugs it in. However, being predictable moves problems front and center, rather than leaving them as the occasional result of subtle, hard-to-pin-down interactions with unacceptable consequences in some rare circumstances. The choices in Flux and ReactJS are designed to make failures obvious and bring them to the surface, rather than have them manifest only in the nooks and crannies of a software labyrinth.

Facebook's war on the shared mutable state is illustrated by their experience with a chat bug. The chat bug became an overarching concern for its users. One crucial moment of enlightenment for Facebook came when they announced a completely unrelated feature, and the first comment on this feature was a request to fix the chat; it got 898 likes. Also, they commented that this was one of the more polite requests. The problem was that the indicator for unread messages could show a phantom positive message count when there were no messages available. Things came to a point where people seemed not to care about what improvements or new features Facebook was adding, but just wanted them to fix the phantom message count. And Facebook kept investigating and kept addressing edge cases, but the phantom message count kept on recurring.

The solution, besides ReactJS, was found in the Flux pattern, or architecture, which is discussed in the next section. After a situation in which not many people felt comfortable making changes, all of a sudden many more people felt comfortable making changes. These things simplified matters enough that new developers tended not to really need the ramp-up time and treatment that had previously been given. Furthermore, when there was a bug, the more experienced developers could guess with reasonable accuracy what part of the system was the culprit, and the newer developers, after working on a bug, tended to feel confident and have a general sense of how the system worked.

The Flux Architecture

One of the ways in which Facebook, in relation to ReactJS, has declared war on heisenbugs is by declaring war on the mutable state. Flux is an architecture and a pattern, rather than a specific technology, and it can be used (or not used) with ReactJS. It is somewhat like MVC, loosely a competitor to that approach, but it is very different from a simple MVC variant and is designed to have a pit of success that provides unidirectional data flow like this: from the action to the dispatcher, then to the store, and finally to the view (though some people have said that these two are so different that a direct comparison between Flux and MVC, in terms of trying to identify what part of Flux corresponds to what conceptual hook in MVC, is not really that helpful).

Actions are like events; they are fed into a top funnel. Dispatchers go through the funnels and not only pass actions along but also make sure that no additional action is dispatched until the previous one has completely settled out. Stores have similarities to and differences from models. They are like models in that they keep track of state. They are unlike models in that they have only getters, not setters, which stops any part of the program with access to a store from being able to change it. Stores can accept input, but in a very controlled way, and in general a store is not at the mercy of anything possessing a reference to it. A view is what displays the current output based on what is obtained from the stores. A minimal sketch of this data flow follows.
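Flux is a pattern rather than a library, so its shape can be sketched in any language; here is a minimal, purely illustrative sketch in Python (the names Dispatcher, CounterStore, and render_view are hypothetical, not Facebook's API):

```python
class Dispatcher:
    """Feeds every action to every registered store, one at a time."""
    def __init__(self):
        self._stores = []
        self._dispatching = False

    def register(self, store):
        self._stores.append(store)

    def dispatch(self, action):
        # A Flux-style dispatcher refuses to start a new dispatch
        # until the previous action has completely settled.
        assert not self._dispatching, "cannot dispatch mid-dispatch"
        self._dispatching = True
        for store in self._stores:
            store.handle(action)
        self._dispatching = False


class CounterStore:
    """Keeps state; exposes a getter but no setter."""
    def __init__(self):
        self._count = 0

    def handle(self, action):
        if action["type"] == "INCREMENT":
            self._count += 1

    def get_count(self):
        return self._count


def render_view(store):
    # The view only reads from the store.
    print("Unread messages:", store.get_count())


dispatcher = Dispatcher()
store = CounterStore()
dispatcher.register(store)
dispatcher.dispatch({"type": "INCREMENT"})
render_view(store)
```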
This arrangement fosters a kind of data flow that is not at the mercy of anyone who holds a reference to a setter. It is possible for events to be percolated as actions, but the dispatcher acts as a traffic cop and ensures that new actions are processed only after the stores have completely settled. This de-escalates the complexity considerably. Flux simplified interactions so that Facebook developers no longer had subtle edge cases and bugs that kept coming back; the chat bug was finally dead and has not come back.

Summary

We just took a whirlwind tour of some of the theory surrounding reactive programming with ReactJS. This includes declarative programming, one of the selling points of ReactJS, which offers something easier to work with at the end than imperative programming. The war on heisenbugs is an overriding concern surrounding decisions made by Facebook, including ReactJS. This takes place through Facebook's declared war on the shared mutable state. The Flux Architecture is used by Facebook with ReactJS to avoid some nasty classes of bugs.

To learn more about reactive programming and the Flux Architecture, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Reactive Programming with JavaScript (https://www.packtpub.com/application-development/reactive-programming-javascript)
- Clojure Reactive Programming (https://www.packtpub.com/web-development/clojure-reactive-programming)

Further resources on this subject:

- The Observer Pattern [article]
- Concurrency in Practice [article]
- Introduction to Akka [article]

Exception Handling with Python

Packt
17 Aug 2016
10 min read
In this article by Ninad Sathaye, author of the book Learning Python Application Development, you will learn techniques to make an application more robust by handling exceptions. Specifically, we will cover the following topics:

- What are exceptions in Python?
- Controlling the program flow with the try…except clause
- Dealing with common problems by handling exceptions
- Creating and using custom exception classes

Exceptions

Before jumping straight into the code and fixing these issues, let's first understand what an exception is and what we mean by handling an exception.

What is an exception?

An exception is an object in Python. It gives us information about an error detected during program execution. The errors noticed while debugging the application were unhandled exceptions, as we didn't see those coming. Later in the article, you will learn the techniques to handle these exceptions. The ValueError and IndexError exceptions seen in the earlier tracebacks are examples of built-in exception types in Python. In the following section, you will learn about some other built-in exceptions supported in Python.

Most common exceptions

Let's quickly review some of the most frequently encountered exceptions. The easiest way is to try running some buggy code and let it report the problem as an error traceback. Start your Python interpreter and try a few lines of deliberately broken code: indexing past the end of a list raises IndexError, int('abc') raises ValueError, referring to an undefined name raises NameError, and dividing by zero raises ZeroDivisionError. Each such line throws an error traceback that names the exception type. These are a few of the built-in exceptions in Python. A comprehensive list of built-in exceptions can be found in the following documentation: https://docs.python.org/3/library/exceptions.html#bltin-exceptions

Python provides BaseException as the base class for all built-in exceptions. However, most of the built-in exceptions do not directly inherit from BaseException. Instead, they are derived from a class called Exception, which in turn inherits from BaseException. The built-in exceptions that deal with program exit (for example, SystemExit) are derived directly from BaseException. You can also create your own exception class as a subclass of Exception. You will learn about that later in this article.

Exception handling

So far, we saw how exceptions occur. Now, it is time to learn how to use the try…except clause to handle these exceptions. The following pseudocode shows a very simple example of the try…except clause:
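A minimal sketch of that pseudocode, where try_something is a placeholder for any code that might fail:

```python
def try_something():
    # Placeholder for any code that might raise an exception.
    return 10 / 0

try:
    # The program first tries to execute the code inside the try clause.
    result = try_something()
    print("No problem:", result)
except:
    # If anything goes wrong above, execution jumps here.
    print("Something went wrong!")
```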
Let's review the preceding snippet:

- First, the program tries to execute the code inside the try clause. During this execution, if something goes wrong (if an exception occurs), it jumps out of the try clause. The remaining code in the try block is not executed.
- It then looks for an appropriate exception handler in the except clause and executes it.

The except clause used here is a universal one. It will catch all types of exceptions occurring within the try clause. Instead of having this "catch-all" handler, a better practice is to catch the errors that you anticipate and write exception handling code specific to those errors. For example, the code in the try clause might throw an AssertionError. Instead of using the universal except clause, you can write a specific handler, except AssertionError:, which exclusively deals with AssertionError. What this also means is that any error other than an AssertionError will slip through as an unhandled exception. For that, we need to define multiple except clauses with different exception handlers. However, at any point in time, only one exception handler will be called. This can be better explained with an example.

Consider a function, solve_something(), called from inside a try clause. This function accepts a number as user input and makes an assertion that the number is greater than zero. If the assertion fails, it jumps directly to the handler, except AssertionError. In the other scenario, with a > 0, the rest of the code in solve_something() is executed. That code refers to a variable x that is not defined, which results in a NameError. This exception is handled by the other except clause, except NameError. Likewise, you can define specific exception handlers for anticipated errors.

Raising and re-raising an exception

The raise keyword in Python is used to force an exception to occur; put another way, it raises an exception. The syntax is simple; just open the Python interpreter and type:

```python
>>> raise AssertionError("some error message")
```

This produces the following error traceback:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError: some error message
```

In some situations, we need to re-raise an exception. To understand this concept better, here is a trivial scenario. Suppose, in the try clause, you have an expression that divides a number by zero. In ordinary arithmetic, this expression has no meaning. It's a bug! This causes the program to raise an exception called ZeroDivisionError. If there is no exception handling code, the program will just print the error message and terminate. What if you wish to write this error to some log file and then terminate the program? Here, you can use an except clause to log the error first, and then use the raise keyword without any arguments to re-raise the exception. The exception will be propagated upwards in the stack. In this example, it terminates the program. To see this for yourself, write a small try…except block in a new Python file where the try clause solves the expression a/b with the value of b set to 0, and the general except clause logs the error and then calls raise with no arguments; then run the file from a terminal window.

The else block of try…except

There is an optional else block that can be specified in the try…except clause. The else block is executed only if no exception occurs in the try…except clause. It is written after the last except clause and is executed before the finally clause, which we will study next.

finally…clean it up!

There is something else to add to the try…except…else story: an optional finally clause. As the name suggests, the code within this clause is executed at the end of the associated try…except block. Whether or not an exception is raised, the finally clause, if specified, will certainly get executed at the end of the try…except clause. Imagine it as an all-weather guarantee given by Python! The following snippet shows the finally block in action:
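A minimal version, consistent with the output shown next (the file name finally_example1.py, the prompt, and the messages come from that output; the function name get_number is a placeholder):

```python
def get_number():
    try:
        a = int(input("Enter a number: "))
        assert a > 0
        return a
    except AssertionError:
        print("Uh oh..Assertion Error.")
    finally:
        # Runs whether or not the assertion failed.
        print("Do some special cleanup")

get_number()
```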
Running this simple code will produce the following output:

$ python finally_example1.py
Enter a number: -1
Uh oh..Assertion Error.
Do some special cleanup

The last line in the output is the print statement from the finally clause. The code in the finally clause is assured to be executed at the end, even when the except clause instructs the code to return from the function. The finally clause is typically used to perform clean-up tasks before leaving the function. An example use case is to close a database connection or a file. However, note that for this purpose you can also use the with statement in Python.

Writing a new exception class

It is trivial to create a new exception class derived from Exception. Open your Python interpreter and create the following class:

>>> class GameUnitError(Exception):
...     pass
...
>>>

That's all! We have a new exception class, GameUnitError, ready to be deployed. How to test this exception? Just raise it. Type the following line of code in your Python interpreter:

>>> raise GameUnitError("ERROR: some problem with game unit")

Raising the newly created exception will print the following traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
__main__.GameUnitError: ERROR: some problem with game unit

Copy the GameUnitError class into its own module, gameuniterror.py, and save it in the same directory as attackoftheorcs_v1_1.py. Next, update the attackoftheorcs_v1_1.py file to include the following changes:

First, add the following import statement at the beginning of the file:

from gameuniterror import GameUnitError

The second change is in the AbstractGameUnit.heal method: the updated code raises the custom exception whenever the value of self.health_meter exceeds that of self.max_hp. With these two changes, run heal_exception_example.py created earlier, and you will see the new exception being raised.

Expanding the exception class

Can we do something more with the GameUnitError class? Certainly! Just like any other class, we can define attributes and use them. Let's expand this class further. In the modified version, it will accept an additional argument and some predefined error codes.
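One possible version of the expanded class is sketched below. The attribute names error_dict and error_message follow the walkthrough that comes next; the message strings themselves are illustrative assumptions:

class GameUnitError(Exception):
    def __init__(self, message='', code=0):
        # Call the __init__ of the Exception superclass first.
        super().__init__(message)
        # Map error integer codes to error information
        # (illustrative messages).
        self.error_dict = {
            0: "ERROR-000: Unspecified error!",
            101: "ERROR-101: health_meter value exceeds max_hp!",
        }
        try:
            # Make sure error_dict actually has the given code.
            self.error_message = self.error_dict[code]
        except KeyError:
            # Unknown code: fall back to the default error code, 000.
            self.error_message = self.error_dict[0]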
Let's take a look at the code in the preceding sketch:

First, it calls the __init__ method of the Exception superclass and then defines some additional instance variables. A new dictionary object, self.error_dict, holds the error integer code and the error information as key-value pairs. The self.error_message variable stores the information about the current error, depending on the error code provided. The try…except clause ensures that error_dict actually has the key specified by the code argument. If it doesn't, the except clause just retrieves the value for the default error code, 000.

So far, we have made changes to the GameUnitError class and the AbstractGameUnit.heal method. We are not done yet. The last piece of the puzzle is to modify the main program in the heal_exception_example.py file so that the call to the heal method is wrapped in a try clause, with an except clause handling GameUnitError. Let's review what the updated main program does:

As the heal_by value is too large, the heal method in the try clause raises the GameUnitError exception. The new except clause handles the GameUnitError exception just like any other built-in exception. Within the except clause, we have two print statements. The first one prints health_meter > max_hp! (recall that when this exception was raised in the heal method, this string was given as the first argument to the GameUnitError instance). The second print statement retrieves and prints the error_message attribute of the GameUnitError instance.

We have got all the changes in place. We can run this example from a terminal window as:

$ python heal_exception_example.py

In this simple example, we have just printed the error information to the console. You can further write verbose error logs to a file and keep track of all the error messages generated while the application is running.

Summary

This article served as an introduction to the basics of exception handling in Python. We saw how exceptions occur, learned about some common built-in exception classes, and wrote simple code to handle these exceptions using the try…except clause. The article also demonstrated techniques such as raising and re-raising exceptions, and using the finally clause. The later part of the article focused on implementing custom exception classes. We defined a new exception class and used it for raising custom exceptions for our application. With exception handling, the code is in a better shape.

Resources for Article:

Further resources on this subject:
Mining Twitter with Python – Influence and Engagement [article]
Exception Handling in MySQL for Python [article]
Python LDAP applications - extra LDAP operations and the LDAP URL library [article]


Running Firefox OS Simulators with WebIDE

Packt
12 Oct 2015
9 min read
In this article by Tanay Pant, the author of the book Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics:

Getting to know about WebIDE
Installing Firefox OS simulator
Installing and creating new apps with WebIDE
Using developer tools inside WebIDE
Uninstalling applications in Firefox OS

(For more resources related to this topic, see here.)

Introducing WebIDE

It is now time to have a peek at Firefox OS. You can test your applications in two ways, either by running them on a real device or by running them in a Firefox OS Simulator. Let's go ahead with the latter option, since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/.

WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications/manifest using the built-in source editor. After you install Firefox, open WebIDE by navigating to Tools | Web Developer | WebIDE.

You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components, which presents a list of Firefox OS simulators. We will install both the latest stable and the latest unstable versions of Firefox OS, because we will need both to test our applications in the future. After you successfully install both simulators, click on Select Runtime; both OS versions will now be listed.

Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS, and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these.

Installing and creating new apps using WebIDE

To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of hosted apps as websites that are served from a web server and stored online in the server itself, but that can still use appcache and indexeddb to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format; they can be thought of as the source code of the website bundled and distributed in a ZIP file.

Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter a Project Name, and click on OK. After completing this, the WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop.
Now, click on the Open button and finally, click again on the OK button. This will prepare your app and show details such as Title, Icon, Description, Location, and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel.

In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions. It also contains the app.js file, which is the engine of the application and will contain its functionality; index.html, which will contain the markup data for the application; and finally, the manifest.webapp file, which contains crucial information and various permissions for the application. If you click on any filename, you will notice that the file opens in an in-browser editor, where you can edit the files to make changes to your application and save them from here itself.

Let's make some edits in the application, in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and the Firefox Marketplace. The manifest file is in JSON format.
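A minimal manifest.webapp for this app might look like the following sketch (the field values here are illustrative, not the template's exact contents):

{
  "name": "Hello Firefox",
  "description": "A simple Hello Firefox application",
  "launch_path": "/index.html",
  "icons": {
    "128": "/icons/icon128x128.png"
  },
  "developer": {
    "name": "Tanay Pant",
    "url": "https://example.org"
  },
  "default_locale": "en"
}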
I went ahead and edited the developer information in the manifest as well, to include my name and my website. After saving all the files, you will notice that the information of the app in the WebIDE has changed!

It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE (hovering over it shows the prompt Install and Run). Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application!

Using developer tools inside WebIDE

WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you used to get the app installed and running. The icon says Debug App on hovering the cursor over it. Click on this to reveal the developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input in console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox.

You can also send a command via the console, such as alert('Hello Firefox');, and it simultaneously executes the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get a hang of the debugging capabilities if you haven't done so already!

One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps, due to Content Security Policy (CSP) restrictions. CSP restrictions include remote scripts, inline scripts, javascript: URIs, the Function constructor, dynamic code execution, and plugins such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and the eval() operator are not allowed for security reasons; they show a 400 error and security errors, respectively, upon usage. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted.

Browsing other runtime applications

You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS, or Gaia, to be precise. You can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps. I clicked on the Camera application, and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, then you can refer to Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application appears in the App Launcher of the operating system alongside these apps.

Uninstalling applications in Firefox OS

You can remove the project from WebIDE by clicking on the Remove Project button on the home page of the application. However, this will not uninstall the application from the Firefox OS Simulator. The uninstallation procedure in the operating system is quite similar to that of iOS: you just have to tap and hold on the home screen to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This takes you back to the Edit screen, where you can click on Done to get back to the home screen.

Summary

In this article, you learned about WebIDE, how to install the Firefox OS simulator in WebIDE, using Firefox OS and installing applications in it, and creating a skeleton application using WebIDE. You then learned how to use developer tools for applications that run in the simulator, and how to browse other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system.

Resources for Article:

Further resources on this subject:
Learning Node.js for Mobile Application Development [Article]
Introducing Web Application Development in Rails [Article]
One-page Application Development [Article]

Introduction to Scala

Packt
01 Nov 2016
8 min read
In this article by Diego Pacheco, the author of the book Building Applications with Scala, we will cover the following topics:

Writing a Scala Hello World program using the REPL
Scala language – the basics
Scala variables – var and val
Creating immutable variables

(For more resources related to this topic, see here.)

Scala Hello World using the REPL

Let's get started. Go ahead, open your terminal, and type $ scala in order to open the Scala REPL. Once the REPL is open, you can just type "Hello World". By doing this, you are performing two operations, eval and print: the Scala REPL will create a variable called res0, store your string there, and then print the content of the res0 variable.

Scala REPL Hello World program:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> "Hello World"
res0: String = Hello World

scala>

Scala is a hybrid language, which means it is both object-oriented (OO) and functional. You can create classes and objects in Scala. Next, we will create a complete Hello World application using classes.

Scala OO Hello World program:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> object HelloWorld {
     |   def main(args:Array[String]) = println("Hello World")
     | }
defined object HelloWorld

scala> HelloWorld.main(null)
Hello World

scala>

First things first: you need to realize that we use the word object instead of class. The Scala language has different constructs compared with Java. An object is a Singleton in Scala; it's the same as coding the Singleton pattern in Java. Next, we see the keyword def, which is used in Scala to create functions. In this program, we create the main function just as we do in Java, and we call the built-in function println in order to print the String Hello World. Scala imports some Java objects and packages by default. Coding in Scala does not require you to type, for instance, System.out.println("Hello World"), but you can if you want to, as shown in the following:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> System.out.println("Hello World")
Hello World

scala>

We can and we will do better. Scala has some abstractions for a console application, so we can write this code with fewer lines. To accomplish this goal, we need to extend the Scala class App. When we extend from App, we are performing inheritance, and we don't need to define the main function. We can just put all the code in the body of the class, which is very convenient, and which makes the code clean and simple to read.

Scala HelloWorld App in the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> object HelloWorld extends App {
     |   println("Hello World")
     | }
defined object HelloWorld

scala> HelloWorld
object HelloWorld

scala> HelloWorld.main(null)
Hello World

scala>

After coding the HelloWorld object in the Scala REPL, we can ask the REPL what HelloWorld is and, as you might realize, the REPL answers that HelloWorld is an object. This is a very convenient Scala way to code console applications, because we can have a Hello World application with just three lines of code. Sadly, the same program in Java requires way more code, as you will see in the next section.
Java is a great language for performance, but it is a verbose language compared with Scala.

Java Hello World application:

package scalabook.javacode.chap1;

public class HelloWorld {
    public static void main(String args[]){
        System.out.println("Hello World");
    }
}

The Java application required six lines of code, while in Scala we were able to do the same with 50% less code (three lines of code). This is a very simple application; when we are coding complex applications, the difference gets bigger, as a Scala application ends up with far less code than its Java counterpart. Remember that we use an object in Scala in order to have a Singleton (a design pattern that makes sure you have just one instance of a class). If we want to do the same in Java, the code would be something like this:

package scalabook.javacode.chap1;

public class HelloWorldSingleton {

    private HelloWorldSingleton(){}

    private static class SingletonHelper{
        private static final HelloWorldSingleton INSTANCE =
            new HelloWorldSingleton();
    }

    public static HelloWorldSingleton getInstance(){
        return SingletonHelper.INSTANCE;
    }

    public void sayHello(){
        System.out.println("Hello World");
    }

    public static void main(String[] args) {
        getInstance().sayHello();
    }
}

It's not just about the size of the code; it is all about consistency and the language providing more abstractions for you. If you write less code, you will have fewer bugs in your software at the end of the day.

Scala language – the basics

Scala is a statically typed language with a very expressive type system, which enforces abstractions in a safe yet coherent manner. All values in Scala are Java objects (primitives are unboxed at runtime) because, at the end of the day, Scala runs on the Java JVM. Scala enforces immutability as a core functional programming principle. This enforcement happens in multiple aspects of the Scala language: for instance, when you create a variable, you do it in an immutable way, and when you use a collection, you use an immutable collection. Scala also lets you use mutable variables and mutable structures, but it favors immutable ones by design.

Scala variables – var and val

When you are coding in Scala, you create variables using either the var operator or the val operator. The var operator allows you to create mutable state, which is fine as long as you make it local, stick to the core functional programming principles, and avoid mutable shared state.

Using var in the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> var x = 10
x: Int = 10

scala> x
res0: Int = 10

scala> x = 11
x: Int = 11

scala> x
res1: Int = 11

scala>

However, Scala has a more interesting construct called val. Using the val operator makes your variables immutable, which means you can't change their values after you set them. If you try to change the value of a val variable in Scala, the compiler will give you an error. As a Scala developer, you should use val as much as possible, because that's a good functional programming mindset and it will make your programs better and more correct.
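The same preference for immutability extends to collections. The following quick sketch (not one of the book's listings) shows how an "update" on an immutable List really returns a new List:

scala> val numbers = List(1, 2, 3)
numbers: List[Int] = List(1, 2, 3)

scala> val more = numbers :+ 4   // returns a new List
more: List[Int] = List(1, 2, 3, 4)

scala> numbers                   // the original is untouched
res0: List[Int] = List(1, 2, 3)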
In Scala, everything is an object; there are no primitives – the var and val rules apply for everything, be it Int, String, or even a class.

Using val in the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x = 10
x: Int = 10

scala> x
res0: Int = 10

scala> x = 11
<console>:12: error: reassignment to val
       x = 11
         ^

scala> x
res1: Int = 10

scala>

Creating immutable variables

Right. Now let's see how we can define the most common types in Scala, such as Int, Double, Boolean, and String. Remember that you can create these variables using val or var, depending on your requirement.

Scala variable types at the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x = 10
x: Int = 10

scala> val y = 11.1
y: Double = 11.1

scala> val b = true
b: Boolean = true

scala> val f = false
f: Boolean = false

scala> val s = "A Simple String"
s: String = A Simple String

scala>

For these variables, we did not define the type; the Scala language figures it out for us. However, it is possible to specify the type if you want. In Scala, the type comes after the name of the variable, as shown in the following section.

Scala variables with explicit typing at the Scala REPL:

$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x:Int = 10
x: Int = 10

scala> val y:Double = 11.1
y: Double = 11.1

scala> val s:String = "My String "
s: String = "My String "

scala> val b:Boolean = true
b: Boolean = true

scala>

Summary

In this article, we learned about some basic constructs and concepts of the Scala language, including functions, collections, and OO in Scala.

Resources for Article:

Further resources on this subject:
Making History with Event Sourcing [article]
Creating Your First Plug-in [article]
Content-based recommendation [article]


Deploying HTML5 Applications with GNOME

Packt
28 May 2013
10 min read
(For more resources related to this topic, see here.)

Before we start

Most of the discussions in this article require a moderate knowledge of HTML5, JSON, and common client-side JavaScript programming. One particular exercise uses jQuery and jQuery Mobile to show how a real HTML5 application will be implemented.

Embedding WebKit

What we need to learn first is how to embed a WebKit layout engine inside our GTK+ application. Embedding WebKit means we can use HTML and CSS as our user interface instead of GTK+ or Clutter.

Time for action – embedding WebKit

With WebKitGTK+, this is a very easy task to do; just follow these steps:

Create an empty Vala project without GtkBuilder and no license. Name it hello-webkit.

Modify configure.ac to include WebKitGTK+ in the project. Find the following line of code in the file:

PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0])

Remove the previous line and replace it with the following one:

PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0 webkitgtk-3.0])

Modify Makefile.am inside the src folder to include WebKitGTK+ in the Vala compilation pipeline. Find the following line of code in the file:

hello_webkit_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following line:

hello_webkit_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4

Fill the hello_webkit.vala file inside the src folder with the following lines:

using GLib;
using Gtk;
using WebKit;

public class Main : WebView
{
    public Main ()
    {
        load_html_string("<h1>Hello</h1>", "/");
    }

    static int main (string[] args)
    {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window();
        window.add(webView);
        window.show_all ();
        Gtk.main ();
        return 0;
    }
}

Copy the accompanying webkit-1.0.vapi file into the src folder. We need to do this, unfortunately, because the webkit-1.0.vapi file distributed with many distributions is still using GTK+ Version 2.

Run it; you will see a window with the message Hello.

What just happened?

What we need to do first is include WebKit in our namespace, so we can use all the functions and classes from it:

using WebKit;

Our class is derived from the WebView widget. It is an important widget in WebKit, capable of showing a web page. Showing it means not only parsing and displaying the DOM properly, but also running the scripts and handling the styles referred to by the document. The derivation declaration is put in the class declaration, as shown next:

public class Main : WebView

In our constructor, we only load a string and parse it as an HTML document. The string is Hello, styled with a level 1 heading. After the execution of the following lines, WebKit will parse and display the presentation of the HTML5 code inside its body:

public Main ()
{
    load_html_string("<h1>Hello</h1>", "/");
}

In our main function, what we need to do is create a window to put our WebView widget into. After adding the widget, we need to call the show_all() function in order to display both the window and the widget:

static int main (string[] args)
{
    Gtk.init (ref args);
    var webView = new Main ();
    var window = new Gtk.Window();
    window.add(webView);

The window content now only has a WebView widget as its sole displaying widget. At this point, we no longer use GTK+ to show our UI; it is all written in HTML5.

Runtime with JavaScriptCore

An HTML5 application is, most of the time, accompanied by client-side scripts written in JavaScript and a set of styling definitions written in CSS3.
WebKit already provides the feature of running client-side JavaScript (running the script inside the web page) with a component called JavaScriptCore, so we don't need to worry about that. But how about the connection with the GNOME platform? How do we make the client-side script access GNOME objects?

One approach is to expose our objects, which are written in Vala, so that they can be used by the client-side JavaScript. This is where we will utilize JavaScriptCore. We can think of this as a frontend and backend architecture pattern. All of the business process code which touches GNOME resides in the backend; it is written in Vala and run by the main process. On the opposite side, the frontend code is written in JavaScript and HTML5, and is run and displayed by WebKit internally. The frontend is what the user sees, while the backend is what goes on behind the scenes.

The frontend creates an object and calls a function in the created object. The object is not defined on the client side, but is actually created at the backend. We ask JavaScriptCore to act as a bridge, so that the object created at the backend becomes accessible to the frontend code. To do this, we wrap the backend objects with JavaScriptCore class and function definitions. For each object we want to make available to the frontend, we need to create a mapping on the JavaScriptCore side: first the class itself (say, MyClass), then each member we expose, such as a helloFromVala function, an intFromVala value, and so on.

Time for action – calling the Vala object from the frontend

Now let's try to create simple client-side JavaScript code and call an object defined at the backend:

Create an empty Vala project, without GtkBuilder and no license. Name it hello-jscore.

Modify configure.ac to include WebKitGTK+, exactly like in our previous experiment.

Modify Makefile.am inside the src folder to include WebKitGTK+ and JSCore in the Vala compilation pipeline. Find the following line of code in the file:

hello_jscore_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following line:

hello_jscore_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4 --pkg javascriptcore
Fill the hello_jscore.vala file inside the src folder with the following lines of code:

using GLib;
using Gtk;
using WebKit;
using JSCore;

public class Main : WebView
{
    public Main ()
    {
        load_html_string("<h1>Hello</h1>" +
            "<script>alert(HelloJSCore.hello())</script>", "/");
        window_object_cleared.connect ((frame, context) => {
            setup_js_class ((JSCore.GlobalContext) context);
        });
    }

    public static JSCore.Value helloFromVala (Context ctx,
        JSCore.Object function,
        JSCore.Object thisObject,
        JSCore.Value[] arguments,
        out JSCore.Value exception) {

        exception = null;
        var text = new String.with_utf8_c_string ("Hello from JSCore");
        return new JSCore.Value.string (ctx, text);
    }

    static const JSCore.StaticFunction[] js_funcs = {
        { "hello", helloFromVala, PropertyAttribute.ReadOnly },
        { null, null, 0 }
    };

    static const ClassDefinition js_class = {
        0,                    // version
        ClassAttribute.None,  // attribute
        "HelloJSCore",        // className
        null,                 // parentClass
        null,                 // static values
        js_funcs,             // static functions
        null,                 // initialize
        null,                 // finalize
        null,                 // hasProperty
        null,                 // getProperty
        null,                 // setProperty
        null,                 // deleteProperty
        null,                 // getPropertyNames
        null,                 // callAsFunction
        null,                 // callAsConstructor
        null,                 // hasInstance
        null                  // convertToType
    };

    void setup_js_class (GlobalContext context) {
        var theClass = new Class (js_class);
        var theObject = new JSCore.Object (context, theClass, context);
        var theGlobal = context.get_global_object ();
        var id = new String.with_utf8_c_string ("HelloJSCore");
        theGlobal.set_property (context, id, theObject,
            PropertyAttribute.None, null);
    }

    static int main (string[] args)
    {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window();
        window.add(webView);
        window.show_all ();
        Gtk.main ();
        return 0;
    }
}

Copy the accompanying webkit-1.0.vapi and javascriptcore.vapi files into the src folder. The javascriptcore.vapi file is needed because some distributions do not have this .vapi file in their repositories.

Run the application.

What just happened?

The first thing we do is include the WebKit and JavaScriptCore namespaces. Note, in the following code snippet, that the JavaScriptCore namespace is abbreviated as JSCore:

using WebKit;
using JSCore;

In the Main function, we load HTML content into the WebView widget. We display a level 1 heading and then call the alert function. The alert function displays a string returned by the hello function inside the HelloJSCore class, as shown in the following code:

public Main ()
{
    load_html_string("<h1>Hello</h1>" +
        "<script>alert(HelloJSCore.hello())</script>", "/");

In the preceding code snippet, we can see that the client-side JavaScript code is as follows:

alert(HelloJSCore.hello())

We can also see that we call the hello function from the HelloJSCore class as a static function, meaning we don't instantiate a HelloJSCore object before calling hello. In WebView, we initialize the class defined in Vala when we get the window_object_cleared signal. This signal is emitted whenever a page is cleared. The initialization is done in setup_js_class, and this is also where we pass in the JSCore global context. The global context is where JSCore keeps the global variables and functions; it is accessible by all code:

window_object_cleared.connect ((frame, context) => {
    setup_js_class ((JSCore.GlobalContext) context);
});

The following snippet of code contains the function which we want to expose to the client-side JavaScript.
The function just returns a Hello from JSCore string message:

public static JSCore.Value helloFromVala (Context ctx,
    JSCore.Object function,
    JSCore.Object thisObject,
    JSCore.Value[] arguments,
    out JSCore.Value exception) {

    exception = null;
    var text = new String.with_utf8_c_string ("Hello from JSCore");
    return new JSCore.Value.string (ctx, text);
}

Then we need to put in the boilerplate code that exposes the function and other members of the class. The first part of this code is the static function index. This is the mapping between the exposed function and the name of the function defined in the wrapper. In the following example, we map the hello function, which can be used on the client side, to the helloFromVala function defined in the code. The index is then ended with null to mark the end of the array:

static const JSCore.StaticFunction[] js_funcs = {
    { "hello", helloFromVala, PropertyAttribute.ReadOnly },
    { null, null, 0 }
};

The next part of the code is the class definition. It is a structure that we have to fill so that JSCore knows about the class. All of the fields are filled with null, except for those we want to make use of. In this example, we use the static functions field for the hello function, so we fill it with js_funcs, which we defined in the preceding code snippet:

static const ClassDefinition js_class = {
    0,                    // version
    ClassAttribute.None,  // attribute
    "HelloJSCore",        // className
    null,                 // parentClass
    null,                 // static values
    js_funcs,             // static functions
    null,                 // initialize
    null,                 // finalize
    null,                 // hasProperty
    null,                 // getProperty
    null,                 // setProperty
    null,                 // deleteProperty
    null,                 // getPropertyNames
    null,                 // callAsFunction
    null,                 // callAsConstructor
    null,                 // hasInstance
    null                  // convertToType
};

After that, in the setup_js_class function, we set up the class to be made available in the JSCore global context. First, we create a JSCore.Class with the class definition structure we filled previously. Then, we create an object of the class in the global context. Last but not least, we assign the object a string identifier, which is HelloJSCore. After executing the following code, we are able to refer to HelloJSCore on the client side:

void setup_js_class (GlobalContext context) {
    var theClass = new Class (js_class);
    var theObject = new JSCore.Object (context, theClass, context);
    var theGlobal = context.get_global_object ();
    var id = new String.with_utf8_c_string ("HelloJSCore");
    theGlobal.set_property (context, id, theObject,
        PropertyAttribute.None, null);
}


.NET 4.5 Parallel Extensions – Async

Packt
07 Aug 2013
13 min read
(For more resources related to this topic, see here.)

Creating an async method

The TAP is a new pattern for asynchronous programming in .NET Framework 4.5. It is based on a task, but in this case a task doesn't represent work which will be performed on another thread; here, a task is used to represent arbitrary asynchronous operations.

Let's start learning how async and await work by creating a Windows Presentation Foundation (WPF) application that accesses the web using HttpClient. This kind of network access is ideal for seeing TAP in action. The application will get the contents of a classic book from the web, and will provide a count of the number of words in the book.

How to do it…

Let's go to Visual Studio 2012 and see how to use the async and await keywords to maintain a responsive UI by doing the web communications asynchronously.

Start a new project using the WPF Application project template and assign WordCountAsync as the Solution name.

Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing a Button and a TextBlock:

<Window x:Class="WordCountAsync.MainWindow"
        Title="WordCountAsync" Height="350" Width="525">
    <Grid>
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left"
                Margin="219,195,0,0" VerticalAlignment="Top" Width="75"
                RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <TextBlock x:Name="TextResults" HorizontalAlignment="Left"
                   Margin="60,28,0,0" TextWrapping="Wrap"
                   VerticalAlignment="Top" Height="139" Width="411"/>
    </Grid>
</Window>

Next, open up MainWindow.xaml.cs. Go to the project and add a reference to System.Net.Http. Add the following using directives to the top of your MainWindow class:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;

At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };

Add a button click event for the StartButton and add the async modifier to the method signature to indicate that this will be an async method. Please note that async methods that return void are normally only used for event handlers, and should be avoided elsewhere:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
}

Next, let's create an async method called GetWordCountAsync that returns Task<int>. This method will create an HttpClient and call its GetStringAsync method to download the book contents as a string. It will then use the Split method to split the string into a wordArray. We can return the count of the wordArray as our return value:

public async Task<int> GetWordCountAsync()
{
    TextResults.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/2009.txt");
    var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
    return wordArray.Count();
}

Finally, let's complete the implementation of our button click event. The Click event handler will just call GetWordCountAsync with the await keyword and display the results in the TextBlock:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    var result = await GetWordCountAsync();
    TextResults.Text += String.Format("Origin of Species word count: {0}", result);
}

In Visual Studio 2012, press F5 to run the project.
Click on the Start button, and the application will fetch the book and display the word count.

How it works…

In the TAP, asynchronous methods are marked with an async modifier. The async modifier on a method does not mean that the method will be scheduled to run asynchronously on a worker thread. It means that the method contains control flow that involves waiting for the result of an asynchronous operation, and will be rewritten by the compiler to ensure that the asynchronous operation can resume this method at the right spot.

Let me try to put this a little more simply. When you add the async modifier to a method, it indicates that the method will wait on asynchronous code to complete. This is done with the await keyword. The compiler actually takes the code that follows the await keyword in an async method and turns it into a continuation that will run after the result of the async operation is available. In the meantime, the method is suspended, and control returns to the method's caller.

If you add the async modifier to a method, and then don't await anything, it won't cause an error. The method will simply run synchronously.

An async method can have one of three return types: void, Task, or Task<TResult>. As mentioned before, a task in this context doesn't mean that this is something that will execute on a separate thread. In this case, the task is just a container for the asynchronous work, and in the case of Task<TResult>, it is a promise that a result value of type TResult will show up after the asynchronous operation completes.

In our application, we use the async keyword to mark the button click event handler as asynchronous, and then we wait for the GetWordCountAsync method to complete by using the await keyword:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    StartButton.IsEnabled = false;
    var result = await GetWordCountAsync();
    TextResults.Text += String.Format("Origin of Species word count: {0}", result);
    StartButton.IsEnabled = true;
}

The code that follows the await keyword, in this case the same line of code that updates the TextBlock, is turned by the compiler into a continuation that will run after the integer result is available. If the Click event is fired again while this asynchronous task is in progress, another asynchronous task is created and awaited. To prevent this, it is common practice to disable the button that was clicked, as shown in the preceding snippet. It is a convention to name an asynchronous method with an Async postfix, as we have done with GetWordCountAsync.

Handling Exceptions in asynchronous code

So how would you add exception handling to code that is executed asynchronously? In previous asynchronous patterns, this was very difficult to achieve. In C# 5.0, it is much more straightforward, because you just have to wrap the asynchronous function call with a standard try/catch block. On the surface this sounds easy, and it is, but there is more going on behind the scenes, which will be explained right after we build our next example application.

For this recipe, we will return to our classic books word count scenario, and we will be handling an Exception thrown by HttpClient when it tries to get the book contents using an incorrect URL.

How to do it…

Let's build another WPF application and take a look at how to handle exceptions when something goes wrong in one of our asynchronous methods.

Start a new project using the WPF Application project template and assign AsyncExceptions as the Solution name.
Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing a Button and a TextBlock:

<Window x:Class="AsyncExceptions.MainWindow"
        Title="AsyncExceptions" Height="350" Width="525">
    <Grid>
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left"
                Margin="219,195,0,0" VerticalAlignment="Top" Width="75"
                RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <TextBlock x:Name="ResultsTextBlock" HorizontalAlignment="Left"
                   Margin="60,28,0,0" TextWrapping="Wrap"
                   VerticalAlignment="Top" Height="139" Width="411"/>
    </Grid>
</Window>

Next, open up MainWindow.xaml.cs. Go to the Project Explorer, right-click on References, click on Framework in the menu on the left side of the Reference Manager, and then add a reference to System.Net.Http. Add the following using directives to the top of your MainWindow class:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;

At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };

Now let's create our GetWordCountAsync method. This method will be very similar to the one in the last recipe, but it will be trying to access the book at an incorrect URL. The asynchronous code will be wrapped in a try/catch block to handle the Exception. We will also use a finally block to dispose of the HttpClient:

public async Task<int> GetWordCountAsync()
{
    ResultsTextBlock.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    try
    {
        var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/No_Book_Here.txt");
        var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
        return wordArray.Count();
    }
    catch (Exception ex)
    {
        ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message);
        return 0;
    }
    finally
    {
        client.Dispose();
    }
}

Finally, let's create the Click event handler for our StartButton. This is pretty much the same as the last recipe, just wrapped in a try/catch block. Don't forget to add the async modifier to the method signature:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    try
    {
        var result = await GetWordCountAsync();
        ResultsTextBlock.Text += String.Format("Origin of Species word count: {0}", result);
    }
    catch (Exception ex)
    {
        ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message);
    }
}

Now, in Visual Studio 2012, press F5 to run the project and click on the Start button; the attempt to download the book will fail and the error message will be displayed.

How it works…

Wrapping your asynchronous code in a try/catch block is pretty easy. In fact, it hides some of the complex work Visual Studio 2012 is doing for us. To understand this, you need to think about the context in which your code is running.

When the TAP is used in Windows Forms or WPF applications, there's already a context that the code is running in, such as the message loop UI thread. When async calls are made in those applications, the awaited code goes off to do its work asynchronously and the async method exits back to its caller. In other words, the program execution returns to the message loop UI thread.

The Console applications don't have the concept of a context. When the code hits an awaited call inside the try block, it will exit back to its caller, which in this case is Main. If there is no more code after the awaited call, the application ends without the async method ever finishing.
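A tiny console sketch (not part of the recipe) makes the problem visible:

using System;
using System.Threading.Tasks;

class Program
{
    static async Task DoWorkAsync()
    {
        await Task.Delay(1000);         // control returns to Main here
        Console.WriteLine("Finished");  // the continuation
    }

    static void Main(string[] args)
    {
        DoWorkAsync();  // not awaited: Main exits immediately, so
                        // "Finished" is typically never printed
    }
}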
To alleviate this issue, Microsoft included an async-compatible context with the TAP that is used for Console apps or unit test apps to prevent this inconsistent behavior. This new context is called GeneralThreadAffineContext. Do you really need to understand these context issues to handle async exceptions? No, not really. That's part of the beauty of the Task-based Asynchronous Pattern.

Cancelling an asynchronous operation

In .NET 4.5, asynchronous operations can be cancelled in the same way that parallel tasks can be cancelled: by passing in a CancellationToken and calling the Cancel method on the CancellationTokenSource. In this recipe, we are going to create a WPF application that gets the contents of a classic book over the web and performs a word count. This time, though, we are going to set up a Cancel button that we can use to cancel the async operation if we don't want to wait for it to finish.

How to do it…

Let's create a WPF application to show how we can add cancellation to our asynchronous methods.

Start a new project using the WPF Application project template and assign AsyncCancellation as the Solution name.

Begin by opening MainWindow.xaml and adding the following XAML to create our user interface. In this case, the UI contains a TextBlock, a StartButton, and a CancelButton:

<Window x:Class="AsyncCancellation.MainWindow"
        Title="AsyncCancellation" Height="400" Width="599">
    <Grid Width="600" Height="400">
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left"
                Margin="142,183,0,0" VerticalAlignment="Top" Width="75"
                RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <Button x:Name="CancelButton" Content="Cancel" HorizontalAlignment="Left"
                Margin="379,185,0,0" VerticalAlignment="Top" Width="75"
                Click="CancelButton_Click"/>
        <TextBlock x:Name="TextResult" HorizontalAlignment="Left"
                   Margin="27,24,0,0" TextWrapping="Wrap"
                   VerticalAlignment="Top" Height="135" Width="540"/>
    </Grid>
</Window>

Next, open up MainWindow.xaml.cs, go to the Project Explorer, and add a reference to System.Net.Http. Add the following using directives to the top of your MainWindow class (note that System.Threading is needed for the cancellation types):

using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Windows;

At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array, and a field to hold our CancellationTokenSource:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };
CancellationTokenSource cts;

Next, let's create the GetWordCountAsync method. This method is very similar to the one explained before. It needs to be marked as asynchronous with the async modifier, and it returns Task<int>. This time, however, the method takes a CancellationToken parameter. We also need to use the GetAsync method of HttpClient instead of the GetStringAsync method, because the former supports cancellation, whereas the latter does not. We will add a small delay in the method so we have time to cancel the operation before the download completes.
public async Task<int> GetWordCountAsync(CancellationToken ct)
{
    TextResult.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    await Task.Delay(500);
    try
    {
        HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct);
        var words = await response.Content.ReadAsStringAsync();
        var wordArray = words.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
        return wordArray.Count();
    }
    finally
    {
        client.Dispose();
    }
}

Now, let's create the Click event handler for our CancelButton. This method just needs to check whether the CancellationTokenSource is null, and if not, it calls the Cancel method:

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
    }
}

Ok, let's finish up by adding a Click event handler for StartButton. This method is much the same as before, except that it creates the CancellationTokenSource and has a catch block that specifically handles OperationCanceledException. Don't forget to mark the method with the async modifier:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    try
    {
        var result = await GetWordCountAsync(cts.Token);
        TextResult.Text += String.Format("Origin of Species word count: {0}\n", result);
    }
    catch (OperationCanceledException)
    {
        TextResult.Text += "The operation was cancelled.\n";
    }
}

In Visual Studio 2012, press F5 to run the project. Click on the Start button, then the Cancel button, and the cancellation message will be displayed.

How it works…

Cancellation is an aspect of user interaction that you need to consider to build a professional async application. In this example, we implemented cancellation by using a Cancel button, which is one of the most common ways to surface cancellation functionality in a GUI application.

In this recipe, cancellation follows a very common flow. The caller (the Start button click event handler) creates a CancellationTokenSource object:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    ...
}

The caller calls a cancelable method, and passes the CancellationToken from the CancellationTokenSource (CancellationTokenSource.Token):

public async Task<int> GetWordCountAsync(CancellationToken ct)
{
    ...
    HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct);
    ...
}

The Cancel button click event handler requests cancellation using the CancellationTokenSource object (CancellationTokenSource.Cancel()):

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
    }
}

The task acknowledges the cancellation by throwing OperationCanceledException, which we handle in a catch block in the Start button click event handler.


Authorizations in SAP HANA

Packt
16 Jul 2013
28 min read
(For more resources related to this topic, see here.)

Roles

In SAP HANA, as in most of SAP's software, authorizations are grouped into roles. A role is a collection of authorization objects, with their associated privileges. It allows us, as developers, to define self-contained units of authorization. In the same way that at the start of this book we created an attribute view allowing us to have a coherent view of our customer data which we could reuse at will in more advanced developments, authorization roles allow us to create coherent units of authorization data which we can then assign to users at will, making sure that users who are supposed to have the same rights always have the same rights.

If we had to assign individual authorization objects to users, we could be fairly sure that sooner or later we would forget someone in a department, and they would not be able to access the data they needed to do their everyday work. Worse, we might not give quite the same authorizations to one person, and have to spend valuable time correcting our error when they couldn't see the data they needed (or worse, more dangerous and less obvious to us as developers, when the user could see more data than was intended). It is always a much better idea to group authorizations into a role and then assign the role to users than to assign authorizations directly to users. Assigning a role to a user means that when the user changes jobs and needs a new set of privileges, we can just remove the first role and assign a second one. Since we're just starting out using authorizations in SAP HANA, let's get into this good habit right from the start. It really will make our lives easier later on.

Creating a role

Role creation is done, like all other SAP HANA development, in the Studio. If your Studio is currently closed, please open it, and then select the Modeler perspective.

In order to create roles, privileges, and users, you will yourself need privileges. Your SAP HANA user will need the ROLE ADMIN, USER ADMIN, and CREATE STRUCTURED PRIVILEGE system privileges in order to do the development work in this article.

In the Navigator panel, you will see a Security folder. Please find and expand this folder. You will see a subfolder called Roles. Right-click on the Roles folder and select New Role to start creating a role. On the screen which will open, you will see a number of tabs representing the different authorization objects we can create. We'll be looking at each of these in turn in the following sections, so for the moment just give your role a Name (BOOKUSER might be appropriate, if not very original).

Granted roles

Like many other object types in SAP HANA, once you have created a role, you can then use it inside another role. This onion-like arrangement makes authorizations a lot easier to manage. If we had, for example, a company with two teams:

Sales
Purchasing

And two countries, say:

France
Germany

We could create a role giving access to sales analytic views, one giving purchasing analytic views, one giving access to data for France, and one giving access to data for Germany. We could then create new roles, say Sales-France, which don't actually contain any authorization objects themselves, but contain only the Sales and the France roles. The role definition is much simpler to understand and to maintain than if we had directly created a Sales-France role and a Sales-Germany role with all the underlying objects.
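The Studio does all of this through the UI, but it can help to see the idea as code. In plain SQL, the nested arrangement would look roughly like the following sketch (role names taken from the example above; the statements are an illustration, not a full recipe):

CREATE ROLE SALES;
CREATE ROLE FRANCE;
CREATE ROLE SALES_FRANCE;
-- SALES_FRANCE holds no privileges of its own,
-- only the two subroles:
GRANT SALES TO SALES_FRANCE;
GRANT FRANCE TO SALES_FRANCE;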
Once again, as with other development objects, creating small self-contained roles and reusing them where possible will make your (maintenance) life easier. In the Granted Roles tab we can see the list of subroles this main role contains. Note that this list is only a pointer; you cannot modify the authorizations of the roles listed here — you would need to open the individual role and make changes there.
Part of roles
The Part of Roles tab in the role definition screen is exactly the opposite of the Granted Roles tab. This tab lists all other roles of which this role is a subrole. It is very useful for tracking authorizations, especially when you find yourself in a situation where a user seems to have too many authorizations and can see data they shouldn't be able to see. You cannot manipulate this list as such; it exists for information only. If you want to make changes, you need to modify the main role of which this role is a subrole.
SQL privileges
An SQL privilege is the lowest level at which we can define restrictions on the use of database objects. SQL privileges apply to the simplest objects in the database, such as schemas and tables. No attribute, analytic, or calculation view is handled by SQL privileges.
This is not strictly true, though you can consider it so. What we have seen as an analytic view, for example — the graphical definition, the drag and drop, the checkboxes — is transformed into a real database object in the _SYS_BIC schema upon activation. We could therefore define SQL privileges on this database object if we wanted, but this is not recommended and indeed limits the control we can have over the view. We'll see a little later that SAP HANA has much finer-grained authorizations for views than this.
An important thing to note about SQL privileges is that they apply to the object on which they are defined. They restrict access to the object itself, but do not at any point have any impact on the object's contents. For example, we can decide that one of our users can have access to the CUSTOMER table, but we couldn't restrict their access to only those CUSTOMER rows where the COUNTRY is USA. SQL privileges can control access to any object under the Catalog node in the Navigator panel.
Let's add some authorizations for our BOOK schema and its contents. At the top of the SQL Privileges tab is a green plus sign button. Click on this button to get the Select Catalog Object dialog, shown here:
As you can see in the screenshot, we have entered the two letters bo into the filter box at the top of the dialog. As soon as you enter at least two letters into this box, the Studio will attempt to find and list all database objects whose names contain the letters you typed. If you continue to type, the search will be refined further. The first item in the list shown is the BOOK schema we created right back at the start of the book, in Chapter 2, SAP HANA Studio - Installation and First Look. Please select the BOOK item, and then click on OK to add it to our new role.
The first thing to notice is the warning icon on the SQL Privileges tab itself. This means that your role definition is incomplete, and the role cannot be activated and used yet. On the right of the screen, a list of checkbox options has appeared. These are the individual authorizations appropriate to the SQL object you have selected. In order to grant rights to a user via this role, you need to decide which of these options to include in the role.
The individual authorization names are self-explanatory. For example, the CREATE ANY authorization allows the creation of new objects inside a schema. The INSERT or SELECT authorization might at first seem unusual for a schema, as a schema is not an object which can support such instructions. However, the usage is actually quite elegant: if a user has INSERT rights on the schema BOOK, then they have INSERT rights on all objects inside the schema BOOK. Granting rights on the schema itself avoids having to specify the names of all objects inside the schema. It also future-proofs your authorization concept, since new objects created in the schema will automatically inherit the authorizations you have defined.
On the far right of the screen, alongside each authorization, is a radio button which grants an additional privilege: the possibility for a given user to, in turn, give the same rights to a second user. This is an option which should not be given to all users, and so should not be present in all roles you create; the right to grant privileges to other users should be limited to your administrators. If you give just any user the right to pass on their authorizations, you will soon find that you are no longer able to determine who can do what in your database. For the moment we are creating a simple role to show the workings of the authorization concept in SAP HANA, so we will check all the checkboxes and leave the radio buttons at No.
There are some SQL privileges which are necessary for any user to be able to do work in SAP HANA. They give access to the system objects describing the development models we create in SAP HANA; if a user does not have these privileges, nothing will work at all — the user will not be authorized to do anything. The SQL privileges you will need to add to the role in order to give access to basic SAP HANA system objects are:
The SELECT privilege on the _SYS_BI schema
The SELECT privilege on the _SYS_REPO schema
The EXECUTE privilege on the REPOSITORY_REST procedure
Please add these SQL privileges to your role now, in order to obtain the following result:
As you can see, with the configuration we have just done, SQL privileges allow a user to access a given object and allow specific actions on that object. They do not, however, allow us to grant particular authorizations on the contents of the object. In order to use such fine-grained rights, we need to create an analytic privilege and then add it to our role, so let's do that now.
Analytic privileges
An analytic privilege is an artifact unique to SAP HANA; it is not part of the standard SQL authorization concept. Analytic privileges allow us to restrict access to certain values of a given attribute, analytic, or calculation view. This means that we can create one view which by default shows all available data, and then restrict what is actually visible to different users. We could restrict visible data by company code, by country, or by region. For example, our users in Europe would be allowed to see and work with data from our customers in Europe, but not those in the USA.
An analytic privilege is created through the Quick Launch panel of the Modeler, so please open that view now (or switch to the Quick Launch tab if it's already open). You don't need to close the role definition tab that's already open; we can leave it for now, create our analytic privilege, and then come back to the role definition later. From the Quick Launch panel, select Analytic Privilege, and then Create.
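Before we step into the wizard, a quick aside on the SQL privileges we added a moment ago: if you ever need to script them rather than click through the Studio, the equivalent statements would look roughly like this. This is a sketch using our example names — extend the list of rights (CREATE ANY and so on) as your role requires:

-- Rights on our own schema for holders of the role
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA BOOK TO BOOKUSER;

-- The basic system objects every user needs
GRANT SELECT ON SCHEMA _SYS_BI TO BOOKUSER;
GRANT SELECT ON SCHEMA _SYS_REPO TO BOOKUSER;
GRANT EXECUTE ON SYS.REPOSITORY_REST TO BOOKUSER;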
In the wizard which opens, as usual with SAP HANA, we are asked to give a Name and Description, and to select a package for our object. We'll call it AP_EU (for analytic privilege, Europe), use the name as the description, and put it into our book package alongside our other developments. As is common in SAP HANA, we have the option of creating an analytic privilege from scratch (Create New) or copying an existing privilege (Copy From). We don't currently have any other analytic privileges in our development, so leave Create New selected, then click on Next to go to the second screen of the wizard, shown here:
On this page of the dialog, we are prompted to add development models to the analytic privilege. This will then allow us to restrict access to given values of these models. In the previous screenshot, we have added the CUST_REV analytic view to the analytic privilege. This will allow us to restrict access to any value we specify of any of the fields visible in the view. To add a view to the analytic privilege, just find it in the left panel, click on its name, and then click on the Add button. Once you have added the views you require for your authorizations, click on the Finish button at the bottom of the window to go to the next step. You will be presented with the analytic privilege development panel, reproduced here:
This page allows us to define our analytic privilege completely. On the left we have the list of database views we have included in the analytic privilege. We can add more, or remove one, using the Add and Remove buttons. To the right, we can see the Associated Attributes Restrictions and Assign Restrictions boxes. These are where we define the restrictions on individual values, or sets of values. In the top box, Associated Attributes Restrictions, we define which attributes we want to restrict access on (country code or region, maybe). In the bottom box, Assign Restrictions, we define the individual values on which to restrict (for example, for company code we could restrict to value 0001 or US22; for region, we could limit access to EU or USA).
Let's add a restriction on the REGION field of our CUST_REV view now. Click on the Add button next to the Associated Attributes Restrictions box to see the Select Object dialog:
As can be expected, this dialog lists all the attributes in our analytic view. We just need to select the appropriate attribute and then click on OK to add it to the analytic privilege. Measures in the view are not listed in the dialog; we cannot restrict access to a view according to numeric values. We cannot, therefore, make restrictions to customers with a revenue over 1 million Euros, for example. Please add the REGION field to the analytic privilege now.
Once the appropriate fields have been added, we can define the restrictions to be applied to them. Click on the REGION field in the Associated Attributes Restrictions box, then on the Add button next to the Assign Restrictions box, to define the restrictions we want to apply. As we can see, restrictions can be defined using the usual list of comparison operators. These are the same operators we used earlier to define a restricted column in our analytic views. In our example, we'll be restricting access to those lines with a REGION column equal to EU, so we'll select Equal. In the Value column, we can either type the appropriate value directly, or use the value help button, and the familiar Value Help Dialog which appears, to select the value from those available in the view.
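Conceptually, once this restriction is active, the view behaves for holders of the privilege as though a filter were permanently applied to it. Analytic privileges are not written as SQL — we define them in the Studio — but the effect is roughly the following, where the _SYS_BIC object name is illustrative and depends on your package:

SELECT *
FROM "_SYS_BIC"."book/CUST_REV"
WHERE "REGION" = 'EU';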
Please add the EU value, either by typing it or by having SAP HANA find it for us, now.
There is one more field which needs to be added to our analytic privilege, and the reason behind it might at first seem a little strange. This point is valid for SAP HANA SP5, up to and including (at least) release 50 of the software. If this point turns out to be a bug, then it might not be necessary in later versions of the software. The field on which we want to restrict user actions (REGION) is not actually part of the analytic view itself. REGION, if you recall, is a field which is present in CUST_REV thanks to the included attribute view CUST_ATTR. In its current state, the analytic privilege will not work, because no fields native to the analytic view are actually present in the analytic privilege. We therefore need to add at least one of the native fields of the analytic view to the analytic privilege. We don't need to define any restriction on the field; however, it needs to be in the privilege for everything to work as expected. This is hinted at in SAP Note 1809199, SAP HANA DB: debugging user authorization errors:
Only if a view is included in one of the cube restrictions and at least one of its attributes is employed by one of the dimension restrictions, access to the view is granted by this analytical privilege.
Not an explicit description of the workings of the authorization concept, but close. Our analytic view CUST_REV contains two native fields, CURRENCY and YEAR. You can add either of these to the analytic privilege. You do not need to assign any restrictions to the field; it just needs to be in the privilege. Here is the state of the analytic privilege when development work on it is finished:
The Count column lists the number of restrictions in effect for the associated field. For the CURRENCY field, no restrictions are defined. We just need (as always) to activate our analytic privilege in order to be able to use it. The activation button is the same one we have used up until now to activate the modeling views: the round green button with the right-facing white arrow at the top-right of the panel, which you can see in the preceding screenshot. Please activate the analytic privilege now. Once that has been done, we can add it to our role.
Return to the Role tab (if you left it open) or reopen the role now. If you closed the role definition tab earlier, you can get back to our role by opening the Security node in the Navigator panel, then opening Roles, and double-clicking on the BOOKUSER role. In the Analytic Privileges tab of the role definition screen, click on the green plus sign at the top to add an analytic privilege to our role. The analytic privilege we have just created is called AP_EU, so type ap_eu into the search box at the top of the dialog window which opens. As soon as you have typed at least two characters, SAP HANA will start searching for matching analytic privileges, and your AP_EU privilege will be listed, as we can see here:
Click on OK to add the privilege to the role. We will see in a minute the effect our analytic privilege has on the rights of a particular user, but for the moment we can take a look at the second-to-last tab in the role definition screen, System Privileges.
System privileges
As its name suggests, a system privilege gives a particular user the right to perform specific actions on the SAP HANA system itself, not just on a given table or view.
These are particular rights which should not be given to just any user, but should be reserved for those users who need to perform a particular task. We'll not be adding any of these privileges to our role; however, we'll take a look at the available options and what they are used for. Click on the green plus sign button at the top of the System Privileges tab to see a list of the available privileges. By default the dialog will do a search on all available values; there are only fifteen or so, but you can, as usual, filter them down using the filter box at the top of the dialog:
For a full list of the system privileges available and their uses, please refer to the SAP HANA SQL Reference, available on the help.sap.com website at http://help.sap.com/hana/html/sql_grant.html.
Package privileges
The last tab in the role definition screen concerns Package Privileges. These allow a given user to access the objects in a package. In our example, the package is called book, so if we add the book package to our role in the Package Privileges tab, we will see the following result:
Assigning package privileges is similar to assigning the SQL privileges we saw earlier. We first add the required object (here our book package), then we need to indicate exactly which rights we give to the role. As we can see in the preceding screenshot, we have a series of checkboxes on the right-hand side of the window. At least one of these checkboxes must be checked in order to save the role. The individual rights have names which are fairly self-explanatory: REPO.READ gives read access to the package, whereas REPO.EDIT_NATIVE_OBJECTS allows modification of objects, for example. The role we are creating is destined for an end user who will need to see the data in a view, but should not need to modify the data models in any way (and in fact we really don't want them to modify our data models, do we?). We'll just add the REPO.READ privilege, on our book package, to our role. Again we can decide whether the end user can in turn grant this privilege to others — and again, we don't need this feature in our role.
At this point, our role is finished. We have given access to the SQL objects in the BOOK schema, created an analytic privilege which limits access to the Europe region in our CUST_REV model, and given read-only access to our book package. After activation (always!) we'll be able to assign our role to a test user, and then see the effect our authorizations have on what the user can do and see. Please activate the role now.
Users
Users are probably the most important part of the authorization concept. They are where all our problems begin, and their attempts to do and see things they shouldn't are the main reason we have to spend valuable time defining authorizations in the first place.
In technical terms, a user is just another database object. They are created, modified, and deleted in the same way a modeling view is. They have properties (their name and password, for example), and it is by modifying these properties that we influence the actions that the person who connects using the user can perform.
Up until now we have been using the SYSTEM user (or the user that your database administrator assigned to you). This user is defined by SAP, and has basically the authorization to do anything with the database. Use of this user is discouraged by SAP, and the author really would like to insist that you don't use it for your developments.
Accidents happen, and one of the great things about authorizations is that they help to prevent accidents. If you try to delete an important object with the SYSTEM user, you will delete it, and getting it back might involve a database restore. If, however, you use a development user with fewer authorizations, you won't be allowed to perform the deletion, saving a lot of tears. Of course, the question then arises: why have we been using the SYSTEM user for the last couple of hundred pages of development? The answer is simple: if the author had started the book with the authorizations article, not many readers would have gotten past page 10.
Let's create a new user now, and assign the role we have just created. From the Navigator panel, open the Security node, right-click on Users, and select New User from the menu to obtain the user creation screen, as shown in the following screenshot:
Defining a user requires remarkably little information:
User Name: The login that the user will use. Your company might have a naming convention for users. Users might even already have a standard login they use to connect to other systems in your enterprise. In our example, we'll create a user with the (once again rather unimaginative) name of BOOKU.
Authentication: How will SAP HANA know that the user connecting with the name of ANNE really is Anne? There are (currently) three ways of authenticating a user with SAP HANA:
Password: This is the most common authentication system. SAP HANA will ask Anne for her password when she connects to the system. Since Anne is the only person who knows her password, we can be sure that Anne really is ANNE, and let her connect and do anything the user ANNE is allowed to do. Passwords in SAP HANA have to respect a certain format; by default this is one capital, one lowercase, one number, and at least eight characters. You can see and change the password policy in the system configuration: double-click on the system name in the Navigator panel, click on the Configuration tab, type the word pass into the filter box at the top of the tab, and scroll down to indexserver.ini and then password policy. The password format in force on your system is listed as password_layout. By default this is A1a, meaning capitals, numbers, and lowercase letters must be present. The value can also contain the # character, meaning that special characters must also be contained in the password. The only special characters allowed by SAP HANA are currently the underscore, the dollar sign, and the hash character. Other password policy defaults are also listed on this screen, such as maximum_password_lifetime (the time after which SAP HANA will force the user to change their password).
Kerberos and SAML: These authentication systems need to be set up by your network administrator and allow single sign-on in your enterprise. This means that SAP HANA will be able to see the Windows username that is connecting to the system. The database will assume that the authentication part (deciding whether Anne really is ANNE) has already been done by Windows, and let the user connect.
Session Client: As we saw when we created attribute and analytic views back at the start of the book, SAP HANA understands the notion of client, referring to a partitioning system of the SAP ERP database. In the SAP ERP, different users can work in different clients. In our development, we filtered on client 100.
A much better way of handling the filtering is to define the default client for a user when we define their account. The Session Client field can be filled with the ERP client in which the user works. This way we do not need to filter in the analytic models — we can leave their client value at Dynamic in the view, and the actual value to use will be taken from the user record. Once again, this means maintenance of our developments is a lot simpler. If you like, you can take a few minutes at the end of this article to create a user with a session client value of 100, then go back and reset our attribute and analytic views' default client value to Dynamic, reactivate everything, and do a data preview with your test user. The result should be identical to that obtained when the view was filtered on client 100. However, if you then create a second user with a session client of 200, this second user will see different data.
We'll create a user with a password login, so type a password for your user now. Remember to adhere to the password policy in force on your system. Also note that the user will be required to change their password on first login.
At the bottom of the user definition screen, as we can see from the preceding screenshot, we have a series of tabs corresponding to the different authorizations we can assign to our user. These are the same tabs we saw earlier when defining a role. As explained at the beginning of this article, it is considered best practice to assign authorizations to a role and then the role to a user, rather than assigning authorizations directly to a user; this makes maintenance easier. For this reason we will not be looking at the different tabs for assigning authorizations to our user, other than the first one, Granted Roles.
The Granted Roles tab lists the roles assigned to the user, and allows adding roles to, and removing roles from, that list. By default, when we create a user they have no roles assigned, and hence no authorizations at all in the system. They will be able to log in to SAP HANA but will be able to do no development work, and will see no data from the system.
Please click on the green plus sign button in the Granted Roles tab of the user definition screen to add a role to the user account. You will be provided with the Select Role dialog, shown in part here:
This dialog has the familiar search box at the top, so typing the first few letters of a role name will bring up a list of matching roles. Here our role was called BOOKUSER, so please do a search for it, then select it in the list and click on OK to add it to the user account.
Once that is done, we can test our user to verify that we can perform the necessary actions with the role and user we have just created. We just need, as with all objects in SAP HANA, to activate the user object first. As usual, this is done with the round green button with the right-facing white arrow at the top-right of the screen. Please do this now.
Testing our user and role
The only real way to check whether the authorizations we have defined are appropriate to the business requirements is to create a user, try out the role, and see what the user can and cannot see and do in the system. The first thing to do is to add our new user to the Studio so we can connect to SAP HANA using this new user. To do this, in the Navigator panel, right-click on the SAP HANA system name and select Add Additional User from the menu which appears.
This will give you the Add additional user dialog, shown in the following screenshot:
Enter the name of the user you just created (BOOKU) and the password you assigned to the user. You will be required to change the password immediately. Click on Finish to add the user to the Studio. You will see immediately in the Navigator panel that we can now work with either our SYSTEM user or our BOOKU user.
We can also see straight away that BOOKU is missing the privileges to perform or manage data backups; the Backup node is missing from the list for the BOOKU user. Let's try to do something with our BOOKU user and see how the system reacts.
The way the Studio lets you handle multiple users is very elegant. Since the tree structure of database objects is duplicated, one per user, you can see immediately how the different authorization profiles affect the different users. Additionally, if you request a data preview of the CUST_REV analytic view in the book package under the BOOKU user's node in the Navigator panel, you will see the data according to the BOOKU user's authorizations. Requesting the same data preview from the SYSTEM user's node will show the data according to SYSTEM's authorizations.
Let's do a data preview on the CUST_REV view with the SYSTEM user, for reference:
As we can see, there are 12 rows of data retrieved, and we have data from the EU and NAR regions. If we ask for the same data preview using our BOOKU user, we can see much less data:
BOOKU can see only nine of the 12 data rows in our view, as no data from the NAR region is visible to the BOOKU user. This is exactly the result we aimed to achieve using our analytic privilege, in our role, assigned to our user.
Summary
In this article, we took a look at the different aspects of the authorization concept in SAP HANA. We examined the different authorization levels available in the system: SQL privileges, analytic privileges, system privileges, and package privileges. We saw how to add these different authorization objects to a role, a reusable group of authorizations. We went on to create a new user in our SAP HANA system, examining the different types of authentication available, and the assignment of roles to users. Finally, we logged into the Studio with our new user account, and saw first-hand the effect our authorizations had on what the user could see and do.
In the next article, we will be working with hierarchical data, seeing what hierarchies can bring to our reporting applications, and how to make the best use of them.
Resources for Article:
Further resources on this subject:
SAP Netweaver: Accessing the MDM System [Article]
SAP HANA integration with Microsoft Excel [Article]
Exporting SAP BusinessObjects Dashboards into Different Environments [Article]
Using Web API to Extend Your Application
Packt
08 Sep 2016
14 min read
In this article by Shahed Chowdhuri, author of the book ASP.NET Core Essentials, we will work through a working sample of a web API project. During this lesson, we will cover the following:
Web API
Web API configuration
Web API routes
Consuming Web API applications
(For more resources related to this topic, see here.)
Understanding a web API
Building web applications can be a rewarding experience. The satisfaction of reaching a broad set of potential users can trump the frustrating nights spent fine-tuning an application and fixing bugs. But some mobile users demand a more streamlined experience that only a native mobile app can provide. Mobile browsers may experience performance issues in low-bandwidth situations, where HTML5 applications can only go so far with a heavy server-side back-end. Enter web API, with its RESTful endpoints, built with mobile-friendly server-side code.
The case for web APIs
In order to create a piece of software, years of wisdom tell us that we should build software with users in mind. Without use cases, its features are literally useless. By designing features around user stories, it makes sense to reveal public endpoints that relate directly to user actions. As a result, you will end up with a leaner web application that works for more users.
If you need more convincing, here's a recap of features and benefits:
It lets you build modern lightweight web services, which are a great choice for your application, as long as you don't need SOAP
It's easier to work with than any past work you may have done with ASP.NET Windows Communication Foundation (WCF) services
It supports RESTful endpoints
It's great for a variety of clients, both mobile and web
It's unified with ASP.NET MVC and can be included with/without your web application
Creating a new web API project from scratch
Let's build a sample web application named Patient Records. In this application, we will create a web API from scratch to allow the following tasks:
Add a new patient
Edit an existing patient
Delete an existing patient
View a specific patient or a list of patients
These four actions make up the so-called CRUD operations of our system: to Create, Read, Update, or Delete patient records. Following the steps below, we will create a new project in Visual Studio 2015:
Create a new web API project.
Add an API controller.
Add methods for CRUD operations.
The preceding steps have been expanded into detailed instructions with the following screenshots:
In Visual Studio 2015, click File | New | Project. You can also press Ctrl+Shift+N on your keyboard.
On the left panel, locate the Web node below Visual C#, then select ASP.NET Core Web Application (.NET Core), as shown in the following screenshot:
With this project template selected, type in a name for your project, for example PatientRecordsApi, and choose a location on your computer, as shown in the following screenshot:
Optionally, you may select the checkboxes on the lower right to create a directory for your solution file and/or add your new project to source control. Click OK to proceed.
In the dialog that follows, select Empty from the list of ASP.NET Core templates, then click OK, as shown in the following screenshot:
Optionally, you can check the checkbox for Microsoft Azure to host your project in the cloud. Click OK to proceed.
Building your web API project
In the Solution Explorer, you may observe that your references are being restored.
This occurs every time you create a new project or add new references to your project that have to be restored through NuGet, as shown in the following screenshot:
Follow these steps to fix your references and build your web API project:
Right-click on your project, and click Add | New Folder to add a new folder, as shown in the following screenshot:
Perform the preceding step three times to create new folders for your Controllers, Models, and Views, as shown in the following screenshot:
Right-click on your Controllers folder, then click Add | New Item to create a new API controller for patient records in your system, as shown in the following screenshot:
In the dialog box that appears, choose Web API Controller Class from the list of options under .NET Core, as shown in the following screenshot:
Name your new API controller, for example PatientController.cs, then click Add to proceed.
In your new PatientController, you will most likely have several areas highlighted with red squiggly lines due to a lack of necessary dependencies, as shown in the following screenshot. As a result, you won't be able to build your project/solution at this time.
In the next section, we will learn how to configure your web API so that it has the proper references and dependencies in its configuration files.
Configuring the web API in your web application
How does the web server know what to send to the browser when a specific URL is requested? The answer lies in the configuration of your web API project.
Setting up dependencies
In this section, we will learn how to set up your dependencies automatically using the IDE, or manually by editing your project's configuration file. To pull in the necessary dependencies, you may right-click on the using statement for Microsoft.AspNet.Mvc and select Quick Actions and Refactorings…. This can also be triggered by pressing Ctrl+. (period) on your keyboard, or simply by hovering over the underlined term, as shown in the following screenshot:
Visual Studio should offer you several possible options, from which you can select the one that adds the package Microsoft.AspNetCore.Mvc.Core for the namespace Microsoft.AspNetCore.Mvc. For the Controller class, add a reference for the Microsoft.AspNetCore.Mvc.ViewFeatures package, as shown in the following screenshot:
Fig 12: Adding the Microsoft.AspNetCore.Mvc.Core 1.0.0 package
If you select the latest version that's available, this should update your references and remove the red squiggly lines, as shown in the following screenshot:
Fig 13: Updating your references and removing the red squiggly lines
The preceding step should automatically update your project.json file with the correct dependencies for Microsoft.AspNetCore.Mvc.Core and Microsoft.AspNetCore.Mvc.ViewFeatures, as shown in the following screenshot:
The "frameworks" section of the project.json file identifies the type and version of the .NET Framework that your web app is using, for example netcoreapp1.0 for the 1.0 version of .NET Core. You will see something similar in your project, as shown in the following screenshot:
Click the Build Solution button from the top menu/toolbar. Depending on how you have your shortcuts set up, you may press Ctrl+Shift+B or press F6 on your keyboard to build the solution.
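Before checking the build output, here is roughly what the project.json described above typically contains at this point. The exact version numbers are illustrative — use whatever versions the IDE resolved for you:

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc.Core": "1.0.0",
    "Microsoft.AspNetCore.Mvc.ViewFeatures": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}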
You should now be able to build your project/solution without errors, as shown in the following screenshot:
Before running the web API project, open the Startup.cs class file, and replace the app.Run() statement/block (along with its contents) with a call to app.UseMvc() in the Configure() method. To add MVC to the project, add a call to services.AddMvcCore() in the ConfigureServices() method. To allow this code to compile, add a reference to Microsoft.AspNetCore.Mvc. (A minimal sketch of the resulting Startup class appears just before the Understanding routes section below.)
Parts of a web API project
Let's take a closer look at the PatientController class. The auto-generated class has the following methods:
public IEnumerable<string> Get()
public string Get(int id)
public void Post([FromBody]string value)
public void Put(int id, [FromBody]string value)
public void Delete(int id)
The Get() method simply returns a JSON object as an enumerable string of values, while the Get(int id) method is an overloaded variant that gets a particular value for a specified ID. The Post() and Put() methods can be used for creating and updating entities. Note that the Put() method takes in an ID value as the first parameter so that it knows which entity to update. Finally, we have the Delete() method, which can be used to delete an entity using the specified ID.
Running the web API project
You may run the web API project in a web browser that can display JSON data. If you use Google Chrome, I would suggest using the JSONView extension (or another similar extension) to properly display JSON data. The aforementioned extension is also available on GitHub at the following URL: https://github.com/gildas-lormeau/JSONView-for-Chrome
If you use Microsoft Edge, you can view the raw JSON data directly in the browser. Once your browser is ready, you can select your browser of choice from the top toolbar of Visual Studio. Click on the tiny triangle icon next to the Debug button, then select a browser, as shown in the following screenshot:
In the preceding screenshot, you can see that multiple installed browsers are available, including Firefox, Google Chrome, Internet Explorer, and Edge. To choose a different browser, simply click on Browse With… in the menu to select a different one.
Now, click the Debug button (that is, the green play button) to see the web API project in action in your web browser, as shown in the following screenshot. If you don't have a web application set up, you won't be able to browse the site from the root URL:
Don't worry if you see this error; you can update the URL to include a path to your API controller, for example http://localhost:12345/api/Patient. Note that your port number may vary.
Now, you should be able to see a list of values that are being spat out by your API controller, as shown in the following screenshot:
Adding routes to handle anticipated URL paths
Back in the days of classic ASP, application URL paths typically reflected physical file paths. This continued with ASP.NET web forms, even though the concept of custom URL routing was introduced. With ASP.NET MVC, routes were designed to cater to functionality rather than physical paths. ASP.NET web API continues this newer tradition, with the ability to set up custom routes from within your code. You can create routes for your application using fluent configuration in your startup code, or with declarative attributes surrounded by square brackets.
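Since both the MVC wiring and any fluent route configuration live in the Startup class, here is the minimal Startup sketch promised earlier. It assumes ASP.NET Core 1.0, uses only the two calls described in this article, and omits anything else your generated file may contain:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

namespace PatientRecordsApi
{
    public class Startup
    {
        // Register MVC's core services with the DI container
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvcCore();
        }

        // Replace the template's app.Run() block with the MVC pipeline,
        // so attribute-routed controllers can handle incoming requests
        public void Configure(IApplicationBuilder app)
        {
            app.UseMvc();
        }
    }
}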
Understanding routes
To understand the purpose of having routes, let's focus on the features and benefits of routes in your application. These apply to both ASP.NET MVC and ASP.NET web API:
By defining routes, you can introduce predictable patterns for URL access
This gives you more control over how URLs are mapped to your controllers
Human-readable route paths are also SEO-friendly, which is great for Search Engine Optimization
It provides some level of obscurity when it comes to revealing the underlying web technology and physical file names in your system
Setting up routes
Let's start with this simple class-level attribute that specifies a route for your API controller, as follows:
[Route("api/[controller]")]
public class PatientController : Controller
{
    // ...
}
Here, we can dissect the attribute (seen in square brackets, used to affect the class below it) and its parameter to understand what's going on:
The Route attribute indicates that we are going to define a route for this controller.
Within the parentheses that follow, the route path is defined in double quotes.
The first part of this path is the string literal api/, which declares that the path to an API method call will begin with the term api followed by a forward slash.
The rest of the path is the word controller in square brackets, which refers to the controller name. By convention, the controller's name is the part of the controller's class name that precedes the term Controller. For the class PatientController, the controller name is just the word Patient.
This means that all API methods for this controller can be accessed using the following syntax, where MyApplicationServer should be replaced with your own server or domain name:
http://MyApplicationServer/api/Patient
For method calls, you can define a route with or without parameters. The following two examples illustrate both types of route definitions:
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}
In this example, the Get() method performs an action related to the HTTP verb HttpGet, which is declared in the attribute directly above the method. This identifies the default method for accessing the controller through a browser without any parameters, which means that this API method can be accessed using the following syntax:
http://MyApplicationServer/api/Patient
To include parameters, we can use the following syntax:
[HttpGet("{id}")]
public string Get(int id)
{
    return "value";
}
Here, the HttpGet attribute is coupled with an "{id}" parameter, enclosed in curly braces within double quotes. The overloaded version of the Get() method also includes an integer value named id to correspond with the expected parameter. If no parameter is specified, the value of id is equal to default(int), which is zero. This can be called without any parameters with the following syntax:
http://MyApplicationServer/api/Patient/Get
In order to pass parameters, you can add any integer value right after the controller name, with the following syntax:
http://MyApplicationServer/api/Patient/1
This will assign the number 1 to the integer variable id.
Testing routes
To test the aforementioned routes, simply run the application from Visual Studio and access the specified URLs. The preceding screenshot shows the result of accessing the following path:
http://MyApplicationServer/api/Patient/1
Consuming a web API from a client application
If a web API exposes public endpoints, but there is no client application there to consume it, does it really exist? Without getting too philosophical, let's go over the possible ways you can consume a web API from a client application.
You can do any of the following:
Consume the web API using external tools
Consume the web API with a mobile app
Consume the web API with a web client
Testing with external tools
If you don't have a client application set up, you can use an external tool such as Fiddler. Fiddler is a free tool, now available from Telerik at http://www.telerik.com/download/fiddler, as shown in the following screenshot:
You can use Fiddler to inspect URLs that are being retrieved and submitted on your machine. You can also use it to trigger any URL and change the request type (Get, Post, and others).
Consuming a web API from a mobile app
Since this article is primarily about the ASP.NET Core web API, we won't go into detail about mobile application development. However, it's important to note that a web API can provide a backend for your mobile app projects. Mobile apps may include Windows Mobile apps, iOS apps, Android apps, and any modern app that you can build for today's smartphones and tablets. You may consult the documentation for your particular platform of choice to determine what is needed to call a RESTful API.
Consuming a web API from a web client
A web client, in this case, refers to any HTML/JavaScript application that has the ability to call a RESTful API. At the least, you can build a complete client-side solution with straight JavaScript to perform the necessary actions. For a better experience, you may use jQuery and also one of many popular JavaScript frameworks. A web client can also be a part of a larger ASP.NET MVC application or a Single-Page Application (SPA). As long as your application is emitting JavaScript that is contained in HTML pages, you can build a frontend that works with your backend web API.
Summary
In this article, we took a look at the basic structure of an ASP.NET web API project, and observed the unification of web API with MVC in ASP.NET Core. We also learned how to use a web API as our backend to provide support for various frontend applications.
Resources for Article:
Further resources on this subject:
Introducing IoT with Particle's Photon and Electron [article]
Schema Validation with Oracle JDeveloper - XDK 11g [article]
Getting Started with Spring Security [article]
Python Multimedia: Animation Examples using Pyglet
Packt
31 Aug 2010
7 min read
(For more resources on Python, see here.)
Single image animation
Imagine that you are creating a cartoon movie where you want to animate the motion of an arrow or a bullet hitting a target. In such cases, typically it is just a single image. The desired animation effect is accomplished by performing appropriate translation or rotation of the image.
Time for action – bouncing ball animation
Let's create a simple animation of a 'bouncing ball'. We will use a single image file, ball.png, which can be downloaded from the Packt website. The dimensions of this image in pixels are 200x200, created on a transparent background. The following screenshot shows this image opened in the GIMP image editor. The three dots on the ball identify its side; we will see why this is needed. Imagine this as a ball used in a bowling game.
The image of a ball opened in GIMP appears as shown in the preceding image. The ball size in pixels is 200x200.
Download the files SingleImageAnimation.py and ball.png from the Packt website. Place the ball.png file in a sub-directory 'images' within the directory in which SingleImageAnimation.py is saved.
The following code snippet shows the overall structure of the code:

1  import pyglet
2  import time
3
4  class SingleImageAnimation(pyglet.window.Window):
5      def __init__(self, width=600, height=600):
6          pass
7      def createDrawableObjects(self):
8          pass
9      def adjustWindowSize(self):
10         pass
11     def moveObjects(self, t):
12         pass
13     def on_draw(self):
14         pass
15 win = SingleImageAnimation()
16 # Set window background color to gray.
17 pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)
18
19 pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
20
21 pyglet.app.run()

Although it is not required, we will encapsulate event handling and other functionality within the class SingleImageAnimation. The program to be developed is short, but in general this is good coding practice. It will also be good for any future extension to the code. An instance of SingleImageAnimation is created on line 15. This class is inherited from pyglet.window.Window. It encapsulates the functionality we need here. The API method on_draw is overridden by the class. on_draw is called when the window needs to be redrawn. Note that we no longer need a decorator statement such as @win.event above the on_draw method, because the window API method is simply overridden by this inherited class.
The constructor of the class SingleImageAnimation is as follows:

1  def __init__(self, width=None, height=None):
2      pyglet.window.Window.__init__(self,
3                                    width=width,
4                                    height=height,
5                                    resizable=True)
6      self.drawableObjects = []
7      self.rising = False
8      self.ballSprite = None
9      self.createDrawableObjects()
10     self.adjustWindowSize()

As mentioned earlier, the class SingleImageAnimation inherits from pyglet.window.Window. However, its constructor doesn't take all the arguments supported by its superclass. This is because we don't need to change most of the default argument values. If you want to extend this application further and need these arguments, you can do so by adding them as __init__ arguments. The constructor initializes some instance variables and then calls methods to create the animation sprite and resize the window respectively. The method createDrawableObjects creates a sprite instance using the ball.png image.
1  def createDrawableObjects(self):
2      """
3      Create sprite objects that will be drawn within the
4      window.
5      """
6      ball_img = pyglet.image.load('images/ball.png')
7      ball_img.anchor_x = ball_img.width / 2
8      ball_img.anchor_y = ball_img.height / 2
9
10     self.ballSprite = pyglet.sprite.Sprite(ball_img)
11     self.ballSprite.position = (
12         self.ballSprite.width + 100,
13         self.ballSprite.height*2 - 50)
14     self.drawableObjects.append(self.ballSprite)

The anchor_x and anchor_y properties of the image instance are set such that the image has an anchor exactly at its center. This will be useful while rotating the image later. On line 10, the sprite instance self.ballSprite is created. Later, we will be setting the width and height of the Pyglet window as three times the sprite width and three times the sprite height. The position of the image within the window is set on line 11. The initial position is chosen as shown in the next screenshot. In this case, there is only one Sprite instance. However, to make the program more general, a list of drawable objects called self.drawableObjects is maintained.
To continue the discussion from the previous step, we will now review the on_draw method:

def on_draw(self):
    self.clear()
    for d in self.drawableObjects:
        d.draw()

As mentioned previously, on_draw is an API method of the class pyglet.window.Window that is called when the window needs to be redrawn. This method is overridden here. The self.clear() call clears the previously drawn contents within the window. Then, all the Sprite objects in the list self.drawableObjects are drawn in the for loop.
The preceding image illustrates the initial ball position in the animation.
The method adjustWindowSize sets the width and height parameters of the Pyglet window. The code is self-explanatory:

def adjustWindowSize(self):
    w = self.ballSprite.width * 3
    h = self.ballSprite.height * 3
    self.width = w
    self.height = h

So far, we have set up everything for the animation to play. Now comes the fun part. We will change the position of the sprite representing the image to achieve the animation effect. During the animation, the image will also be rotated, to give it the natural feel of a bouncing ball.

1  def moveObjects(self, t):
2      if self.ballSprite.y - 100 < 0:
3          self.rising = True
4      elif self.ballSprite.y > self.ballSprite.height*2 - 50:
5          self.rising = False
6
7      if not self.rising:
8          self.ballSprite.y -= 5
9          self.ballSprite.rotation -= 6
10     else:
11         self.ballSprite.y += 5
12         self.ballSprite.rotation += 5

This method is scheduled to be called 20 times per second using the following code in the program:

pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)

To start with, the ball is placed near the top. The animation should be such that it gradually falls down, hits the bottom, and bounces back. After this, it continues its upward journey to hit a boundary somewhere near the top and again begins its downward journey. The code block from lines 2 to 5 checks the current y position of self.ballSprite. If it has hit the upward limit, the flag self.rising is set to False. Likewise, when the lower limit is hit, the flag is set to True. The flag is then used by the next code snippet to increment or decrement the y position of self.ballSprite. Lines 9 and 12 rotate the Sprite instance: the current rotation angle is incremented or decremented by the given value. This is the reason why we set the image anchors, anchor_x and anchor_y, at the center of the image. The Sprite object honors these image anchors.
If the anchors are not set this way, the ball will be seen wobbling in the resulting animation. Once all the pieces are in place, run the program from the command line as:

$ python SingleImageAnimation.py

This will pop up a window that plays the bouncing ball animation. The next illustration shows some intermediate frames from the animation while the ball is falling.
What just happened?
We learned how to create an animation using just a single image. The image of a ball was represented by a sprite instance. This sprite was then translated and rotated on the screen to accomplish a bouncing ball animation. The whole functionality, including the event handling, was encapsulated in the class SingleImageAnimation.
How to Integrate vtiger CRM with your Website
Packt
15 Jul 2011
5 min read
vtiger CRM Beginner's Guide
Record and consolidate all your customer information with vtiger CRM
To go through the exercises in this article, you'll need basic knowledge of HTML. If you already understand basic web development concepts, then you'll also be well prepared to delve into vtiger CRM's API.
The vtiger CRM API
For you developers out there, all of the ins and outs of vtiger CRM's API are fully documented at http://api.vtiger.com. For those of you not familiar with APIs, API stands for Application Programming Interface. It's an interface for computers rather than humans.
What does the API do?
To illustrate: you can access the human interface of vtiger CRM by logging in with your username and password. The screens that are shown to you, with all of their buttons and links, make up the human interface. An API, on the other hand, is an interface for other computers. Computers don't need the fancy stuff that we humans do in an interface — it's all text.
What is the benefit of the API?
With an API, vtiger allows other computer systems to inform it and also to ask it questions. This makes everyone's life easier, especially if it means you don't have to type the same data twice into two systems. Here's an example. You have a website where people make sales inquiries, and you capture that information as a sales lead. You might receive that information as an email. At that point you could just leave the data in your email and refer to it as needed (which many people still do), or you could enter it into a CRM tool like vtiger so you can keep track of your leads. You can take it one step further by using vtiger's API. You can tell your website how to talk to vtiger's API, and now your website can send the leads directly into vtiger, and... voila! When you log in, the person who just made an inquiry on your website is now a lead in vtiger.
Sending a lead into vtiger CRM from your website
Well, what are we waiting for?! Let's give it a try. There is a plugin/extension in vtiger called Webforms, and it uses the vtiger API to get data into vtiger. In the following exercises, we're going to:
Configure the Webforms plugin
Create a webform on your company website
IMPORTANT NOTE: If you want to be able to send leads into vtiger from your website, your vtiger installation must be accessible on the Internet. If you have installed vtiger on a computer or server on your internal network, then you won't be able to send leads into vtiger from your website, because your website won't be able to connect with the computer/server that vtiger is running on.
Time for action – configuring the Webforms plugin
OK, let's roll up our sleeves and get ready to do a little code editing. Let's take a look first:
Navigate to the Webforms configuration file at vtigercrm/modules/Webforms/Webforms.config.php.
Open it up with a text editor like Notepad. Here's what it might look like by default:

<?php
/*+**********************************************************************
 * The contents of this file are subject to the vtiger CRM Public
 * License Version 1.0 ("License"); You may not use this file except
 * in compliance with the License
 * The Original Code is: vtiger CRM Open Source
 * The Initial Developer of the Original Code is vtiger.
 * Portions created by vtiger are Copyright (C) vtiger.
 * All Rights Reserved.
 ************************************************************************/
$enableAppKeyValidation = true;
$defaultUserName = 'admin';
$defaultUserAccessKey = 'iFOdqrI8lS5UhNTa';
$defaultOwner = 'admin';
$successURL = '';
$failureURL = '';
/**
 * JSON or HTML. if incase success and failure URL is NOT specified.
 */
$defaultSuccessAction = 'HTML';
$defaultSuccessMessage = 'LBL_SUCCESS';
?>

We have to be concerned with several lines here. Specifically, they're the ones that contain the following:
$defaultUserName: This will most likely be the admin user, although it can be any user that you create in your vtiger CRM system.
$defaultUserAccessKey: This key is used for authentication when your website accesses vtiger's API. You can find this key by logging in to vtiger and clicking on the My Preferences link at the top right. It needs to be the key for the username assigned to the $defaultUserName variable.
$defaultOwner: This user will be assigned all of the new leads created by this form by default.
$successURL: If the lead submission is successful, this is the URL to which you want to send the user after they have entered their information. This would typically be a web page that thanks the user for their submission and provides any additional sales information.
$failureURL: This is the URL to which you want to send the user if the submission fails. This would typically be a web page that says something like, "We apologize, but something has gone wrong. Please try again".
Now fill in the values with the information from your own installation of vtiger CRM. Save Webforms.config.php and close it. We've finished the configuration of the Webforms module.
What just happened?
We configured the Webforms module in vtiger CRM by modifying the plugin's configuration file, Webforms.config.php. Now the Webforms module will:
Be able to authenticate lead submissions that come from your website
Assign all new leads to the admin user by default (you'll be able to change this)
Send the user to a thank-you page, should the lead submission into vtiger succeed
Send the user to an "Oops" page, should the lead submission into vtiger fail
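With the plugin configured, the second exercise — creating the webform on your site — boils down to an HTML form that posts to vtiger's webform capture script. The sketch below is illustrative only: the action URL, the hidden-field names, and the lead field names vary between vtiger versions and form setups, so check the Webforms documentation for your release before copying it:

<!-- Illustrative lead-capture form; adjust the action URL and
     field names to match your vtiger version and form setup -->
<form method="post"
      action="http://your-vtiger-server/modules/Webforms/capture.php">
  <input type="hidden" name="publicid" value="YOUR_FORM_PUBLIC_ID" />
  <input type="hidden" name="name" value="YOUR_FORM_NAME" />
  <input type="hidden" name="moduleName" value="Leads" />

  <label>Last name: <input type="text" name="lastname" /></label>
  <label>Company: <input type="text" name="company" /></label>
  <label>Email: <input type="text" name="email" /></label>

  <input type="submit" value="Send inquiry" />
</form>

On submission, the capture script authenticates the request against the access key configuration we made above, creates the lead, and redirects the visitor to your $successURL or $failureURL.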
Data Access Layer

Packt
09 Nov 2016
13 min read
In this article by Alexander Zaytsev, author of NHibernate 4.0 Cookbook, we will cover the following topics:

- Transaction Auto-wrapping for the data access layer
- Setting up an NHibernate repository
- Using Named Queries in the data access layer

(For more resources related to this topic, see here.)

Introduction

There are two styles of data access layer common in today's applications: repositories and data access objects. In reality, the distinction between these two has become quite blurred, but in theory, it's something like this:

- A repository should act like an in-memory collection. Entities are added to and removed from the collection, and its contents can be enumerated. Queries are typically handled by sending query specifications to the repository.
- A DAO (Data Access Object) is simply an abstraction of an application's data access. Its purpose is to hide the implementation details of the database access from the consuming code.

The first recipe shows the beginnings of a typical data access object. The remaining recipes show how to set up a repository-based data access layer with NHibernate's various APIs.

Transaction Auto-wrapping for the data access layer

In this recipe, we'll show you how to set up the data access layer to wrap all data access in NHibernate transactions automatically.

How to do it...

1. Create a new class library named Eg.Core.Data.
2. Install NHibernate to Eg.Core.Data using the NuGet Package Manager Console.
3. Add the following two DAO classes:

public class DataAccessObject<T, TId>
    where T : Entity<TId>
{
    private readonly ISessionFactory _sessionFactory;

    private ISession session
    {
        get { return _sessionFactory.GetCurrentSession(); }
    }

    public DataAccessObject(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public T Get(TId id)
    {
        return WithinTransaction(() => session.Get<T>(id));
    }

    public T Load(TId id)
    {
        return WithinTransaction(() => session.Load<T>(id));
    }

    public void Save(T entity)
    {
        WithinTransaction(() => session.SaveOrUpdate(entity));
    }

    public void Delete(T entity)
    {
        WithinTransaction(() => session.Delete(entity));
    }

    private TResult WithinTransaction<TResult>(Func<TResult> func)
    {
        if (!session.Transaction.IsActive)
        {
            // Wrap in transaction
            TResult result;
            using (var tx = session.BeginTransaction())
            {
                result = func.Invoke();
                tx.Commit();
            }
            return result;
        }
        // Don't wrap
        return func.Invoke();
    }

    private void WithinTransaction(Action action)
    {
        WithinTransaction<bool>(() =>
        {
            action.Invoke();
            return false;
        });
    }
}

public class DataAccessObject<T> : DataAccessObject<T, Guid>
    where T : Entity
{
}

How it works...

NHibernate requires that all data access occurs inside an NHibernate transaction. Remember, the ambient transaction created by TransactionScope is not a substitute for an NHibernate transaction. This recipe shows an explicit approach to meeting that requirement. To ensure that at least all our data access layer calls are wrapped in transactions, we create a private WithinTransaction method that accepts a delegate, consisting of some data access methods, such as session.Save or session.Get. This WithinTransaction method first checks if the session has an active transaction. If it does, the delegate is invoked immediately. If it doesn't, a new NHibernate transaction is created, the delegate is invoked, and finally the transaction is committed. If the data access method throws an exception, the transaction will be rolled back automatically as the exception bubbles up to the using block.
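To see what this buys the consuming code, here is a brief usage sketch. It is not part of the recipe: Product (with a Name property) and the configured ISessionFactory are placeholder assumptions standing in for your own entities and configuration.

// Usage sketch only. Assumes Product : Entity and a session factory with
// contextual sessions configured; both are placeholders, not recipe code.
public class ProductService
{
    private readonly DataAccessObject<Product> _products;

    public ProductService(ISessionFactory sessionFactory)
    {
        _products = new DataAccessObject<Product>(sessionFactory);
    }

    public void Rename(Guid id, string newName)
    {
        // No transaction is active when each call begins, so Get and Save
        // are each wrapped in their own short NHibernate transaction.
        var product = _products.Get(id);
        product.Name = newName;
        _products.Save(product);
    }
}

If Rename needed both calls inside a single transaction, the caller would open one itself; WithinTransaction would then detect the active transaction and invoke the delegates without wrapping them again.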
There's more...

This transactional auto-wrapping can also be set up using SessionWrapper from the unofficial NHibernate AddIns project at https://bitbucket.org/fabiomaulo/unhaddins. This class wraps a standard NHibernate session. By default, it will throw an exception when the session is used without an NHibernate transaction. However, it can be configured to check for and create a transaction automatically, much in the same way I've shown you here.

See also

- Setting up an NHibernate repository

Setting up an NHibernate repository

Many developers prefer the repository pattern over data access objects. In this recipe, we'll show you how to set up the repository pattern with NHibernate.

How to do it...

1. Create a new, empty class library project named Eg.Core.Data.
2. Add a reference to the Eg.Core project.
3. Add the following IRepository interface:

public interface IRepository<T> : IEnumerable<T>
    where T : Entity
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
    bool Remove(T item);
}

4. Create a new, empty class library project named Eg.Core.Data.Impl.
5. Add references to the Eg.Core and Eg.Core.Data projects.
6. Add a new abstract class named NHibernateBase using the following code:

public abstract class NHibernateBase
{
    protected readonly ISessionFactory _sessionFactory;

    protected virtual ISession session
    {
        get { return _sessionFactory.GetCurrentSession(); }
    }

    public NHibernateBase(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    protected virtual TResult WithinTransaction<TResult>(Func<TResult> func)
    {
        if (!session.Transaction.IsActive)
        {
            // Wrap in transaction
            TResult result;
            using (var tx = session.BeginTransaction())
            {
                result = func.Invoke();
                tx.Commit();
            }
            return result;
        }
        // Don't wrap
        return func.Invoke();
    }

    protected virtual void WithinTransaction(Action action)
    {
        WithinTransaction<bool>(() =>
        {
            action.Invoke();
            return false;
        });
    }
}

7. Add a new class named NHibernateRepository using the following code:

public class NHibernateRepository<T> : NHibernateBase, IRepository<T>
    where T : Entity
{
    public NHibernateRepository(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public void Add(T item)
    {
        WithinTransaction(() => session.Save(item));
    }

    public bool Contains(T item)
    {
        if (item.Id == default(Guid))
            return false;
        return WithinTransaction(() => session.Get<T>(item.Id)) != null;
    }

    public int Count
    {
        get { return WithinTransaction(() => session.Query<T>().Count()); }
    }

    public bool Remove(T item)
    {
        WithinTransaction(() => session.Delete(item));
        return true;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return WithinTransaction(() => session.Query<T>()
            .Take(1000).GetEnumerator());
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return WithinTransaction(() => GetEnumerator());
    }
}

How it works...

The repository pattern, as explained at http://martinfowler.com/eaaCatalog/repository.html, has two key features:

- It behaves as an in-memory collection
- Query specifications are submitted to the repository for satisfaction

In this recipe, we are concerned only with the first feature, behaving as an in-memory collection. The remaining recipes in this article will build on this base, and show various methods for satisfying the second point. Because our repository should act like an in-memory collection, it makes sense that our IRepository<T> interface should resemble ICollection<T>. Our NHibernateBase class provides both contextual session management and the automatic transaction wrapping explained in the previous recipe. NHibernateRepository simply implements the members of IRepository<T>.
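As a quick illustration of that collection-like feel, here is a hedged usage sketch; the Book entity, its Name property, and the wired-up sessionFactory are assumptions standing in for your own model and configuration.

// Usage sketch only: Book : Entity, its Name property, and sessionFactory
// are assumed placeholders, not part of the recipe.
IRepository<Book> books = new NHibernateRepository<Book>(sessionFactory);

var book = new Book { Name = "NHibernate 4.0 Cookbook" };
books.Add(book);                     // saved inside an automatic transaction

bool stocked = books.Contains(book); // looks the entity up by its Id
int total = books.Count;             // runs a transacted count query

foreach (var b in books)             // enumerates up to the first 1000 items
{
    Console.WriteLine(b.Name);
}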
There's more...

The Repository pattern reduces data access to its absolute simplest form, but this simplification comes with a price. We lose much of the power of NHibernate behind an abstraction layer. Our application must either do without even basic session methods like Merge, Refresh, and Load, or allow them to leak through the abstraction.

See also

- Transaction Auto-wrapping for the data access layer
- Using Named Queries in the data access layer

Using Named Queries in the data access layer

Named Queries encapsulated in query objects are a powerful combination. In this recipe, we'll show you how to use Named Queries with your data access layer.

Getting ready

To complete this recipe you will need Common Service Locator from Microsoft Patterns & Practices. The documentation and source code can be found at http://commonservicelocator.codeplex.com. Complete the previous recipe, Setting up an NHibernate repository. Include the Eg.Core.Data.Impl assembly as an additional mapping assembly in your test project's App.config with the following XML:

<mapping assembly="Eg.Core.Data.Impl"/>

How to do it...

1. In the Eg.Core.Data project, add a folder for the Queries namespace.
2. Add the following IQuery interfaces:

public interface IQuery
{
}

public interface IQuery<TResult> : IQuery
{
    TResult Execute();
}

3. Add the following IQueryFactory interface:

public interface IQueryFactory
{
    TQuery CreateQuery<TQuery>() where TQuery : IQuery;
}

4. Change the IRepository interface to implement the IQueryFactory interface, as shown in the following code:

public interface IRepository<T> : IEnumerable<T>, IQueryFactory
    where T : Entity
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
    bool Remove(T item);
}

5. In the Eg.Core.Data.Impl project, change the NHibernateRepository constructor and add the _queryFactory field, as shown in the following code:

private readonly IQueryFactory _queryFactory;

public NHibernateRepository(
    ISessionFactory sessionFactory,
    IQueryFactory queryFactory)
    : base(sessionFactory)
{
    _queryFactory = queryFactory;
}

6. Add the following method to NHibernateRepository:

public TQuery CreateQuery<TQuery>() where TQuery : IQuery
{
    return _queryFactory.CreateQuery<TQuery>();
}

7. In the Eg.Core.Data.Impl project, add a folder for the Queries namespace.
8. Install Common Service Locator using the NuGet Package Manager Console, using the following command:
Install-Package CommonServiceLocator

9. To the Queries namespace, add this QueryFactory class:

public class QueryFactory : IQueryFactory
{
    private readonly IServiceLocator _serviceLocator;

    public QueryFactory(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public TQuery CreateQuery<TQuery>() where TQuery : IQuery
    {
        return _serviceLocator.GetInstance<TQuery>();
    }
}

10. Add the following NHibernateQueryBase class:

public abstract class NHibernateQueryBase<TResult>
    : NHibernateBase, IQuery<TResult>
{
    protected NHibernateQueryBase(ISessionFactory sessionFactory)
        : base(sessionFactory) { }

    public abstract TResult Execute();
}

11. Add an empty INamedQuery interface, as shown in the following code:

public interface INamedQuery
{
    string QueryName { get; }
}

12. Add a NamedQueryBase class, as shown in the following code. Note that NHibernate's own IQuery type is referenced with its full namespace to avoid a clash with our IQuery marker interface, and that query execution is wrapped with the WithinTransaction method inherited from NHibernateBase:

public abstract class NamedQueryBase<TResult>
    : NHibernateQueryBase<TResult>, INamedQuery
{
    protected NamedQueryBase(ISessionFactory sessionFactory)
        : base(sessionFactory) { }

    public override TResult Execute()
    {
        var nhQuery = GetNamedQuery();
        return WithinTransaction(() => Execute(nhQuery));
    }

    protected abstract TResult Execute(NHibernate.IQuery query);

    protected virtual NHibernate.IQuery GetNamedQuery()
    {
        var nhQuery = session.GetNamedQuery(QueryName);
        SetParameters(nhQuery);
        return nhQuery;
    }

    protected abstract void SetParameters(NHibernate.IQuery nhQuery);

    public virtual string QueryName
    {
        get { return GetType().Name; }
    }
}

13. In Eg.Core.Data.Impl.Test, add a test fixture named QueryTests inherited from NHibernateFixture.
14. Add the following test and three helper methods:

[Test]
public void NamedQueryCheck()
{
    var errors = new StringBuilder();
    var queryObjectTypes = GetNamedQueryObjectTypes();
    var mappedQueries = GetNamedQueryNames();
    foreach (var queryType in queryObjectTypes)
    {
        var query = GetQuery(queryType);
        if (!mappedQueries.Contains(query.QueryName))
        {
            errors.AppendFormat(
                "Query object {0} references non-existent " +
                "named query {1}.",
                queryType, query.QueryName);
            errors.AppendLine();
        }
    }
    if (errors.Length != 0)
        Assert.Fail(errors.ToString());
}

private IEnumerable<Type> GetNamedQueryObjectTypes()
{
    var namedQueryType = typeof(INamedQuery);
    var queryImplAssembly = typeof(BookWithISBN).Assembly;
    var types = from t in queryImplAssembly.GetTypes()
                where namedQueryType.IsAssignableFrom(t)
                      && t.IsClass
                      && !t.IsAbstract
                select t;
    return types;
}

private IEnumerable<string> GetNamedQueryNames()
{
    var nhCfg = NHConfigurator.Configuration;
    var mappedQueries = nhCfg.NamedQueries.Keys
        .Union(nhCfg.NamedSQLQueries.Keys);
    return mappedQueries;
}

private INamedQuery GetQuery(Type queryType)
{
    return (INamedQuery) Activator.CreateInstance(
        queryType,
        new object[] { SessionFactory });
}

15. For our example query, in the Queries namespace of Eg.Core.Data, add the following interface:

public interface IBookWithISBN : IQuery<Book>
{
    string ISBN { get; set; }
}

16. Add the implementation to the Queries namespace of Eg.Core.Data.Impl using the following code:

public class BookWithISBN : NamedQueryBase<Book>, IBookWithISBN
{
    public BookWithISBN(ISessionFactory sessionFactory)
        : base(sessionFactory) { }

    public string ISBN { get; set; }

    protected override void SetParameters(NHibernate.IQuery nhQuery)
    {
        nhQuery.SetParameter("isbn", ISBN);
    }

    protected override Book Execute(NHibernate.IQuery query)
    {
        return query.UniqueResult<Book>();
    }
}

17. Finally, add the embedded resource mapping, BookWithISBN.hbm.xml, to Eg.Core.Data.Impl with the following XML code:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <query name="BookWithISBN">
    <![CDATA[
    from Book b where b.ISBN = :isbn
    ]]>
  </query>
</hibernate-mapping>

How it works...

As we learned in the previous recipe, according to the repository pattern, the repository is responsible for fulfilling queries, based on the specifications submitted to it. These specifications are limiting. They only concern themselves with whether a particular item matches the given criteria. They don't care about other necessary technical details, such as eager loading of children, batching, query caching, and so on. We need something more powerful than simple where clauses. We lose too much to the abstraction.

The query object pattern defines a query object as a group of criteria that can self-organize into a SQL query. The query object is not responsible for the execution of this SQL. That is handled elsewhere, by some generic query runner, perhaps inside the repository. While a query object can better express the different technical requirements, such as eager loading, batching, and query caching, a generic query runner can't easily implement those concerns for every possible query, especially across the half-dozen query APIs provided by NHibernate. These details about the execution are specific to each query, and should be handled by the query object.

This enhanced query object pattern, as Fabio Maulo has named it, not only self-organizes into SQL but also executes the query, returning the results. In this way, the technical concerns of a query's execution are defined and cared for with the query itself, rather than spreading into some highly complex, generic query runner.

According to the abstraction we've built, the repository represents the collection of entities that we are querying. Since the two are already logically linked, if we allow the repository to build the query objects, we can add some context to our code. For example, suppose we have an application service that runs product queries. When we inject dependencies, we could specify IQueryFactory directly. This doesn't give us much information beyond "This service runs queries." If, however, we inject IRepository<Product>, we have a much better idea about what data the service is using.

The IQuery interface is simply a marker interface for our query objects. Besides advertising the purpose of our query objects, it allows us to easily identify them with reflection. The IQuery<TResult> interface is implemented by each query object. It specifies only the return type and a single method to execute the query. The IQueryFactory interface defines a service to create query objects. For the purpose of explanation, the implementation of this service, QueryFactory, is a simple service locator. IQueryFactory is used internally by the repository to instantiate query objects.

The NamedQueryBase class handles most of the plumbing for query objects, based on named HQL and SQL queries. As a convention, the name of the query is the name of the query object type. That is, the underlying named query for BookWithISBN is also named BookWithISBN. Each individual query object must simply implement SetParameters and Execute(NHibernate.IQuery query), which usually consists of a simple call to query.List<SomeEntity>() or query.UniqueResult<SomeEntity>(). The INamedQuery interface both identifies the query objects based on Named Queries and provides access to the query name. The NamedQueryCheck test uses this to verify that each INamedQuery query object has a matching named query.

Each query has an interface.
This interface is used to request the query object from the repository. It also defines any parameters used in the query. In this example, IBookWithISBN has a single string parameter, ISBN. The implementation of this query object sets the :isbn parameter on the internal NHibernate query, executes it, and returns the matching Book object. Finally, we also create a mapping containing the named query BookWithISBN, which is loaded into the configuration with the rest of our mappings.

The code used to set up and run the query object would look like the following:

var query = bookRepository.CreateQuery<IBookWithISBN>();
query.ISBN = "12345";
var book = query.Execute();

See also

- Transaction Auto-wrapping for the data access layer
- Setting up an NHibernate repository

Summary

In this article, we learned how to set up transaction auto-wrapping for the data access layer, how to set up an NHibernate repository, and how to use Named Queries in the data access layer.

Resources for Article:

Further resources on this subject:
- Memory Management [article]
- Getting Started with Spring Security [article]
- Design with Spring AOP [article]
Introducing the Boost C++ Libraries

Packt
14 Sep 2015
22 min read
In this article written by John Torjo and Wisnu Anggoro, authors of the book Boost.Asio C++ Network Programming - Second Edition, the authors state that "Many programmers have used libraries since this simplifies the programming process. Because they do not need to write the function from scratch anymore, using a library can save much code development time". In this article, we are going to get acquainted with the Boost C++ libraries. Let us prepare our own compiler and text editor to prove the power of the Boost libraries. As we do so, we will discuss the following topics:

- Introducing the C++ standard template library
- Introducing Boost C++ libraries
- Setting up Boost C++ libraries in the MinGW compiler
- Building Boost C++ libraries
- Compiling code that contains Boost C++ libraries

(For more resources related to this topic, see here.)

Introducing the C++ standard template library

The C++ Standard Template Library (STL) is a generic, template-based library that offers generic containers among other things. Instead of dealing with dynamic arrays, linked lists, binary trees, or hash tables directly, programmers can simply use the data structures and algorithms that STL provides. STL is structured around containers, iterators, and algorithms, and their roles are as follows:

- Containers: Their main role is to manage collections of objects of certain kinds, such as arrays of integers or linked lists of strings.
- Iterators: Their main role is to step through the elements of the collections. An iterator works much like a pointer: we can increment it with the ++ operator and access the value it points to with the * operator.
- Algorithms: Their main role is to process the elements of collections. An algorithm uses an iterator to step through the elements; as it does so, it can process each element, for example, by modifying it. It can also search and sort the elements once it has finished iterating over all of them.

Let us examine the three elements that structure STL by creating the following code:

/* stl.cpp */
#include <vector>
#include <iostream>
#include <algorithm>

int main(void)
{
    int temp;
    std::vector<int> collection;
    std::cout << "Please input the collection of integer numbers, input 0 to STOP!\n";
    while (std::cin >> temp)
    {
        if (temp == 0) break;
        collection.push_back(temp);
    }
    std::sort(collection.begin(), collection.end());
    std::cout << "\nThe sorted collection of your integer numbers:\n";
    for (int i : collection)
    {
        std::cout << i << std::endl;
    }
}

Name the preceding code stl.cpp, and run the following command to compile it:

g++ -Wall -ansi -std=c++11 stl.cpp -o stl

Before we dissect this code, let us run it to see what happens. The program asks users to enter as many integers as they like, and then it sorts the numbers. To stop the input and ask the program to start sorting, the user has to input 0. This means that 0 will not be included in the sorting process. Since we do not prevent users from entering non-integer input, such as 3.14 or even a string, the program will simply stop waiting for the next number as soon as a non-integer is entered.

When we ran the program, we entered six integers: 43, 7, 568, 91, 2240, and 56. The last entry, 0, stops the input process. The program then sorts the numbers and we get them in sequential order: 7, 43, 56, 91, 568, and 2240.

Now, let us examine our code to identify the containers, iterators, and algorithms that are contained in the STL.
std::vector<int> collection;

The preceding code snippet declares a container from STL. There are several containers, and we use a vector in the code. A vector manages its elements in a dynamic array, and they can be accessed randomly and directly with the corresponding index. In our code, the container is prepared to hold integer numbers, so we have to define the type of the value inside the angle brackets, <int>. These angle brackets are also called generics in STL.

collection.push_back(temp);
std::sort(collection.begin(), collection.end());

The std::sort() function in the preceding code is an algorithm from STL. It processes the data in the container, and the begin() and end() calls hand it the range to work on, from the first element to one past the last. Before that, we can see the push_back() function, which is used to append an element to the container.

for (int i : collection)
{
    std::cout << i << std::endl;
}

The preceding for block iterates over each element of the integer collection. Each time an element is visited, we can process it separately; in the preceding example, we showed the number to the user. That is how the iterators in STL play their role.

#include <vector>
#include <algorithm>

We include the vector definition to define all vector functions, and the algorithm definition to invoke the sort() function.
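The range-based for loop hides the iterator from us. As a small aside that is not part of the original listing, the same loop written with an explicit iterator shows the pointer-like ++ and * operators described earlier:

// The same output loop with an explicit iterator:
// ++ advances the iterator, * dereferences it.
for (std::vector<int>::iterator it = collection.begin();
     it != collection.end(); ++it)
{
    std::cout << *it << std::endl;
}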
Introducing the Boost C++ libraries

The Boost C++ libraries are a set of libraries that complement the C++ standard library. The set contains more than a hundred libraries that we can use to increase our productivity in C++ programming. It is also used when our requirements go beyond what is available in the STL. Boost provides source code under the Boost Software License, which means that we are allowed to use, modify, and distribute the libraries for free, even for commercial use. The development of Boost is handled by the Boost community, which consists of C++ developers from around the world. The mission of the community is to develop high-quality libraries as a complement to STL. Only proven libraries will be added to the Boost libraries.

For detailed information about the Boost libraries, go to www.boost.org. If you want to contribute to the development of Boost libraries, you can join the developer mailing list at lists.boost.org/mailman/listinfo.cgi/boost. The entire source code of the libraries is available on the official GitHub page at github.com/boostorg.

Advantages of Boost libraries

As we know, using Boost libraries will increase programmer productivity. Moreover, by using Boost libraries, we get advantages such as these:

- It is open source, so we can inspect the source code and modify it if needed.
- Its license allows us to develop both open source and closed source projects. It also allows us to commercialize our software freely.
- It is well documented; all the libraries are explained, along with sample code, on the official site.
- It supports almost any modern operating system, such as Windows and Linux. It also supports many popular compilers.
- It is a complement to STL instead of a replacement. This means that using Boost libraries eases those programming tasks that are not yet handled by STL. In fact, many parts of Boost have been included in the standard C++ library.

Preparing Boost libraries for the MinGW compiler

Before we program our C++ applications using Boost libraries, the libraries need to be configured so that they are recognized by the MinGW compiler. Here we are going to prepare our programming environment so that our compiler is able to use the Boost libraries.

Downloading Boost libraries

The best source from which to download Boost is the official download page. We can go there by pointing our internet browser to www.boost.org/users/download. Find the Download link in the Current Release section. At the time of writing, the current version of the Boost libraries is 1.58.0, but when you read this article, the version may have changed. If so, you can still choose the current release, because a higher version should remain compatible with the lower one; however, you will then have to adjust the version-specific paths in the settings we discuss later. Choosing the same version will make it easier for you to follow all the instructions in this article.

There are four file formats to choose from for the download: .zip, .tar.gz, .tar.bz2, and .7z. There is no difference among the four files except their size; the largest is the ZIP format and the smallest is the 7Z format. Because of the file size, Boost recommends that we download the 7Z format. From the comparison on the download page, the size of the ZIP version is 123.1 MB, while the size of the 7Z version is 65.2 MB, which means the ZIP version is almost twice the size of the 7Z version. Therefore, they suggest that you choose the 7Z format to reduce download and decompression time. Let us choose boost_1_58_0.7z for download and save it to our local storage.

Deploying Boost libraries

After we have boost_1_58_0.7z in our local storage, decompress it using the 7-Zip application and save the decompressed files to C:\boost_1_58_0. The 7-Zip application can be grabbed from www.7-zip.org/download.html. The directory should then contain the full Boost file structure.

Instead of browsing to the Boost download page and searching for the Boost version manually, we can go directly to sourceforge.net/projects/boost/files/boost/1.58.0. This will be useful when the 1.58.0 version is no longer the current release.

Using Boost libraries

Most libraries in Boost are header-only; this means that all declarations and definitions of functions, including namespaces and macros, are visible to the compiler and there is no need to compile them separately. We can now try to use Boost with a program that converts a string into an int value, as follows:

/* lexical.cpp */
#include <boost/lexical_cast.hpp>
#include <string>
#include <iostream>

int main(void)
{
    try
    {
        std::string str;
        std::cout << "Please input first number: ";
        std::cin >> str;
        int n1 = boost::lexical_cast<int>(str);
        std::cout << "Please input second number: ";
        std::cin >> str;
        int n2 = boost::lexical_cast<int>(str);
        std::cout << "The sum of the two numbers is ";
        std::cout << n1 + n2 << "\n";
        return 0;
    }
    catch (const boost::bad_lexical_cast &e)
    {
        std::cerr << e.what() << "\n";
        return 1;
    }
}

Open the Notepad++ application, type the preceding code, and save it as lexical.cpp in C:\CPP. Now open the command prompt, point the active directory to C:\CPP, and then type the following command:

g++ -Wall -ansi lexical.cpp -Ic:\boost_1_58_0 -o lexical

We have a new option here, which is -I (the "include" option). This option is used along with the full path of a directory to inform the compiler that we have another header directory that we want to include in our code. Since we store our Boost libraries in c:\boost_1_58_0, we can use -Ic:\boost_1_58_0 as an additional parameter. In lexical.cpp, we apply boost::lexical_cast to convert string type data into int type data.
The program will ask the user to input two numbers and will then automatically find their sum. If the user inputs an inappropriate number, the program will report that an error has occurred. The Boost.LexicalCast library is provided by Boost for data type casting purposes (converting numeric types such as int, double, or float into string types, and vice versa). Now, let us dissect lexical.cpp for a more detailed understanding of what it does:

#include <boost/lexical_cast.hpp>
#include <string>
#include <iostream>

We include boost/lexical_cast.hpp because the boost::lexical_cast function is declared in the lexical_cast.hpp header file, whilst the string header is included to apply the std::string function, and the iostream header is included to apply the std::cin, std::cout, and std::cerr functions.

int n1 = boost::lexical_cast<int>(str);
...
int n2 = boost::lexical_cast<int>(str);

We used the preceding two separate lines to convert the user-provided input strings into the int data type. Then, after converting the data type, we summed up both int values. We can also see the try-catch block in the preceding code. It is used to catch the error if the user inputs an inappropriate number, that is, anything other than the digits 0 to 9.

catch (const boost::bad_lexical_cast &e)
{
    std::cerr << e.what() << "\n";
    return 1;
}

The preceding code snippet catches the error and tells the user exactly what the error message is, by using boost::bad_lexical_cast. We call the e.what() function to obtain the string of the error message.

Now let us run the application by typing lexical at the command prompt. If we put in 10 for the first input and 20 for the second input, the result is 30, because the program just sums up both inputs. But what will happen if we put in a non-numerical value, for instance, Packt? Once the application finds the error, it ignores the next statement and goes directly to the catch block. By using the e.what() function, the application can get the error message and show it to the user. In our example, we obtain bad lexical cast: source type value could not be interpreted as target as the error message, because we tried to assign string data to an int type variable.
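The conversion also works in the other direction. As a small extra sketch, not taken from the original article, here is the reverse cast from a numeric type to a string; it compiles with the same -Ic:\boost_1_58_0 option and needs no separately built binaries, because Boost.LexicalCast is header-only:

/* lexical_reverse.cpp -- extra sketch, not from the original article */
#include <boost/lexical_cast.hpp>
#include <string>
#include <iostream>

int main(void)
{
    // Convert an int to a std::string without stringstream plumbing.
    std::string text = boost::lexical_cast<std::string>(42);
    std::cout << "The answer as a string: " << text << "\n";
    return 0;
}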
Building Boost libraries

As we discussed previously, most libraries in Boost are header-only, but not all of them. There are some libraries that have to be built separately. They are:

- Boost.Chrono: This is used to work with a variety of clocks, such as the current time, the range between two times, or the time spent in a process.
- Boost.Context: This is used to create higher-level abstractions, such as coroutines and cooperative threads.
- Boost.Filesystem: This is used to deal with files and directories, such as obtaining a file path or checking whether a file or directory exists.
- Boost.GraphParallel: This is an extension of the Boost Graph Library (BGL) for parallel and distributed computing.
- Boost.IOStreams: This is used to write and read data using streams. For instance, it can load the contents of a file into memory or write compressed data in GZIP format.
- Boost.Locale: This is used to localize an application, in other words, to translate the application interface into the user's language.
- Boost.MPI: This is used to develop programs that execute tasks concurrently. MPI itself stands for Message Passing Interface.
- Boost.ProgramOptions: This is used to parse command-line options. Instead of using the argv variable in the main parameter, it uses a double minus (--) to separate each command-line option.
- Boost.Python: This is used to interoperate between C++ code and the Python language.
- Boost.Regex: This is used to apply regular expressions in our code. However, if our environment supports C++11, we no longer depend on the Boost.Regex library, since the same functionality is available in the regex header file.
- Boost.Serialization: This is used to convert objects into a series of bytes that can be saved and then restored again into the same objects.
- Boost.Signals: This is used to create signals. A signal triggers an event that runs the functions attached to it.
- Boost.System: This is used to define errors. It contains four classes: system::error_code, system::error_category, system::error_condition, and system::system_error. All of these classes are inside the boost namespace. It is also supported in the C++11 environment, but because many Boost libraries use Boost.System, it is necessary to keep including Boost.System.
- Boost.Thread: This is used for threaded programming. It provides classes to synchronize access to data across multiple threads. It is also supported in C++11 environments, but it offers extensions, such as the ability to interrupt a thread.
- Boost.Timer: This is used to measure code performance by using clocks. It measures elapsed time based on both the wall clock and CPU time, which states how much time has been spent executing the code.
- Boost.Wave: This provides a reusable C preprocessor that we can use in our C++ code.

There are also a few libraries that have optional, separately compiled binaries. They are as follows:

- Boost.DateTime: It is used to process time data; for instance, calendar dates and times. It has a binary component that is only needed if we use the to_string or from_string functions or the serialization features. It is also needed if we target our application at Visual C++ 6.x or Borland.
- Boost.Graph: It is used to work with graph structures and algorithms. It has a binary component that is only needed if we intend to parse GraphViz files.
- Boost.Math: It is used to deal with mathematical formulas. It has binary components for the cmath functions.
- Boost.Random: It is used to generate random numbers. It has a binary component that is only needed if we want to use random_device.
- Boost.Test: It is used to write and organize test programs and their runtime execution. It can be used in header-only or separately compiled mode, but separate compilation is recommended for serious use.
- Boost.Exception: It is used to add data to an exception after it has been thrown. It provides a non-intrusive implementation of exception_ptr for 32-bit _MSC_VER==1310 and _MSC_VER==1400, which requires a separately compiled binary. This is enabled by #define BOOST_ENABLE_NON_INTRUSIVE_EXCEPTION_PTR.

Let us try to recreate the random number generator, but now using the Boost.Random library instead of std::rand() from the C++ standard library.
Let us take a look at the following code:

/* rangen_boost.cpp */
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int_distribution.hpp>
#include <iostream>

int main(void)
{
    int guessNumber;
    std::cout << "Select number among 0 to 10: ";
    std::cin >> guessNumber;
    if (guessNumber < 0 || guessNumber > 10)
    {
        return 1;
    }
    boost::random::mt19937 rng;
    boost::random::uniform_int_distribution<> ten(0, 10);
    int randomNumber = ten(rng);
    if (guessNumber == randomNumber)
    {
        std::cout << "Congratulations, " << guessNumber
                  << " is your lucky number.\n";
    }
    else
    {
        std::cout << "Sorry, I'm thinking about number "
                  << randomNumber << "\n";
    }
    return 0;
}

We can compile the preceding source code by using the following command:

g++ -Wall -ansi -Ic:/boost_1_58_0 rangen_boost.cpp -o rangen_boost

Now, let us run the program. Unfortunately, for the three times that I ran the program, I always obtained the same random number, 8. This is because we apply the Mersenne Twister, a Pseudorandom Number Generator (PRNG), which uses the default seed as its source of randomness, so it generates the same number every time the program is run. And of course, this is not the program we expect.

Now, we will rework the program, changing just two lines. First, find the following line:

#include <boost/random/mersenne_twister.hpp>

Change it as follows:

#include <boost/random/random_device.hpp>

Next, find the following line:

boost::random::mt19937 rng;

Change it as follows:

boost::random::random_device rng;

Then, save the file as rangen2_boost.cpp and compile it with a command like the one we used for rangen_boost.cpp:

g++ -Wall -ansi -Ic:/boost_1_58_0 rangen2_boost.cpp -o rangen2_boost

Sadly, something will go wrong, and the compiler will show the following error message:

cc8KWVvX.o:rangen2_boost.cpp:(.text$_ZN5boost6random6detail20generate_uniform_intINS0_13random_deviceEjEET0_RT_S4_S4_N4mpl_5bool_ILb1EEE[_ZN5boost6random6detail20generate_uniform_intINS0_13random_deviceEjEET0_RT_S4_S4_N4mpl_5bool_ILb1EEE]+0x24f): more undefined references to `boost::random::random_device::operator()()' follow
collect2.exe: error: ld returned 1 exit status

This is because, as we discussed earlier, the Boost.Random library needs to be compiled separately if we want to use the random_device attribute.

Boost libraries have a system to compile or build Boost itself, called the Boost.Build library. There are two steps we have to complete to install the Boost.Build library. First, run Bootstrap by pointing the active directory at the command prompt to C:\boost_1_58_0 and typing the following command:

bootstrap.bat mingw

We use our MinGW compiler as our toolset for compiling the Boost library. Wait a moment; we will get the following output if the process succeeds:

Building Boost.Build engine
Bootstrapping is done. To build, run:
    .\b2
To adjust configuration, edit 'project-config.jam'.
Further information:
    - Command line help:
      .\b2 --help
    - Getting started guide:
      http://boost.org/more/getting_started/windows.html
    - Boost.Build documentation:
      http://www.boost.org/build/doc/html/index.html

In this step, we will find four new files in the Boost library's root directory. They are:

- b2.exe: This is the executable file used to build the Boost libraries.
- bjam.exe: This is exactly the same as b2.exe, but it is a legacy version.
- bootstrap.log: This contains the logs from the bootstrap process.
- project-config.jam: This contains settings that will be used in the building process when we run b2.exe.

We will also find that this step creates a new directory, C:\boost_1_58_0\tools\build\src\engine\bin.ntx86, which contains a bunch of .obj files associated with the Boost libraries that need to be compiled.

After that, run the second step by typing the following command at the command prompt:

b2 install toolset=gcc

Grab yourself a cup of coffee after running that command, because it will take about twenty to fifty minutes to finish, depending on your system specifications. The last output we get will be like this:

...updated 12562 targets...

This means that the process is complete and we have now built the Boost libraries. If we check in our explorer, the Boost.Build library has added C:\boost_1_58_0\stage\lib, which contains a collection of static and dynamic libraries that we can use directly in our programs.

bootstrap.bat and b2.exe use msvc (the Microsoft Visual C++ compiler) as the default toolset, and many Windows developers already have msvc installed on their machines. Since we have installed the GCC compiler, we set the mingw and gcc toolset options in Boost's build. If you also have msvc installed and want to use it in Boost's build, the toolset options can be omitted.

Now, let us try to compile the rangen2_boost.cpp file again, this time with the following command:

c:\CPP>g++ -Wall -ansi -Ic:/boost_1_58_0 rangen2_boost.cpp -Lc:\boost_1_58_0\stage\lib -lboost_random-mgw49-mt-1_58 -lboost_system-mgw49-mt-1_58 -o rangen2_boost

We have two new options here: -L and -l. The -L option is used to define the path that contains the library files, if they are not in the active directory. The -l option is used to define the name of the library file, omitting the leading lib in the file name. In this case, the original library file name is libboost_random-mgw49-mt-1_58.a, and we omit the lib prefix and the file extension for the -l option.

The new file, rangen2_boost.exe, will be created in C:\CPP. But before we can run the program, we have to ensure that the directory where the program is installed contains the two dependent library files: libboost_random-mgw49-mt-1_58.dll and libboost_system-mgw49-mt-1_58.dll. We can get them from the library directory, c:\boost_1_58_0\stage\lib. Just to make it easy for us to run the program, run the following copy commands to copy the two library files to C:\CPP:

copy c:\boost_1_58_0\stage\lib\libboost_random-mgw49-mt-1_58.dll c:\cpp
copy c:\boost_1_58_0\stage\lib\libboost_system-mgw49-mt-1_58.dll c:\cpp

And now the program should run smoothly.

In order to create a network application, we are going to use the Boost.Asio library. Boost.Asio is not among the libraries that must be compiled separately, so it might seem that we do not need to build Boost at all, since Boost.Asio is header-only. This is true as far as it goes, but Boost.Asio depends on Boost.System, and Boost.System needs to be built before being used, so it is important to build Boost first before we can use it to create our network application.

For the -I and -L options, the compiler does not care whether we use a backslash (\) or a slash (/) to separate each directory name in the path, because the compiler can handle both Windows and Unix path styles.
Summary

We saw that the Boost C++ libraries were developed to complement the standard C++ library. We have also been able to set up our MinGW compiler to compile code that contains Boost libraries, and to build the binaries of the libraries that have to be compiled separately. Please remember that although we can use the Boost.Asio library as a header-only library, it is better to build all the Boost libraries using the Boost.Build library. That way, it will be easy for us to use all the libraries without worrying about compilation failures.

Resources for Article:

Further resources on this subject:
- Actors and Pawns [article]
- What is Quantitative Finance? [article]
- Program structure, execution flow, and runtime objects [article]