Playing with Physics

Packt
03 Jun 2015
26 min read
In this article by Maxime Barbier, author of the book SFML Blueprints, we will add physics to our game and turn it into a new one. Along the way, we will learn:

- What a physics engine is
- How to install and use the Box2D library
- How to pair the physics engine with SFML for the display
- How to add physics to the game

In this article, we will learn the magic of physics. We will also do some mathematics, but relax, it's for conversion only. Now, let's go!

(For more resources related to this topic, see here.)

A physics engine – késako?

We will speak about physics engines, but the first question is "what is a physics engine?", so let's explain it. A physics engine is a piece of software or a library that is able to simulate physics, for example, the Newton-Euler equations that describe the movement of a rigid body. A physics engine is also able to manage collisions, and some of them can deal with soft bodies and even fluids.

There are different kinds of physics engines, mainly categorized into real-time and non-real-time engines. The first kind is mostly used in video games or simulators, and the second in high-performance scientific simulation, in the creation of special effects for cinema, and in animation. As our goal is to use the engine in a video game, let's focus on real-time engines. Here again, there are two important types of engine: one for 2D and the other for 3D. Of course you can use a 3D engine in a 2D world, but it's preferable to use a 2D engine for optimization purposes. There are plenty of engines, but not all of them are open source.

3D physics engines

For 3D games, I advise you to use the Bullet physics library. It was integrated into the Blender software, and has been used in the creation of some commercial games and in the making of films. It is a really good engine written in C/C++ that can deal with rigid and soft bodies, fluids, collisions, forces… everything you need.

2D physics engines

As previously said, in a 2D environment you can use a 3D physics engine; you just have to ignore the depth (the Z axis). However, the most interesting option is to use an engine optimized for the 2D environment. There are several engines like this, and the most famous are Box2D and Chipmunk. Both of them are really good and neither is better than the other, but I had to make a choice, and it was Box2D. I made this choice not only because of its C++ API, which allows you to use overloading, but also because of the big community involved in the project.

Physics engines versus game engines

Do not mistake a physics engine for a game engine. A physics engine only simulates a physical world, nothing else: there are no graphics and no game logic, only physics simulation. A game engine, on the contrary, most of the time includes a physics engine paired with a rendering technology (such as OpenGL or DirectX), some predefined logic that depends on the purpose of the engine (RPG, FPS, and so on), and sometimes artificial intelligence. So as you can see, a game engine is more complete than a physics engine. The two best-known engines are Unity and Unreal Engine, which are both very complete. Moreover, they are free for non-commercial usage.

So why don't we directly use a game engine? This is a good question. Sometimes it's better to use something that is already made instead of reinventing it. However, do we really need all the functionality of a game engine for this project? More importantly, what do we need it for?
Let's see what we actually need:

- A graphical output
- A physics engine that can manage collisions

Nothing else is required. So as you can see, using a game engine for this project would be like killing a fly with a bazooka. I hope that you have understood the aim of a physics engine, the differences between a game engine and a physics engine, and the reason for the choices made for this project.

Using Box2D

As previously said, Box2D is a physics engine. It has a lot of features, but the most important for this project are the following (taken from the Box2D documentation):

- Collision: this functionality is very interesting as it allows our tetriminos to interact with each other
  - Continuous collision detection
  - Rigid bodies (convex polygons and circles)
  - Multiple shapes per body
- Physics: this functionality will allow a piece to fall down, and more
  - Continuous physics with a time-of-impact solver
  - Joint limits, motors, and friction
  - Fairly accurate reaction forces/impulses

As you can see, Box2D provides all that we need in order to build our game. There are a lot of other features available in this engine, but they don't interest us right now, so I will not describe them in detail. However, if you are interested, you can take a look at the official website for more details on the Box2D features (http://box2d.org/about/).

It's important to note that Box2D uses meters, kilograms, seconds, and radians (for angles) as units, while SFML uses pixels, seconds, and degrees. So we will need to make some conversions. I will come back to this later.

Preparing Box2D

Now that Box2D has been introduced, let's install it. You will find the list of available versions on the Google Code project page at https://code.google.com/p/box2d/downloads/list. Currently, the latest stable version is 2.3. Once you have downloaded the source code (from a compressed file or using SVN), you will need to build it.

Install

Once you have successfully built your Box2D library, you will need to configure your system or IDE to find the Box2D library and headers. The newly built library can be found in the /path/to/Box2D/build/Box2D/ directory and is named libBox2D.a. The headers, on the other hand, are located in the /path/to/Box2D/Box2D/ directory. If everything is okay, you will find a Box2D.h file in that folder. On Linux, the following command adds Box2D to your system without requiring any configuration:

sudo make install

Pairing Box2D and SFML

Now that Box2D is installed and your system is configured to find it, let's build the physics "hello world": a falling square. Remember that Box2D uses meters, kilograms, seconds, and radians for angles, while SFML uses pixels, seconds, and degrees, so we will need to make some conversions. Converting radians to degrees or vice versa is not difficult, but pixels to meters… that is another story. In fact, there is no way to convert a pixel to a meter unless the number of pixels per meter is fixed, so this is the technique we will use. Let's start by creating some utility functions. We should be able to convert radians to degrees, degrees to radians, meters to pixels, and finally pixels to meters. We will also need to fix the pixels-per-meter value. As we don't need any class for these functions, we will define them in a converter namespace.
This results in the following code snippet:

namespace converter
{
    constexpr double PIXELS_PER_METERS = 32.0;
    constexpr double PI = 3.14159265358979323846;

    template<typename T>
    constexpr T pixelsToMeters(const T& x) { return x / PIXELS_PER_METERS; }

    template<typename T>
    constexpr T metersToPixels(const T& x) { return x * PIXELS_PER_METERS; }

    template<typename T>
    constexpr T degToRad(const T& x) { return PI * x / 180.0; }

    template<typename T>
    constexpr T radToDeg(const T& x) { return 180.0 * x / PI; }
}

As you can see, there is no difficulty here. We start by defining some constants and then the conversion functions. I've chosen to make the functions templates to allow the use of any numeric type; in practice, it will mostly be double or int. The conversion functions are also declared constexpr to allow the compiler to compute the value at compile time when possible (for example, with a constant as a parameter). This is interesting because we will use these primitives a lot.
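Before wiring Box2D to SFML, here is a quick sanity check of these helpers — a minimal sketch, assuming the converter namespace above is available in a header (converter.hpp is a file name used here only for illustration):

#include <iostream>
#include "converter.hpp" // assumed header containing the converter namespace above

int main()
{
    // With 32 pixels per meter, these conversions are exact:
    std::cout << converter::metersToPixels(1.0) << '\n';  // 32
    std::cout << converter::pixelsToMeters(64.0) << '\n'; // 2
    std::cout << converter::radToDeg(converter::PI) << '\n'; // 180
    // Because the helpers are constexpr, constant arguments can be checked at compile time:
    static_assert(converter::metersToPixels(2.0) == 64.0, "conversion check");
    return 0;
}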
Box2D, how does it work?

Now that we can convert SFML units to Box2D units and vice versa, we can pair Box2D with SFML. But first, how exactly does Box2D work? Box2D works like most physics engines:

1. You start by creating an empty world with some gravity.
2. Then, you create some object patterns. Each pattern contains the shape of the object, its position, its type (static or dynamic), and some other characteristics such as its density, friction, and energy restitution.
3. You ask the world to create a new object defined by the pattern.
4. In each game loop, you update the physical world with a small step, just like the worlds in the games we've already made.

Because the physics engine does not display anything on the screen, we will need to loop over all the objects and display them ourselves. Let's start by creating a simple scene with two kinds of objects: a ground and squares. The ground will be fixed and the squares will not. The squares will be generated by a user event: a mouse click. This project is very simple, but the goal is to show you how to use Box2D and SFML together in a simple case study. A more complex one will come later. We will need three functionalities for this small project:

- Create a shape
- Display the world
- Update/fill the world

Of course, there is also the initialization of the world and the window. Let's start with the main function:

1. As always, we create a window for the display and we limit the frame rate to 60 FPS. I will come back to this point with the displayWorld function.
2. We create the physical Box2D world, with the gravity as a parameter.
3. We create a container that will store all the physical objects, for memory cleanup purposes.
4. We create the ground by calling the createBox function (explained just after).
5. Now it is time for the minimalist game loop:
   - close event management
   - create a box when the left mouse button is pressed
6. Finally, we clean up the memory before exiting the program.

int main(int argc, char* argv[])
{
    sf::RenderWindow window(sf::VideoMode(800, 600, 32), "04_Basic");
    window.setFramerateLimit(60);

    b2Vec2 gravity(0.f, 9.8f);
    b2World world(gravity);

    std::list<b2Body*> bodies;
    bodies.emplace_back(book::createBox(world, 400, 590, 800, 20, b2_staticBody));

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        if (sf::Mouse::isButtonPressed(sf::Mouse::Left))
        {
            int x = sf::Mouse::getPosition(window).x;
            int y = sf::Mouse::getPosition(window).y;
            bodies.emplace_back(book::createBox(world, x, y, 32, 32));
        }
        displayWorld(world, window);
    }

    for (b2Body* body : bodies)
    {
        delete static_cast<sf::RectangleShape*>(body->GetUserData());
        world.DestroyBody(body);
    }
    return 0;
}

For the moment, except for the Box2D world, nothing should surprise you, so let's continue with the box creation. This function lives in the book namespace.

b2Body* createBox(b2World& world, int pos_x, int pos_y, int size_x, int size_y, b2BodyType type = b2_dynamicBody)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = type;

    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(size_x / 2.0),
                     converter::pixelsToMeters<double>(size_y / 2.0));

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.4;
    fixtureDef.restitution = 0.5;
    fixtureDef.shape = &b2shape;

    b2Body* res = world.CreateBody(&bodyDef);
    res->CreateFixture(&fixtureDef);

    sf::Shape* shape = new sf::RectangleShape(sf::Vector2f(size_x, size_y));
    shape->setOrigin(size_x / 2.0, size_y / 2.0);
    shape->setPosition(sf::Vector2f(pos_x, pos_y));
    if (type == b2_dynamicBody)
        shape->setFillColor(sf::Color::Blue);
    else
        shape->setFillColor(sf::Color::White);

    res->SetUserData(shape);

    return res;
}

This function contains a lot of new functionality. Its goal is to create a rectangle of a specific size at a predefined position, whose type (dynamic or static) is set by the caller. Here again, let's explain the function step by step:

1. We create a b2BodyDef. This object contains the definition of the body to create, so we set its position and type. The position is relative to the object's center of gravity.
2. Then, we create a b2PolygonShape. This is the physical shape of the object — in our case, a box. Note that the SetAsBox() method doesn't take the same parameters as sf::RectangleShape: its parameters are half the size of the box, which is why we divide the values by two.
3. We create a b2FixtureDef and initialize it. This object holds all the physical characteristics of the object: its density, friction, restitution, and shape.
4. Then, we properly create the object in the physical world.
5. Now, we create the display of the object. This will be more familiar because we only use SFML here. We create a rectangle and set its position, origin, and color.
6. As we need to associate an SFML drawable with the physical object, we use a Box2D feature: the SetUserData() function. This function takes a void* as a parameter and holds it internally, so we use it to keep track of our SFML shape.
7. Finally, the body is returned by the function. This pointer has to be stored so that we can clean up the memory later; this is the reason for the bodies container in main().

Now we have the ability to simply create a box and add it to the world, so let's render it to the screen. This is the goal of the displayWorld function:

void displayWorld(b2World& world, sf::RenderWindow& render)
{
    world.Step(1.0/60, int32(8), int32(3));
    render.clear();
    for (b2Body* body = world.GetBodyList(); body != nullptr; body = body->GetNext())
    {
        sf::Shape* shape = static_cast<sf::Shape*>(body->GetUserData());
        shape->setPosition(converter::metersToPixels(body->GetPosition().x),
                           converter::metersToPixels(body->GetPosition().y));
        shape->setRotation(converter::radToDeg<double>(body->GetAngle()));
        render.draw(*shape);
    }
    render.display();
}

This function takes the physics world and the window as parameters. Here again, let's explain it step by step:

1. We update the physical world. If you remember, we set the frame rate to 60; this is why we use 1.0/60 as the time step here. The two other parameters affect precision only. In production code the time step should not be hardcoded like this; we should use a clock to be sure that the value is always correct. That hasn't been done here so that we can focus on the important part: the physics.
2. We clear the screen, as usual.
3. Here is the new part: we loop over the bodies stored in the world and get back each SFML shape. We update the SFML shape with the information taken from the physical body and then draw it.
4. Finally, we render the result on the screen.
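As an aside, here is one way the hardcoded time step mentioned in step 1 could be replaced with a clock-driven, fixed step — a minimal sketch rather than the book's implementation, assuming the same world and window objects:

sf::Clock frameClock;          // measures the real time elapsed between frames
float accumulator = 0.f;
const float timeStep = 1.f / 60.f; // the physics step always stays the same

while (window.isOpen())
{
    // ... event handling as before ...
    accumulator += frameClock.restart().asSeconds();
    while (accumulator >= timeStep)
    {
        world.Step(timeStep, 8, 3); // always step the simulation by the same amount
        accumulator -= timeStep;
    }
    // ... clear, update and draw each body's shape, display, as in displayWorld() ...
}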
As you can see, it's not really difficult to pair SFML with Box2D, and adding it is not painful. However, we do have to take care of the data conversions — this is the real trap. Pay attention to the precision required (int, float, double) and everything should be fine. Now that you have all the keys in hand, let's build a real game with physics.

Adding physics to a game

Now that Box2D has been introduced with a basic project, let's focus on the real one. We will modify our basic Tetris to get Gravity-Tetris, alias Gravitris. The game controls will be the same as in Tetris, but the game engine will not be: we will replace the board with a real physics engine.

With this project, we will reuse a lot of the work previously done. As already said, the goal of some of our classes is to be reusable in any game using SFML. Here, this will be done without any difficulty, as you will see. The classes concerned are those that deal with user events — Action, ActionMap, ActionTarget — but also Configuration and ResourceManager. There are still some changes to make in the Configuration class, more precisely in its enums and initialization methods, because we don't use exactly the same sounds and events as in the Asteroid game. So we need to adjust them to our needs.

Enough with explanations, let's do it with the following code:

class Configuration
{
    public:
        Configuration() = delete;
        Configuration(const Configuration&) = delete;
        Configuration& operator=(const Configuration&) = delete;

        enum Fonts : int { Gui };
        static ResourceManager<sf::Font, int> fonts;

        enum PlayerInputs : int { TurnLeft, TurnRight, MoveLeft, MoveRight, HardDrop };
        static ActionMap<int> playerInputs;

        enum Sounds : int { Spawn, Explosion, LevelUp };
        static ResourceManager<sf::SoundBuffer, int> sounds;

        enum Musics : int { Theme };
        static ResourceManager<sf::Music, int> musics;

        static void initialize();

    private:
        static void initTextures();
        static void initFonts();
        static void initSounds();
        static void initMusics();
        static void initPlayerInputs();
};

As you can see, the changes are in the enums, more precisely in Sounds and PlayerInputs; we change their values to ones better adapted to this project. We still have the font and the music theme. Now, take a look at the initialization methods that have changed:

void Configuration::initSounds()
{
    sounds.load(Sounds::Spawn, "media/sounds/spawn.flac");
    sounds.load(Sounds::Explosion, "media/sounds/explosion.flac");
    sounds.load(Sounds::LevelUp, "media/sounds/levelup.flac");
}

void Configuration::initPlayerInputs()
{
    playerInputs.map(PlayerInputs::TurnRight, Action(sf::Keyboard::Up));
    playerInputs.map(PlayerInputs::TurnLeft, Action(sf::Keyboard::Down));
    playerInputs.map(PlayerInputs::MoveLeft, Action(sf::Keyboard::Left));
    playerInputs.map(PlayerInputs::MoveRight, Action(sf::Keyboard::Right));
    playerInputs.map(PlayerInputs::HardDrop, Action(sf::Keyboard::Space, Action::Type::Released));
}

No real surprises here: we simply adjust the resources to the needs of the project. As you can see, the changes are minimal and easily done. This is the aim of all reusable modules and classes. One piece of advice, however: keep your code as modular as possible. This will allow you to change one part very easily, and also to import any generic part of your project into another one.

The Piece class

Now that the configuration class is done, the next step is the Piece class. This class will be the most heavily modified one — actually, there is so much change involved that we will build it from scratch. A piece has to be considered as an ensemble of four squares that are independent from one another. This will allow us to split a piece at runtime. Each of these squares will be a different fixture attached to the same body, the piece. We will also need to apply forces to a piece — especially the current piece, which is controlled by the player — to move it horizontally or rotate it. Finally, we will need to draw the piece on the screen.
The result is the following code snippet:

constexpr int BOOK_BOX_SIZE = 32;
constexpr int BOOK_BOX_SIZE_2 = BOOK_BOX_SIZE / 2;

class Piece : public sf::Drawable
{
    public:
        Piece(const Piece&) = delete;
        Piece& operator=(const Piece&) = delete;

        enum TetriminoTypes { O=0, I, S, Z, L, J, T, SIZE };
        static const sf::Color TetriminoColors[TetriminoTypes::SIZE];

        Piece(b2World& world, int pos_x, int pos_y, TetriminoTypes type, float rotation);
        ~Piece();

        void update();
        void rotate(float angle);
        void moveX(int direction);
        b2Body* getBody() const;

    private:
        virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const override;
        b2Fixture* createPart(int pos_x, int pos_y, TetriminoTypes type); ///< position is relative to the piece, in matrix coordinates (0 to 3)
        b2Body* _body;
        b2World& _world;
};

Some parts of the class don't change, such as the TetriminoTypes and TetriminoColors enums. This is normal because we don't change any piece's shape or colors; those stay the same. The implementation of the class, on the other hand, is very different from the previous version. Let's see it:

Piece::Piece(b2World& world, int pos_x, int pos_y, TetriminoTypes type, float rotation) : _world(world)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = b2_dynamicBody;
    bodyDef.angle = converter::degToRad(rotation);
    _body = world.CreateBody(&bodyDef);

    switch (type)
    {
        case TetriminoTypes::O : {
            createPart(0,0,type); createPart(0,1,type);
            createPart(1,0,type); createPart(1,1,type);
        } break;
        case TetriminoTypes::I : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(3,0,type);
        } break;
        case TetriminoTypes::S : {
            createPart(0,1,type); createPart(1,1,type);
            createPart(1,0,type); createPart(2,0,type);
        } break;
        case TetriminoTypes::Z : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,1,type);
        } break;
        case TetriminoTypes::L : {
            createPart(0,1,type); createPart(0,0,type);
            createPart(1,0,type); createPart(2,0,type);
        } break;
        case TetriminoTypes::J : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(2,1,type);
        } break;
        case TetriminoTypes::T : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,0,type);
        } break;
        default: break;
    }
    _body->SetUserData(this);
    update();
}

The constructor is the most important method of this class. It initializes the physical body and adds each square to it by calling createPart(). Then, we set the user data of the body to the piece itself, which allows us to navigate from the physics world to SFML and vice versa.
Finally, we synchronize the physical object with the drawable by calling the update() method.

Piece::~Piece()
{
    for (b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr; fixture = fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        fixture->SetUserData(nullptr);
        delete shape;
    }
    _world.DestroyBody(_body);
}

The destructor loops over all the fixtures attached to the body, destroys all the SFML shapes, and then removes the body from the world.

b2Fixture* Piece::createPart(int pos_x, int pos_y, TetriminoTypes type)
{
    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
                     converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
                     b2Vec2(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2 + (pos_x * BOOK_BOX_SIZE)),
                            converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2 + (pos_y * BOOK_BOX_SIZE))),
                     0);

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution = 0.4;
    fixtureDef.shape = &b2shape;

    b2Fixture* fixture = _body->CreateFixture(&fixtureDef);

    sf::ConvexShape* shape = new sf::ConvexShape((unsigned int) b2shape.GetVertexCount());
    shape->setFillColor(TetriminoColors[type]);
    shape->setOutlineThickness(1.0f);
    shape->setOutlineColor(sf::Color(128,128,128));
    fixture->SetUserData(shape);

    return fixture;
}

This method adds a square to the body at a specific place. It starts by creating the physical shape of the desired box and adds it to the body. It also creates the SFML square that will be used for the display and attaches it as user data to the fixture. We don't set the initial position here because the constructor takes care of that.

void Piece::update()
{
    const b2Transform& xf = _body->GetTransform();

    for (b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr; fixture = fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        const b2PolygonShape* b2shape = static_cast<b2PolygonShape*>(fixture->GetShape());
        const uint32 count = b2shape->GetVertexCount();
        for (uint32 i = 0; i < count; ++i)
        {
            b2Vec2 vertex = b2Mul(xf, b2shape->m_vertices[i]);
            shape->setPoint(i, sf::Vector2f(converter::metersToPixels(vertex.x),
                                            converter::metersToPixels(vertex.y)));
        }
    }
}

This method synchronizes the position and rotation of all the SFML shapes with the position and rotation calculated by Box2D. Because each piece is composed of several parts — fixtures — we need to iterate through them and update them one by one.

void Piece::rotate(float angle)
{
    _body->ApplyTorque((float32)converter::degToRad(angle), true);
}

void Piece::moveX(int direction)
{
    _body->ApplyForceToCenter(b2Vec2(converter::pixelsToMeters(direction), 0), true);
}

These two methods apply some force to the object to move or rotate it; we forward the job to the Box2D library.

b2Body* Piece::getBody() const { return _body; }

void Piece::draw(sf::RenderTarget& target, sf::RenderStates states) const
{
    for (const b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr; fixture = fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        if (shape)
            target.draw(*shape, states);
    }
}

This function draws the entire piece. Because the piece is composed of several parts, we need to iterate over them and draw them one by one in order to display the whole piece. This is done by using the user data saved in the fixtures.
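To make the class's role concrete, here is a minimal, hypothetical usage sketch — not code from the book — assuming a world and window set up as in the falling-square example earlier:

b2Vec2 gravity(0.f, 9.8f);
b2World world(gravity);
sf::RenderWindow window(sf::VideoMode(800, 600, 32), "Gravitris sketch");
window.setFramerateLimit(60);

Piece piece(world, 400, 32, Piece::TetriminoTypes::T, 0.f); // spawn a T piece near the top

while (window.isOpen())
{
    // ... event handling, calling piece.moveX()/piece.rotate() on player input ...
    world.Step(1.f/60, 8, 3);
    piece.update();     // pull the new transform from Box2D into the SFML shapes
    window.clear();
    window.draw(piece); // works because Piece derives from sf::Drawable
    window.display();
}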
Summary

Since the usage of a physics engine brings its own particularities, such as the units and the game loop, we have learned how to deal with them. Finally, we learned how to pair Box2D with SFML, integrated our fresh knowledge into our existing Tetris project, and built a fun new game.

Resources for Article:

Further resources on this subject:
- Skinning a character [article]
- Audio Playback [article]
- Sprites in Action [article]

Object-Oriented JavaScript with Backbone Classes

Packt
03 Jun 2015
9 min read
In this article by Jeremy Walker, author of the book Backbone.js Essentials, we will explore the following topics:

- The differences between JavaScript's class system and the class systems of traditional object-oriented languages
- How new, this, and prototype enable JavaScript's class system
- extend, Backbone's much easier mechanism for creating subclasses

(For more resources related to this topic, see here.)

JavaScript's class system

Programmers who use JavaScript can use classes to encapsulate units of logic in the same way as programmers of other languages. However, unlike those languages, JavaScript relies on a less popular form of inheritance known as prototype-based inheritance. Since Backbone classes are, at their core, just JavaScript classes, they too rely on the prototype system and can be subclassed in the same way as any other JavaScript class.

For instance, let's say you wanted to create your own Book subclass of the Backbone Model class with additional logic that Model doesn't have, such as book-related properties and methods. Here's how you can create such a class using only JavaScript's native object-oriented capabilities:

// Define Book's initializer
var Book = function() {
    // define Book's default properties
    this.currentPage = 1;
    this.totalPages = 1;
}
// Define Book's parent class
Book.prototype = new Backbone.Model();
// Define a method of Book
Book.prototype.turnPage = function() {
    this.currentPage += 1;
    return this.currentPage;
}

If you've never worked with prototypes in JavaScript, the preceding code may look a little intimidating. Fortunately, Backbone provides a much easier and more readable mechanism for creating subclasses. However, since that system is built on top of JavaScript's native system, it's important to first understand how the native system works. This understanding will be helpful later when you want to do more complex class-related tasks, such as calling a method defined on a parent class.

The new keyword

The new keyword is a relatively simple but extremely useful part of JavaScript's class system. The first thing that you need to understand about new is that it doesn't create objects in the same way as other languages. In JavaScript, every variable is either a function, an object, or a primitive, which means that when we refer to a class, what we're really referring to is a specially designed initialization function. Creating this class-like function is as simple as defining a function that modifies this and then using the new keyword to call that function.

Normally, when you call a function, its this is obvious. For instance, when you call the turnPage method of a book object, this inside turnPage will be set to that book object, as shown here:

var simpleBook = {currentPage: 3, pages: 60};
simpleBook.turnPage = function() {
    this.currentPage += 1;
    return this.currentPage;
}
simpleBook.turnPage(); // == 4

Calling a function that isn't attached to an object (in other words, a function that is not a method) results in this being set to the global scope.
In a web browser, this means the window object:

var testGlobalThis = function() {
    alert(this);
}
testGlobalThis(); // alerts window

When we use the new keyword before calling an initialization function, three things happen (well, actually four, but we'll wait to explain the fourth one until we explain prototypes):

- JavaScript creates a brand new object ({}) for us
- JavaScript sets this inside the initialization function to the newly created object
- After the function finishes, JavaScript ignores the normal return value and instead returns the object that was created

As you can see, although the new keyword is simple, it's nevertheless important because it allows you to treat initialization functions as if they really were actual classes. At the same time, it does so without violating the JavaScript principle that all variables must be either a function, an object, or a primitive.

Prototypal inheritance

That's all well and good, but if JavaScript has no true concept of classes, how can we create subclasses? As it turns out, every object in JavaScript has two special properties to solve this problem: prototype and the hidden __proto__. These two properties are perhaps the most commonly misunderstood aspects of JavaScript, but once you learn how they work, they are actually quite simple to use.

When you call a method on an object or try to retrieve a property, JavaScript first checks whether the object has the method or property defined on the object itself. In other words, if you define a method such as this one:

book.turnPage = function() {
    this.currentPage += 1;
};

JavaScript will use that definition first when you call turnPage. In real-world code, however, you will almost never want to put methods directly on your objects, for two reasons. First, doing so results in duplicate copies of those methods, as each instance of your class will have its own separate copy. Second, adding methods in this way requires an extra step, and that step can easily be forgotten when you create new instances.

If the object doesn't have a turnPage method defined on it, JavaScript will next check the object's hidden __proto__ property. If this __proto__ object doesn't have a turnPage method, then JavaScript will look at the __proto__ property of the object's __proto__. If that doesn't have the method, JavaScript continues to check the __proto__ of the __proto__ of the __proto__, and keeps checking each successive __proto__ until it has exhausted the chain. This is similar to single-class inheritance in more traditional object-oriented languages, except that instead of going through a class chain, JavaScript uses a prototype chain. Just as in an object-oriented language, we wind up with only a single copy of each method, but instead of the method being defined on the class itself, it's defined on the class's prototype.

In a future version of JavaScript (ES6), it will be possible to work with the __proto__ object directly, but for now, the only way to actually see the __proto__ property is to use your browser's debugging tool (for instance, the Chrome Developer Tools debugger). This means that you can't use this line of code:

book.__proto__.turnPage();

Also, you can't use the following code:

book.__proto__ = {
    turnPage: function() {
        this.currentPage += 1;
    }
};

But if you can't manipulate __proto__ directly, how can you take advantage of it? Fortunately, it is possible to manipulate __proto__, but only indirectly, by manipulating prototype.
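To make the lookup order concrete, here is a small sketch; it uses the constructor-plus-prototype pattern that the next paragraphs explain in more detail:

var Book = function() {
    this.currentPage = 1;
};
Book.prototype.turnPage = function() {
    this.currentPage += 1;
    return this.currentPage;
};

var book = new Book();
book.hasOwnProperty('turnPage'); // false -- the method lives on the prototype
book.turnPage();                 // 2 -- found by walking the __proto__ chain
book.turnPage = function() { return 'own copy'; };
book.turnPage();                 // 'own copy' -- an own property shadows the prototype's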
Do you remember I mentioned that the new keyword actually does four things? The fourth thing is that it sets the __proto__ property of the new object it creates to the prototype property of the initialization function. In other words, if you want to add a turnPage method to every new instance of Book that you create, you can assign this turnPage method to the prototype property of the Book initialization function. For example:

var Book = function() {};
Book.prototype.turnPage = function() {
    this.currentPage += 1;
};
var book = new Book();
book.turnPage(); // this works because book.__proto__ == Book.prototype

Since these concepts often cause confusion, let's briefly recap:

- Every object has a prototype property and a hidden __proto__ property
- An object's __proto__ property is set to the prototype property of its constructor when it is first created, and cannot be changed
- Whenever JavaScript can't find a property or method on an object, it checks each step of the __proto__ chain until it finds one or until it runs out of chain

Extending Backbone classes

With that explanation out of the way, we can finally get down to the workings of Backbone's subclassing system, which revolves around Backbone's extend method. To use extend, you simply call it on the class that your new subclass will be based on, and extend will return the new subclass. This new subclass will have the __proto__ of its prototype set to the prototype property of its parent class, allowing objects created with the new subclass to access all the properties and methods of the parent class. Take the following code snippet as an example:

var Book = Backbone.Model.extend();
// Book.prototype.__proto__ == Backbone.Model.prototype;
var book = new Book();
book.destroy();

In the preceding example, the last line works because JavaScript will look up the __proto__ chain, find the Model method destroy, and use it. In other words, all the functionality of our original class has been inherited by our new class.

But of course, extend wouldn't be exciting if all it could do was make exact clones of the parent classes, which is why extend takes a properties object as its first argument. Any properties or methods on this object will be added to the new class's prototype. For instance, let's try making our Book class a little more interesting by adding a property and a method:

var Book = Backbone.Model.extend({
    currentPage: 1,
    turnPage: function() {
        this.currentPage += 1;
    }
});
var book = new Book();
book.currentPage; // == 1
book.turnPage(); // increments book.currentPage by one

The extend method also allows you to create static properties or methods — in other words, properties or methods that live on the class rather than on objects created from that class. These static properties and methods are passed in as the second classProperties argument to extend. Here's a quick example of how to add a static method to our Book class:

var Book = Backbone.Model.extend({}, {
    areBooksGreat: function() {
        alert("yes they are!");
    }
});
Book.areBooksGreat(); // alerts "yes they are!"
var book = new Book();
book.areBooksGreat(); // fails because static methods must be called on a class
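One more class-related task mentioned in the summary below — invoking a method defined on a parent class — is not shown in this excerpt; a minimal sketch of the common pattern, using apply to preserve this, might look like this:

var Book = Backbone.Model.extend({
    initialize: function() {
        // call the parent class's initialize, preserving this and any arguments
        Backbone.Model.prototype.initialize.apply(this, arguments);
        this.currentPage = 1;
    }
});
var book = new Book();
book.currentPage; // == 1, and Model's own initialize has also run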
As you can see, there are several advantages to Backbone's approach to inheritance over the native JavaScript approach. First, the word prototype did not appear even once in any of the previously mentioned code; while you still need to understand how prototype works, you don't have to think about it just to create a class. Another benefit is that the entire class definition is contained within a single extend call, keeping all of the class's parts together visually. Also, when we use extend, the various pieces of logic that make up the class are ordered in the same way as in most other programming languages: the superclass is defined first, and then the initializer and properties, instead of the other way around.

Summary

In this article, we explored how JavaScript's native class system works and how the new, this, and prototype keywords/properties form the basis of it. We also learned how Backbone's extend method makes creating new subclasses much more convenient, as well as how to use apply and call to invoke parent methods (or when providing callback functions) to preserve the desired this.

Resources for Article:

Further resources on this subject:
- Testing Backbone.js Application [Article]
- Building an app using Backbone.js [Article]
- Organizing Backbone Applications - Structure, Optimize, and Deploy [Article]

Working with Touch Gestures

Packt
03 Jun 2015
5 min read
In this article by Ajit Kumar, the author of Sencha Charts Essentials, we will cover the following topics:

- Touch gesture support in Sencha Charts
- Using gestures on existing charts
- Out-of-the-box interactions
- Creating custom interactions using touch gestures

(For more resources related to this topic, see here.)

Interacting with interactions

The interactions code is located under the ext/packages/sencha-charts/src/chart/interactions folder. The Ext.chart.interactions.Abstract class is the base class for all the chart interactions. Interactions must be associated with a chart by configuring interactions on it. Consider the following example:

Ext.create('Ext.chart.PolarChart', {
    title: 'Chart',
    interactions: ['rotate'],
    ...

The gestures config is an important config. It is an integral part of an interaction: it tells the framework which touch gestures are part of the interaction. It's a map where the event name is the key and the handler method name is its value. Consider the following example:

gestures: {
    tap: 'onTapGesture',
    doubletap: 'onDoubleTapGesture'
}

Once an interaction has been associated with a chart, the framework registers the events and their handlers, as listed in the gestures config, on the chart as part of the chart initialization. Here is what happens during each stage of this process (shown as a flowchart in the book):

1. The chart's construction starts when its constructor is called, either by a call to Ext.create or through xtype usage.
2. The interactions config is applied to the AbstractChart class by the class system, which calls the applyInteractions method.
3. The applyInteractions method sets the chart object on each of the interaction objects. This setter operation calls the updateChart method of the interaction class, Ext.chart.interactions.Abstract.
4. updateChart calls addChartListener to add the gesture-related events and their handlers.
5. addChartListener iterates through the gestures object and registers the listed events and their handlers on the chart object.

Interactions work on touch as well as non-touch devices (for example, desktops). On non-touch devices, the gestures are simulated based on mouse or pointer events; for example, mousedown is treated as a tap event.
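The section below focuses on the built-in interactions, but the same machinery is what a custom interaction would use. As a rough sketch only — the class name, type, and alias here are illustrative assumptions, and registration details can vary between Sencha versions — a gesture-driven custom interaction might look like this:

Ext.define('MyApp.chart.interactions.DoubleTapLog', {
    extend: 'Ext.chart.interactions.Abstract',
    type: 'doubletaplog',              // assumed name, used only for this sketch
    alias: 'interaction.doubletaplog', // lets a chart reference it by its type string

    // map gesture events to handler method names, as described above
    gestures: {
        doubletap: 'onDoubleTapGesture'
    },

    onDoubleTapGesture: function (e) {
        // this.getChart() is available because the chart sets itself on the interaction
        console.log('double tap on chart', this.getChart().getId(), e);
    }
});

// The interaction could then be configured on a chart like the built-in ones:
// interactions: ['doubletaplog']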
Using built-in interactions

Here is a list of the built-in interactions:

Crosshair: This interaction helps the user get precise x and y values for a specific point on a chart; because of this, it is applicable to Cartesian charts only. The x and y values are obtained by single-touch dragging on the chart. The interaction also offers additional configs:

- axes: This can be used to provide label text and label rectangle configs on a per-axis basis, using the left, right, top, and bottom configs, or as a single config applying to all the axes. If the axes config is not specified, the axis label value is shown as the text and the rectangle is filled with white.
- lines: The interaction draws horizontal and vertical lines through the point on the chart. Line sprite attributes can be passed using the horizontal or vertical configs.

For example, we configure the following crosshair interaction on a CandleStick chart:

interactions: [{
    type: 'crosshair',
    axes: {
        left: {
            label: { fillStyle: 'white' },
            rect: {
                fillStyle: 'pink',
                radius: 2
            }
        },
        bottom: {
            label: {
                fontSize: '14px',
                fontWeight: 'bold'
            },
            rect: { fillStyle: 'cyan' }
        }
    }
}]

The preceding configuration styles the labels and rectangles on the two axes as specified (the resulting chart is shown as a screenshot in the book).

CrossZoom: This interaction allows the user to zoom in on a selected area of a chart using drag events. It is very useful for getting a microscopic view of your macroscopic data. For example, if the chart presents month-wise data for two years, using zoom you can look at the values for, say, a specific month. The interaction offers an additional config, axes, with which we indicate the axes that will be zoomed. Consider the following configuration on a CandleStick chart:

interactions: [{
    type: 'crosszoom',
    axes: ['left', 'bottom']
}]

This lets the user zoom in on both the left and bottom axes. Additionally, we can control the zoom level by passing minZoom and maxZoom, as shown in the following snippet:

interactions: [{
    type: 'crosszoom',
    axes: {
        left: {
            maxZoom: 8,
            startZoom: 2
        },
        bottom: true
    }
}]

The zoom is reset when the user double-clicks on the chart.

ItemHighlight: This interaction allows the user to highlight series items in the chart. It works in conjunction with the highlight config on a series: the interaction identifies and sets the highlightItem on the chart, to which the highlight and highlightCfg configs are applied.

PanZoom: This interaction allows the user to navigate the data for one or more chart axes by panning and/or zooming. Navigation can be limited to particular axes. Pinch gestures are used for zooming, whereas drag gestures are used for panning. For devices that do not support multi-touch events, zooming cannot be done via pinch gestures; in this case, the interaction lets the user perform both zooming and panning using the same single-touch drag gesture. By default, zooming is not enabled. We can enable it by setting zoomOnPanGesture: true on the interaction, as shown here:

interactions: {
    type: 'panzoom',
    zoomOnPanGesture: true
}

By default, all the axes are navigable. However, the panning and zooming can be controlled at the axis level, as shown here:

{
    type: 'panzoom',
    axes: {
        left: {
            maxZoom: 5,
            allowPan: false
        },
        bottom: true
    }
}

Rotate: This interaction allows the user to rotate a polar chart about its centre. It implements the rotation using single-touch drag gestures. This interaction does not have any additional config.

RotatePie3D: This is an extension of the Rotate interaction to rotate a Pie3D chart. It does not have any additional config.

Summary

In this article, you learned how Sencha Charts offers interaction classes to build interactivity into charts. We looked at the out-of-the-box interactions, their specific configurations, and how to use them on different types of charts.

Resources for Article:

Further resources on this subject:
- The Various Components in Sencha Touch [Article]
- Creating a Simple Application in Sencha Touch [Article]
- Sencha Touch: Catering Form Related Needs [Article]

Reactive Data Streams

Packt
03 Jun 2015
11 min read
In this article by Shiti Saxena, author of the book Mastering Play Framework for Scala, we will discuss the Iteratee approach used to handle such situations. This article also covers the basics of handling data streams with a brief explanation of the following topics: Iteratees Enumerators Enumeratees (For more resources related to this topic, see here.) Iteratee Iteratee is defined as a trait, Iteratee[E, +A], where E is the input type and A is the result type. The state of an Iteratee is represented by an instance of Step, which is defined as follows: sealed trait Step[E, +A] {def it: Iteratee[E, A] = this match {case Step.Done(a, e) => Done(a, e)case Step.Cont(k) => Cont(k)case Step.Error(msg, e) => Error(msg, e)}}object Step {//done state of an iterateecase class Done[+A, E](a: A, remaining: Input[E]) extends Step[E, A]//continuing state of an iteratee.case class Cont[E, +A](k: Input[E] => Iteratee[E, A]) extendsStep[E, A]//error state of an iterateecase class Error[E](msg: String, input: Input[E]) extends Step[E,Nothing]} The input used here represents an element of the data stream, which can be empty, an element, or an end of file indicator. Therefore, Input is defined as follows: sealed trait Input[+E] {def map[U](f: (E => U)): Input[U] = this match {case Input.El(e) => Input.El(f(e))case Input.Empty => Input.Emptycase Input.EOF => Input.EOF}}object Input {//An input elementcase class El[+E](e: E) extends Input[E]// An empty inputcase object Empty extends Input[Nothing]// An end of file inputcase object EOF extends Input[Nothing]} An Iteratee is an immutable data type and each result of processing an input is a new Iteratee with a new state. To handle the possible states of an Iteratee, there is a predefined helper object for each state. They are: Cont Done Error Let's see the definition of the readLine method, which utilizes these objects: def readLine(line: List[Array[Byte]] = Nil): Iteratee[Array[Byte],String] = Cont {case Input.El(data) => {val s = data.takeWhile(_ != 'n')if (s.length == data.length) {readLine(s :: line)} else {Done(new String(Array.concat((s :: line).reverse: _*),"UTF-8").trim(), elOrEmpty(data.drop(s.length + 1)))}}case Input.EOF => {Error("EOF found while reading line", Input.Empty)}case Input.Empty => readLine(line)} The readLine method is responsible for reading a line and returning an Iteratee. As long as there are more bytes to be read, the readLine method is called recursively. On completing the process, an Iteratee with a completed state (Done) is returned, else an Iteratee with state continuous (Cont) is returned. In case the method encounters EOF, an Iteratee with state Error is returned. In addition to these, Play Framework exposes a companion Iteratee object, which has helper methods to deal with Iteratees. The API exposed through the Iteratee object is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Iteratee$. The Iteratee object is also used internally within the framework to provide some key features. For example, consider the request body parsers. 
The apply method of the BodyParser object is defined as follows: def apply[T](debugName: String)(f: RequestHeader =>Iteratee[Array[Byte], Either[Result, T]]): BodyParser[T] = newBodyParser[T] {def apply(rh: RequestHeader) = f(rh)override def toString = "BodyParser(" + debugName + ")"} So, to define BodyParser[T], we need to define a method that accepts RequestHeader and returns an Iteratee whose input is an Array[Byte] and results in Either[Result,T]. Let's look at some of the existing implementations to understand how this works. The RawBuffer parser is defined as follows: def raw(memoryThreshold: Int): BodyParser[RawBuffer] =BodyParser("raw, memoryThreshold=" + memoryThreshold) { request =>import play.core.Execution.Implicits.internalContextval buffer = RawBuffer(memoryThreshold)Iteratee.foreach[Array[Byte]](bytes => buffer.push(bytes)).map {_ =>buffer.close()Right(buffer)}} The RawBuffer parser uses Iteratee.forEach method and pushes the input received into a buffer. The file parser is defined as follows: def file(to: File): BodyParser[File] = BodyParser("file, to=" +to) { request =>import play.core.Execution.Implicits.internalContextIteratee.fold[Array[Byte], FileOutputStream](newFileOutputStream(to)) {(os, data) =>os.write(data)os}.map { os =>os.close()Right(to)}} The file parser uses the Iteratee.fold method to create FileOutputStream of the incoming data. Now, let's see the implementation of Enumerator and how these two pieces fit together. Enumerator Similar to the Iteratee, an Enumerator is also defined through a trait and backed by an object of the same name: trait Enumerator[E] {parent =>def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]]...}object Enumerator{def apply[E](in: E*): Enumerator[E] = in.length match {case 0 => Enumerator.emptycase 1 => new Enumerator[E] {def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]] =i.pureFoldNoEC {case Step.Cont(k) => k(Input.El(in.head))case _ => i}}case _ => new Enumerator[E] {def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]] =enumerateSeq(in, i)}}...} Observe that the apply method of the trait and its companion object are different. The apply method of the trait accepts Iteratee[E, A] and returns Future[Iteratee[E, A]], while that of the companion object accepts a sequence of type E and returns an Enumerator[E]. Now, let's define a simple data flow using the companion object's apply method; first, get the character count in a given (Seq[String]) line: val line: String = "What we need is not the will to believe, butthe wish to find out."val words: Seq[String] = line.split(" ")val src: Enumerator[String] = Enumerator(words: _*)val sink: Iteratee[String, Int] = Iteratee.fold[String,Int](0)((x, y) => x + y.length)val flow: Future[Iteratee[String, Int]] = src(sink)val result: Future[Int] = flow.flatMap(_.run) The variable result has the Future[Int] type. We can now process this to get the actual count. In the preceding code snippet, we got the result by following these steps: Building an Enumerator using the companion object's apply method: val src: Enumerator[String] = Enumerator(words: _*) Getting Future[Iteratee[String, Int]] by binding the Enumerator to an Iteratee: val flow: Future[Iteratee[String, Int]] = src(sink) Flattening Future[Iteratee[String,Int]] and processing it: val result: Future[Int] = flow.flatMap(_.run) Fetching the result from Future[Int]: Thankfully, Play provides a shortcut method by merging steps 2 and 3 so that we don't have to repeat the same process every time. 
The method is represented by the |>>> symbol. Using the shortcut method, our code is reduced to this: val src: Enumerator[String] = Enumerator(words: _*)val sink: Iteratee[String, Int] = Iteratee.fold[String, Int](0)((x, y)=> x + y.length)val result: Future[Int] = src |>>> sink Why use this when we can simply use the methods of the data type? In this case, do we use the length method of String to get the same value (by ignoring whitespaces)? In this example, we are getting the data as a single String but this will not be the only scenario. We need ways to process continuous data, such as a file upload, or feed data from various networking sites, and so on. For example, suppose our application receives heartbeats at a fixed interval from all the devices (such as cameras, thermometers, and so on) connected to it. We can simulate a data stream using the Enumerator.generateM method: val dataStream: Enumerator[String] = Enumerator.generateM {Promise.timeout(Some("alive"), 100 millis)} In the preceding snippet, the "alive" String is produced every 100 milliseconds. The function passed to the generateM method is called whenever the Iteratee bound to the Enumerator is in the Cont state. This method is used internally to build enumerators and can come in handy when we want to analyze the processing for an expected data stream. An Enumerator can be created from a file, InputStream, or OutputStream. Enumerators can be concatenated or interleaved. The Enumerator API is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Enumerator$. Using the Concurrent object The Concurrent object is a helper that provides utilities for using Iteratees, enumerators, and Enumeratees concurrently. Two of its important methods are: Unicast: It is useful when sending data to a single iterate. Broadcast: It facilitates sending the same data to multiple Iteratees concurrently. Unicast For example, the character count example in the previous section can be implemented as follows: val unicastSrc = Concurrent.unicast[String](channel =>channel.push(line))val unicastResult: Future[Int] = unicastSrc |>>> sink The unicast method accepts the onStart, onError, and onComplete handlers. In the preceding code snippet, we have provided the onStart method, which is mandatory. The signature of unicast is this: def unicast[E](onStart: (Channel[E]) ⇒ Unit,onComplete: ⇒ Unit = (),onError: (String, Input[E]) ⇒ Unit = (_: String, _: Input[E])=> ())(implicit ec: ExecutionContext): Enumerator[E] {…} So, to add a log for errors, we can define the onError handler as follows: val unicastSrc2 = Concurrent.unicast[String](channel => channel.push(line),onError = { (msg, str) => Logger.error(s"encountered $msg for$str")}) Now, let's see how broadcast works. Broadcast The broadcast[E] method creates an enumerator and a channel and returns a (Enumerator[E], Channel[E]) tuple. 
The enumerator and channel thus obtained can be used to broadcast data to multiple Iteratees: val (broadcastSrc: Enumerator[String], channel:Concurrent.Channel[String]) = Concurrent.broadcast[String]private val vowels: Seq[Char] = Seq('a', 'e', 'i', 'o', 'u')def getVowels(str: String): String = {val result = str.filter(c => vowels.contains(c))result}def getConsonants(str: String): String = {val result = str.filterNot(c => vowels.contains(c))result}val vowelCount: Iteratee[String, Int] = Iteratee.fold[String,Int](0)((x, y) => x + getVowels(y).length)val consonantCount: Iteratee[String, Int] =Iteratee.fold[String, Int](0)((x, y) => x +getConsonants(y).length)val vowelInfo: Future[Int] = broadcastSrc |>>> vowelCountval consonantInfo: Future[Int] = broadcastSrc |>>>consonantCountwords.foreach(w => channel.push(w))channel.end()vowelInfo onSuccess { case count => println(s"vowels:$count")}consonantInfo onSuccess { case count =>println(s"consonants:$count")} Enumeratee Enumeratee is also defined using a trait and its companion object with the same Enumeratee name. It is defined as follows: trait Enumeratee[From, To] {...def applyOn[A](inner: Iteratee[To, A]): Iteratee[From,Iteratee[To, A]]def apply[A](inner: Iteratee[To, A]): Iteratee[From, Iteratee[To,A]] = applyOn[A](inner)...} An Enumeratee transforms the Iteratee given to it as input and returns a new Iteratee. Let's look at a method that defines an Enumeratee by implementing the applyOn method. An Enumeratee's flatten method accepts Future[Enumeratee] and returns an another Enumeratee, which is defined as follows: def flatten[From, To](futureOfEnumeratee:Future[Enumeratee[From, To]]) = new Enumeratee[From, To] {def applyOn[A](it: Iteratee[To, A]): Iteratee[From,Iteratee[To, A]] =Iteratee.flatten(futureOfEnumeratee.map(_.applyOn[A](it))(dec))} In the preceding snippet, applyOn is called on the Enumeratee whose future is passed and dec is defaultExecutionContext. Defining an Enumeratee using the companion object is a lot simpler. The companion object has a lot of methods to deal with enumeratees, such as map, transform, collect, take, filter, and so on. The API is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Enumeratee$. Let's define an Enumeratee by working through a problem. The example we used in the previous section to find the count of vowels and consonants will not work correctly if a vowel is capitalized in a sentence, that is, the result of src |>>> vowelCount will be incorrect when the line variable is defined as follows: val line: String = "What we need is not the will to believe, but the wish to find out.".toUpperCase To fix this, let's alter the case of all the characters in the data stream to lowercase. We can use an Enumeratee to update the input provided to the Iteratee. Now, let's define an Enumeratee to return a given string in lowercase: val toSmallCase: Enumeratee[String, String] =Enumeratee.map[String] {s => s.toLowerCase} There are two ways to add an Enumeratee to the dataflow. It can be bound to the following: Enumerators Iteratees Summary In this article, we discussed the concept of Iteratees, Enumerators, and Enumeratees. We also saw how they were implemented in Play Framework and used internally. Resources for Article: Further resources on this subject: Play Framework: Data Validation Using Controllers [Article] Play Framework: Introduction to Writing Modules [Article] Integrating with other Frameworks [Article]

Pointers and references

Packt
03 Jun 2015
14 min read
In this article by Ivo Balbaert, author of the book, Rust Essentials, we will go through the pointers and memory safety. (For more resources related to this topic, see here.) The stack and the heap When a program starts, by default a 2 MB chunk of memory called the stack is granted to it. The program will use its stack to store all its local variables and function parameters; for example, an i32 variable takes 4 bytes of the stack. When our program calls a function, a new stack frame is allocated to it. Through this mechanism, the stack knows the order in which the functions are called so that the functions return correctly to the calling code and possibly return values as well. Dynamically sized types, such as strings or arrays, can't be stored on the stack. For these values, a program can request memory space on its heap, so this is a potentially much bigger piece of memory than the stack. When possible, stack allocation is preferred over heap allocation because accessing the stack is a lot more efficient. Lifetimes All variables in a Rust code have a lifetime. Suppose we declare an n variable with the let n = 42u32; binding. Such a value is valid from where it is declared to when it is no longer referenced, which is called the lifetime of the variable. This is illustrated in the following code snippet: fn main() { let n = 42u32; let n2 = n; // a copy of the value from n to n2 life(n); println!("{}", m); // error: unresolved name `m`. println!("{}", o); // error: unresolved name `o`. }   fn life(m: u32) -> u32 {    let o = m;    o } The lifetime of n ends when main() ends; in general, the start and end of a lifetime happen in the same scope. The words lifetime and scope are synonymous, but we generally use the word lifetime to refer to the extent of a reference. As in other languages, local variables or parameters declared in a function do not exist anymore after the function has finished executing; in Rust, we say that their lifetime has ended. This is the case for the m and o variables in the preceding code snippet, which are only known in the life function. Likewise, the lifetime of a variable declared in a nested block is restricted to that block, like phi in the following example: {    let phi = 1.618; } println!("The value of phi is {}", phi); // is error Trying to use phi when its lifetime is over results in an error: unresolved name 'phi'. The lifetime of a value can be indicated in the code by an annotation, for example 'a, which reads as lifetime where a is simply an indicator; it could also be written as 'b, 'n, or 'life. It's common to see single letters being used to represent lifetimes. In the preceding example, an explicit lifetime indication was not necessary since there were no references involved. All values tagged with the same lifetime have the same maximum lifetime. In the following example, we have a transform function that explicitly declares the lifetime of its s parameter to be 'a: fn transform<'a>(s: &'a str) { /* ... */ } Note the <'a> indication after the name of the function. In nearly all cases, this explicit indication is not needed because the compiler is smart enough to deduce the lifetimes, so we can simply write this: fn transform_without_lifetime(s: &str) { /* ... */ } Here is an example where even when we indicate a lifetime specifier 'a, the compiler does not allow our code. 
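To make the transform example concrete before moving on, here is a minimal, self-contained sketch. The book leaves the body of transform empty (/* ... */), so the body, the added &'a str return type, and the calling code below are assumptions for illustration only, not the book's own listing:

fn transform<'a>(s: &'a str) -> &'a str {
    // Return a slice borrowed from the input; the result lives as long as `s` does.
    s.trim()
}

fn main() {
    let text = String::from("  42 is the answer  ");
    let trimmed = transform(&text); // &String coerces to &str
    println!("{}", trimmed);        // prints: 42 is the answer
}

An explicit annotation like this compiles without complaint, because the returned reference cannot outlive the value it borrows from. Now for the promised case where even an explicit lifetime specifier does not help: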
Let's suppose that we define a Magician struct as follows:

struct Magician {
    name: &'static str,
    power: u32
}

We will get an error message if we try to construct the following function:

fn return_magician<'a>() -> &'a Magician {
    let mag = Magician { name: "Gandalf", power: 4625 };
    &mag
}

The error message is error: 'mag' does not live long enough. Why does this happen? The lifetime of the mag value ends when the return_magician function ends, but the function nevertheless tries to return a reference to the Magician value, which no longer exists. Such an invalid reference is known as a dangling pointer. This is a situation that would clearly lead to errors and cannot be allowed. The lifespan of a pointer must always be shorter than or equal to that of the value it points to, thus avoiding dangling (or null) references.

In some situations, deciding whether the lifetime of an object has ended is complicated, but in almost all cases the borrow checker does this for us automatically by inserting lifetime annotations in the intermediate code, so we don't have to do it ourselves. This is known as lifetime elision. For example, when working with structs, we can safely assume that the struct instance and its fields have the same lifetime. Only when the borrow checker is not completely sure do we need to indicate the lifetime explicitly; however, this happens only on rare occasions, mostly when references are returned. One example is when we have a struct with fields that are references. The following code snippet explains this:

struct MagicNumbers {
    magn1: &u32,
    magn2: &u32
}

This won't compile and will give us the following error: missing lifetime specifier [E0106]. Therefore, we have to change the code as follows:

struct MagicNumbers<'a> {
    magn1: &'a u32,
    magn2: &'a u32
}

This specifies that both the struct and its fields have the lifetime 'a.

Perform the following exercise:

Explain why the following code won't compile:

fn main() {
    let m: &u32 = {
        let n = &5u32;
        &*n
    };
    let o = *m;
}

Answer the same question for this code snippet as well:

let mut x = &3;
{
    let mut y = 4;
    x = &y;
}

Copying values and the Copy trait

In the code that we discussed in an earlier section, the value of n is copied to a new location each time n is assigned via a new let binding or passed as a function argument:

let n = 42u32;
// no move, only a copy of the value:
let n2 = n;
life(n);

fn life(m: u32) -> u32 {
    let o = m;
    o
}

At a certain moment in the program's execution, we would have four memory locations that contain the copied value 42, which we can visualize as follows:

Each value disappears (and its memory location is freed) when the lifetime of its corresponding variable ends, which happens at the end of the function or code block in which it is defined. Nothing much can go wrong with this Copy behavior, in which the value (its bits) is simply copied to another location on the stack. Many built-in types, such as u32 and i64, work like this, and this copy-value behavior is defined in Rust as the Copy trait, which u32 and i64 implement. You can also implement the Copy trait for your own type, provided all of its fields or items implement Copy. For example, the MagicNumber struct, which contains a field of the u64 type, can have the same behavior.
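Here is a short, hedged sketch, an assumed illustration rather than a listing from the book, that contrasts this copy-on-assign behavior with what happens for a type that does not implement Copy:

fn main() {
    let n: u32 = 42;
    let n2 = n;                 // u32 is Copy: the bits are duplicated on the stack
    println!("{} {}", n, n2);   // both bindings remain usable

    let s = String::from("magic");
    let s2 = s;                 // String is not Copy: ownership moves to s2
    // println!("{}", s);       // uncommenting this fails to compile: `s` was moved
    println!("{}", s2);
}

Built-in scalar types get this behavior for free; a user-defined type such as MagicNumber has to opt in. (Note that in current Rust, deriving Copy also requires Clone, that is, #[derive(Copy, Clone)].)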
There are two ways to indicate this. One way is to explicitly name the Copy implementation as follows:

struct MagicNumber {
    value: u64
}

impl Copy for MagicNumber {}

Otherwise, we can annotate it with a Copy attribute:

#[derive(Copy)]
struct MagicNumber {
    value: u64
}

This now means that we can create two different copies, mag and mag2, of a MagicNumber by assignment:

let mag = MagicNumber { value: 42 };
let mag2 = mag;

They are copies because they have different memory addresses (the values shown will differ at each execution):

println!("{:?}", &mag as *const MagicNumber); // address is 0x23fa88
println!("{:?}", &mag2 as *const MagicNumber); // address is 0x23fa80

A *const is a so-called raw pointer. A type that does not implement the Copy trait is called non-copyable. Another way to accomplish this is by letting MagicNumber implement the Clone trait:

#[derive(Clone)]
struct MagicNumber {
    value: u64
}

Then, we can clone mag into a different object called mag3, effectively making a copy, as follows:

let mag3 = mag.clone();
println!("{:?}", &mag3 as *const MagicNumber); // address is 0x23fa78

mag3 is a new pointer referencing a new copy of the value of mag.

Pointers

The n variable in the let n = 42i32; binding is stored on the stack. Values on the stack or the heap can be accessed by pointers. A pointer is a variable that contains the memory address of some value. To access the value it points to, dereference the pointer with *. This happens automatically in simple cases, such as in println! or when a pointer is given as a parameter to a method. For example, in the following code, m is a pointer containing the address of n:

let m = &n;
println!("The address of n is {:p}", m);
println!("The value of n is {}", *m);
println!("The value of n is {}", m);

This prints out the following output, which differs for each program run:

The address of n is 0x23fb34
The value of n is 42
The value of n is 42

So, why do we need pointers? When we work with dynamically allocated values, such as a String, that can change in size, the memory address of the value is not known at compile time. Due to this, the memory address needs to be calculated at runtime. So, to be able to keep track of it, we need a pointer whose value will change when the location of the String in memory changes. The compiler automatically takes care of the memory allocation of pointers and the freeing of memory when their lifetime ends. You don't have to do this yourself as in C/C++, where you could mess up by freeing memory at the wrong moment or multiple times. The incorrect use of pointers in languages such as C++ leads to all kinds of problems. However, Rust enforces a strong set of rules at compile time called the borrow checker, so we are protected against them. We have already seen these rules in action, but from here onwards, we'll explain the logic behind them. Pointers can also be passed as arguments to functions, and they can be returned from functions, but the compiler severely restricts their usage. When passing a pointer value to a function, it is always better to use the reference-dereference &* mechanism, as shown in this example:

let q = &42;
println!("{}", square(q)); // 1764

fn square(k: &i32) -> i32 {
    *k * *k
}

References

In our previous example, m, which had the &n value, is the simplest form of pointer, and it is called a reference (or borrowed pointer); m is a reference to the stack-allocated n variable and has the &i32 type because it points to a value of the i32 type.
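To tie these pieces together, here is a compact, hedged sketch, an assumed, self-contained program rather than the book's own listing, that takes a reference, reads the value through it, and passes it to square:

fn square(k: &i32) -> i32 {
    *k * *k
}

fn main() {
    let n: i32 = 42;
    let m = &n;                              // m has the type &i32
    println!("The address of n is {:p}", m); // differs on every run
    println!("The value of n is {}", *m);    // explicit dereference
    println!("The square of n is {}", square(m)); // prints 1764
}

This sketch uses i32, but nothing about it is specific to that type.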
In general, when n is a value of the T type, the &n reference is of the &T type. Here, n is immutable, so m is also immutable; for example, if you try to change the value of n through m with *m = 7;, you will get a cannot assign to immutable borrowed content '*m' error. Contrary to C, Rust does not let you change an immutable variable via its pointer. Since there is no danger of changing the value of n through a reference, multiple references to an immutable value are allowed; they can only be used to read the value, for example:

let o = &n;
println!("The address of n is {:p}", o);
println!("The value of n is {}", *o);

It prints out as described earlier:

The address of n is 0x23fb34
The value of n is 42

We could represent this situation in memory as follows:

It is clear that working with pointers such as this, or in much more complex situations, necessitates much stricter rules than the Copy behavior. For example, the memory can only be freed when there are no variables or pointers associated with it anymore. And when the value is mutable, can it be changed through any of its pointers?

Mutable references do exist, and they are declared as let m = &mut n. However, n also has to be a mutable value. When n is immutable, the compiler rejects the m mutable reference binding with the error cannot borrow immutable local variable 'n' as mutable. This makes sense, since immutable variables cannot be changed even when you know their memory location. To reiterate, in order to change a value through a reference, both the variable and its reference have to be mutable, as shown in the following code snippet:

let mut u = 3.14f64;
let v = &mut u;
*v = 3.15;
println!("The value of u is now {}", *v);

This will print: The value of u is now 3.15.

Now, the value at the memory location of u is changed to 3.15. However, note that we cannot change (or even print) that value anymore through the u variable itself; the statement u = u * 2.0; gives us a compiler error: cannot assign to 'u' because it is borrowed. We say that borrowing a variable (by making a reference to it) freezes that variable; the original u variable is frozen (and no longer usable) until the reference goes out of scope. In addition, we can only have one mutable reference: let w = &mut u; results in the error cannot borrow 'u' as mutable more than once at a time. The compiler even adds the following note to the previous code line with let v = &mut u;: note: previous borrow of 'u' occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `u` until the borrow ends. This is logical; the compiler is (rightfully) concerned that a change to the value of u through one reference might change its memory location, because u might change in size and so would no longer fit within its previous location and would have to be relocated to another address. This would render all other references to u invalid, and even dangerous, because through them we might inadvertently change another variable that has taken up the previous location of u! A mutable value can also be changed by passing its address as a mutable reference to a function, as shown in this example:

let mut m = 7;
add_three_to_magic(&mut m);
println!("{}", m); // prints out 10

With the function add_three_to_magic declared as follows:

fn add_three_to_magic(num: &mut i32) {
    *num += 3; // value is changed in place through +=
}

To summarize, when n is a mutable value of the T type, only one mutable reference to it (of the &mut T type) can exist at any time.
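As a quick, hedged sketch (an assumed example, not from the book), the same in-place change can be wrapped in a small complete program; the double function and its caller are made up purely for illustration:

fn double(num: &mut f64) {
    *num *= 2.0; // the value is changed in place through the mutable reference
}

fn main() {
    let mut u = 3.14_f64;
    double(&mut u);             // the only mutable borrow of u lives just for this call
    println!("u is now {}", u); // prints: u is now 6.28
}

While double runs, the &mut u it received is the single mutable reference to the value.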
Through this reference, the value can be changed. Using ref in a match If you want to get a reference to a matched variable inside a match function, use the ref keyword, as shown in the following example: fn main() { let n = 42; match n {      ref r => println!("Got a reference to {}", r), } let mut m = 42; match m {      ref mut mr => {        println!("Got a mutable reference to {}", mr);        *mr = 43;      }, } println!("m has changed to {}!", m); } Which prints out: Got a reference to 42 Got a mutable reference to 42 m has changed to 43! The r variable inside the match has the &i32 type. In other words, the ref keyword creates a reference for use in the pattern. If you need a mutable reference, use ref mut. We can also use ref to get a reference to a field of a struct or tuple in a destructuring via a let binding. For example, while reusing the Magician struct, we can extract the name of mag by using ref and then return it from the match: let mag = Magician { name: "Gandalf", power: 4625}; let name = {    let Magician { name: ref ref_to_name, power: _ } = mag;    *ref_to_name }; println!("The magician's name is {}", name); Which prints: The magician's name is Gandalf. References are the most common pointer type and have the most possibilities; other pointer types should only be applied in very specific use cases. Summary In this article, we learned the intelligence behind the Rust compiler, which is embodied in the principles of ownership, moving values, and borrowing. Resources for Article: Further resources on this subject: Getting Started with NW.js [article] Creating Random Insults [article] Creating Man-made Materials in Blender 2.5 [article]

Introducing PrimeFaces

Packt
02 Jun 2015
21 min read
In this article by Mert Çalışkan and Oleg Varaksin, author of PrimeFaces Cookbook - Second Edition, we will cover the following recipes: Setting up and configuring the PrimeFaces library AJAX basics with process and update PrimeFaces selectors Internationalization (i18n) and Localization (L10n) This article will provide details on the setup and configuration of PrimeFaces, along with the basics of the PrimeFaces AJAX mechanism. The goal of this article is to provide a sneak preview of some of the features of PrimeFaces, such as the AJAX processing mechanism and Internationalization, and Localization. (For more resources related to this topic, see here.) Setting up and configuring the PrimeFaces library PrimeFaces is a lightweight JSF component library with one JAR file, which needs no configuration and does not contain any required external dependencies. To start with the development of the library, all we need is the artifact for the library. Getting ready You can download the PrimeFaces library from http://primefaces.org/downloads.html, and you need to add the primefaces-{version}.jar file to your classpath. After that, all you need to do is import the namespace of the library that is necessary to add the PrimeFaces components to your pages to get started. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model XML file, pom.xml, as follows: <repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url>http://repository.primefaces.org</url> </repository> Add the dependency configuration as follows: <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>5.2</version> </dependency> At the time of writing this article, the latest and most stable version of PrimeFaces was 5.2. To check whether this is the latest available version or not, please visit http://primefaces.org/downloads.html. The code in this article will work properly with PrimeFaces 5.2. In prior versions or the future versions, some methods, attributes, or components' behaviors may change. How to do it… In order to use PrimeFaces components, first we need to add the namespace declaration to our pages. The namespace for PrimeFaces components is as follows: That is all there is to it. Note that the p prefix is just a symbolic link, and any other character can be used to define the PrimeFaces components. Now you can create your first XHTML page with a PrimeFaces component, as shown in the following code snippet: <html > <f:view contentType="text/html"> <h:head /> <h:body> <h:form> <p:spinner /> </h:form> </h:body> </f:view> </html> This will render a spinner component with an empty value, as shown in the following screenshot: A link to the working example for the given page is given at the end of this recipe. How it works… When the page is requested, the p:spinner component is rendered with the SpinnerRenderer class implemented by the PrimeFaces library. Since the spinner component is an input component, the request-processing life cycle will get executed when the user inputs data and performs a post back on the page. For the first page, we also needed to provide the contentType parameter for f:view since WebKit-based browsers, such as Google Chrome and Safari, request for the content type application/xhtml+xml by default. 
This would overcome unexpected layout and styling issues that might occur. There's more… PrimeFaces only requires a Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features. All of these are listed in this table: Dependency Version Type Description JSF runtime 2.0, 2.1, or 2.2 Required Apache MyFaces or Oracle Mojarra itext 2.1.7 Optional DataExporter (PDF) apache-poi 3.7 Optional DataExporter (Excel) rome 1.0 Optional FeedReader commons-fileupload 1.3 Optional FileUpload commons-io 2.2 Optional FileUpload atmosphere 2.2.2 Optional PrimeFaces Push barcode4j-light 2.1 Optional Barcode Generation qrgen 1.4 Optional QR code support for barcode hazelcast 2.6.5+ Optional Integration of the <p:cache> component with hazelcast ehcache 2.7.4+ Optional Integration of the <p:cache> component with ehcache Please ensure that you have only one JAR file of PrimeFaces or a specific PrimeFaces theme in your classpath in order to avoid any issues regarding resource rendering. Currently, PrimeFaces fully supports nonlegacy web browsers with Internet Explorer 10, Safari, Firefox, Chrome, and Opera. The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. The showcase for the recipe is available under http://localhost:8080/pf-cookbook/views/chapter1/yourFirstPage.jsf. AJAX basics with process and update PrimeFaces provides Partial Page Rendering (PPR) and the view-processing feature based on standard JSF 2 APIs to enable choosing what to process in the JSF life cycle and what to render in the end with AJAX. PrimeFaces AJAX Framework is based on standard server-side APIs of JSF 2. On the client side, rather than using the client-side API implementations of JSF, such as Mojarra or MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library, which is well tested and widely adopted. How to do it... We can create a simple page with a command button to update a string property with the current time in milliseconds that is created on the server side and output text to show the value of that string property, as follows: <p:commandButton update="display" action="#{basicPPRBean.updateValue}" value="Update" /> <h:outputText id="display" value="#{basicPPRBean.value}"/> If we want to update multiple components with the same trigger mechanism, we can provide the ID's of the components to the update attribute by providing them with a space, comma, or both, as follows: <p:commandButton update="display1,display2" /> <p:commandButton update="display1 display2" /> <p:commandButton update="display1,display2 display3" /> In addition, there are reserved keywords that are used for a partial update. We can also make use of these keywords along with the ID's of the components, as described in the following table. Some of them come with the JSF standard, and PrimeFaces extends this list with custom keywords. 
Here's the table we talked about: Keyword JSF/PrimeFaces Description @this JSF The component that triggers the PPR is updated @form JSF The encapsulating form of the PPR trigger is updated @none JSF PPR does not change the DOM with an AJAX response @all JSF The whole document is updated as in non-AJAX requests @parent PrimeFaces The parent of the PPR trigger is updated @composite PrimeFaces This is the closest composite component ancestor @namingcontainer PrimeFaces This is the closest naming container ancestor of the current component @next PrimeFaces This is the next sibling @previous PrimeFaces This is the previous sibling @child(n) PrimeFaces This is the nth child @widgetVar(name) PrimeFaces This is a component stated with a given widget variable name The keywords are a server-side part of the PrimeFaces Search Expression Framework (SEF), which provides both server-side and client-side extensions to make it easier to reference components. We can also update a component that resides in a different naming container from the component that triggers the update. In order to achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example of this could be the following: <h:form id="form1"> <p:commandButton update=":form2:display" action="#{basicPPRBean.updateValue}" value="Update"/> </h:form> <h:form id="form2"> <h:outputText id="display" value="#{basicPPRBean.value}"/> </h:form> @Named @ViewScoped public class BasicPPRBean implements Serializable { private String value; public String updateValue() { value = String.valueOf(System.currentTimeMillis()); return null; } // getter / setter } PrimeFaces also provides partial processing, which executes the JSF life cycle phases—apply request values, process validations, update model, and invoke application—for determined components with the process attribute. This provides the ability to do group validation on the JSF pages easily. Mostly group validation needs arise in situations where different values need to be validated in the same form, depending on an action that gets executed. By grouping components for validation, errors that would arise from other components when the page has been submitted can be overcome easily. Components such as commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process partially instead of processing the whole view. Partial processing could become very handy in cases where a drop-down list needs to be populated upon a selection on another dropdown and where there is an input field on the page with the required attribute set to true. This approach also makes immediate subforms and regions obsolete. It will also prevent submission of the whole page; thus, this will result in lightweight requests. Without partially processing the view for the dropdowns, a selection on one of the dropdowns will result in a validation error on the required field. 
A working example for this is shown in the following code snippet: <h:outputText value="Country: " /> <h:selectOneMenu id="countries" value="#{partialProcessing Bean.country}"> <f:selectItems value="#{partialProcessingBean.countries}" /> <p:ajax listener= "#{partialProcessingBean.handleCountryChange}" event="change" update="cities" process="@this"/> </h:selectOneMenu> <h:outputText value="City: " /> <h:selectOneMenu id="cities" value="#{partialProcessingBean.city}"> <f:selectItems value="#{partialProcessingBean.cities}" /> </h:selectOneMenu> <h:outputText value="Email: " /> <h:inputText value="#{partialProcessingBean.email}" required="true" /> With this partial processing mechanism, when a user changes the country, the cities of that country will be populated in the dropdown regardless of whether any input exists for the email field or not. How it works… As illustrated in the partial processing example to update a component in a different naming container, <p:commandButton> is updating the <h:outputText> component that has the display ID and the :form2:display absolute client ID, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. The <h:form>, <h:dataTable>, and composite JSF components, along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable> are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/7/api/javax/faces/component/UIComponent.html, is used by both JSF core implementation and PrimeFaces. There's more… JSF uses : (colon) as the separator for the NamingContainer interface. The client IDs that will be rendered in the source page will be of the kind id1:id2:id3. If needed, the configuration of the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file of the web application, as follows: <context-param> <param-name>javax.faces.SEPARATOR_CHAR</param-name> <param-value>_</param-value> </context-param> It's also possible to escape the : character, if needed, in the CSS files with the character, as :. The problem that might occur with the colon is that it's a reserved keyword for the CSS and JavaScript frameworks, like jQuery, so it might need to be escaped. The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. For the demos of this recipe, refer to the following: Basic Partial Page Rendering is available at http://localhost:8080/pf-cookbook/views/chapter1/basicPPR.jsf Updating Component in a Different Naming Container is available at http://localhost:8080/pf-cookbook/views/chapter1/componentInDifferentNamingContainer.jsf An example of Partial Processing is available at http://localhost:8080/pf-cookbook/views/chapter1/partialProcessing.jsf PrimeFaces selectors PrimeFaces integrates the jQuery Selector API (http://api.jquery.com/category/selectors) with the JSF component-referencing model. 
Partial processing and updating of the JSF components can be done using the jQuery Selector API instead of a regular server-side approach with findComponent(). This feature is called the PrimeFaces Selector (PFS) API. PFS provides an alternative, flexible approach to reference components to be processed or updated partially. PFS is a client-side part of the PrimeFaces SEF, which provides both server-side and client-side extensions to make it easier to reference components. In comparison with regular referencing, there is less CPU server load because the JSF component tree is not traversed on the server side in order to find client IDs. PFS is implemented on the client side by looking at the DOM tree. Another advantage is avoiding container limitations, and thus the cannot find component exception—since the component we were looking for was in a different naming container. The essential advantage of this feature, however, is speed. If we reference a component by an ID, jQuery uses document.getElementById(), a native browser call behind the scene. This is a very fast call, much faster than that on the server side with findComponent(). The second use case, where selectors are faster, is when we have a lot of components with the rendered attributes set to true or false. The JSF component tree is very big in this case, and the findComponent() call is time consuming. On the client side, only the visible part of the component tree is rendered as markup. The DOM is smaller than the component tree and its selectors work faster. In this recipe, we will learn PFS in detail. PFS is recognized when we use @(...) in the process or update attribute of AJAX-ified components. We will use this syntax in four command buttons to reference the parts of the page we are interested in. How to do it… The following code snippet contains two p:panel tags with the input, select, and checkbox components respectively. The first p:commandButton component processes/updates all components in the form(s). The second one processes / updates all panels. The third one processes input, but not select components, and updates all panels. The last button only processes the checkbox components in the second panel and updates the entire panel. <p:messages id="messages" autoUpdate="true"/> <p:panel id="panel1" header="First panel"> <h:panelGrid columns="2"> <p:outputLabel for="name" value="Name"/> <p:inputText id="name" required="true"/> <p:outputLabel for="food" value="Favorite food"/> <h:selectOneMenu id="food" required="true"> ... </h:selectOneMenu> <p:outputLabel for="married" value="Married?"/> <p:selectBooleanCheckbox id="married" required="true" label="Married?"> <f:validator validatorId="org.primefaces.cookbook. validator.RequiredCheckboxValidator"/> </p:selectBooleanCheckbox> </h:panelGrid> </p:panel> <p:panel id="panel2" header="Second panel"> <h:panelGrid columns="2"> <p:outputLabel for="address" value="Address"/> <p:inputText id="address" required="true"/> <p:outputLabel for="pet" value="Favorite pet"/> <h:selectOneMenu id="pet" required="true"> ... </h:selectOneMenu> <p:outputLabel for="gender" value="Male?"/> <p:selectBooleanCheckbox id="gender" required="true" label="Male?"> <f:validator validatorId="org.primefaces.cookbook. 
validator.RequiredCheckboxValidator"/> </p:selectBooleanCheckbox> </h:panelGrid> </p:panel> <h:panelGrid columns="5" style="margin-top:20px;"> <p:commandButton process="@(form)" update="@(form)" value="Process and update all in form"/> <p:commandButton process="@(.ui-panel)" update="@(.ui-panel)" value="Process and update all panels"/> <p:commandButton process="@(.ui-panel :input:not(select))" update="@(.ui-panel)" value="Process inputs except selects in all panels"/> <p:commandButton process="@(#panel2 :checkbox)" update="@(#panel2)" value="Process checkboxes in second panel"/> </h:panelGrid> In terms of jQuery selectors, regular input field, selection, and checkbox controls are all inputs. They can be selected by the :input selector. The following screenshot shows what happens when the third button is pushed. The p:inputText and p:selectBooleanCheckbox components are marked as invalid. The h:selectOneMenu component is not marked as invalid although no value was selected by the user. How it works… The first selector from the @(form) first button selects all forms on the page. The second selector, @(.ui-panel), selects all panels on the page as every main container of PrimeFaces' p:panel component has this style class. Component style classes are usually documented in the Skinning section in PrimeFaces User's Guide (http://www.primefaces.org/documentation.html). The third selector, @(.ui-panel :input:not(select)), only selects p:inputText and p:selectBooleanCheckbox within p:panel. This is why h:selectOneMenu was not marked as invalid in the preceding screenshot. The validation of this component was skipped because it renders itself as an HTML select element. The last selector variant, @(#panel2 :checkbox), intends to select p:selectBooleanCheckbox in the second panel only. In general, it is recommended that you use Firebug (https://getfirebug.com) or a similar browser add-on to explore the generated HTML structure when using jQuery selectors. A common use case is skipping validation for the hidden fields. Developers often hide some form components dynamically with JavaScript. Hidden components get validated anyway, and the form validation can fail if the fields are required or have other validation constraints. The first solution would be to disable the components (in addition to hiding them). The values of disabled fields are not sent to the server. The second solution would be to use jQuery's :visible selector in the process attribute of a command component that submits the form. There's more… PFS can be combined with regular component referencing as well, for example, update="compId1 :form:compId2 @(.ui-tabs :input)". The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. The showcase for the recipe is available at http://localhost:8080/pf-cookbook/views/chapter1/pfs.jsf. Internationalization (i18n) and Localization (L10n) Internationalization (i18n) and Localization (L10n) are two important features that should be provided in the web application's world to make it accessible globally. 
With Internationalization, we are emphasizing that the web application should support multiple languages, and with Localization, we are stating that the text, date, or other fields should be presented in a form specific to a region. PrimeFaces only provides English translations. Translations for other languages should be provided explicitly. In the following sections, you will find details on how to achieve this. Getting ready For internationalization, first we need to specify the resource bundle definition under the application tag in faces-config.xml, as follows: <application> <locale-config> <default-locale>en</default-locale> <supported-locale>tr_TR</supported-locale> </locale-config> <resource-bundle> <base-name>messages</base-name> <var>msg</var> </resource-bundle> </application> A resource bundle is a text file with the .properties suffix that would contain locale-specific messages. So, the preceding definition states that the resource bundle messages_{localekey}.properties file will reside under classpath, and the default value of localekey is en, which stands for English, and the supported locale is tr_TR, which stands for Turkish. For projects structured by Maven, the messages_{localekey}.properties file can be created under the src/main/resources project path. The following image was made in the IntelliJ IDEA: How to do it… To showcase Internationalization, we will broadcast an information message via the FacesMessage mechanism that will be displayed in PrimeFaces' growl component. We need two components—growl itself and a command button—to broadcast the message: <p:growl id="growl" /> <p:commandButton action="#{localizationBean.addMessage}" value="Display Message" update="growl" /> The addMessage method of localizationBean is as follows: public String addMessage() { addInfoMessage("broadcast.message"); return null; } The preceding code uses the addInfoMessage method, which is defined in the static MessageUtil class as follows: public static void addInfoMessage(String str) { FacesContext context = FacesContext.getCurrentInstance(); ResourceBundle bundle = context.getApplication().getResourceBundle(context, "msg"); String message = bundle.getString(str); FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, message, "")); } Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute. By default, locale information is retrieved from the view's locale, and it can be overridden by a string locale key or with a java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. As stated before, PrimeFaces only provides English translations, so in order to localize the calendar, we need to put the corresponding locales into a JavaScript file and include the scripting file to the page. The content for the German locale of the Primefaces.locales property for calendar would be as shown in the following code snippet. 
For the sake of the recipe, only the German locale definition is given and the Turkish locale definition is omitted; you can find it in the showcase application Here's the code snippet we talked about: PrimeFaces.locales['de'] = { closeText: 'Schließen', prevText: 'Zurück', nextText: 'Weiter', monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli', 'August', 'September', 'Oktober', 'November', 'Dezember'], monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'], dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag', 'Freitag', 'Samstag'], dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'], dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'], weekHeader: 'Woche', FirstDay: 1, isRTL: false, showMonthAfterYear: false, yearSuffix: '', timeOnlyTitle: 'Nur Zeit', timeText: 'Zeit', hourText: 'Stunde', minuteText: 'Minute', secondText: 'Sekunde', currentText: 'Aktuelles Datum', ampm: false, month: 'Monat', week: 'Woche', day: 'Tag', allDayText: 'Ganzer Tag' }; The definition of the calendar components both with and without the locale attribute would be as follows: <p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal"/> <p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal"/> <p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/> They will be rendered as follows: How it works… For Internationalization of the PrimeFaces message, the addInfoMessage method retrieves the message bundle via the defined msg variable. It then gets the string from the bundle with the given key by invoking the bundle.getString(str) method. Finally, the message is added by creating a new PrimeFaces message with the FacesMessage.SEVERITY_INFO severity level. There's more… For some components, localization could be accomplished by providing labels to the components via attributes, such as with p:selectBooleanButton: <p:selectBooleanButton value="#{localizationBean.selectedValue}" onLabel="#{msg['booleanButton.onLabel']}" offLabel="#{msg['booleanButton.offLabel']}" /> The msg variable is the resource bundle variable that is defined in the resource bundle definition in the PrimeFaces configuration file. The English version of the bundle key definitions in the messages_en.properties file that resides under the classpath would be as follows: booleanButton.onLabel=Yes booleanButton.offLabel=No The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. For the demos of this recipe, refer to the following: Internationalization is available at http://localhost:8080/pf-cookbook/views/chapter1/internationalization.jsf Localization of the calendar component is available at http://localhost:8080/pf-cookbook/views/chapter1/localization.jsf Localization with resources is available at http://localhost:8080/pf-cookbook/views/chapter1/localizationWithResources.jsf For already translated locales of the calendar, see http://code.google.com/p/primefaces/wiki/PrimeFacesLocales. 
Summary In this article, we learned about setting up and configuring the PrimeFaces library, AJAX basics with process and update, PrimeFaces selectors, and Internationalization (i18n) and Localization (L10n). Resources for Article: Further resources on this subject: Components of PrimeFaces Extensions [Article] JSF2 composite component with PrimeFaces [Article] Getting Started with PrimeFaces [Article]

Installing and Configuring Network Monitoring Software

Packt
02 Jun 2015
9 min read
This article written by Bill Pretty, Glenn Vander Veer, authors of the book Building Networks and Servers Using BeagleBone will serve as an installation guide for the software that will be used to monitor the traffic on your local network. These utilities can help determine which devices on your network are hogging the bandwidth, which slows down the network for other devices on your network. Here are the topics that we are going to cover: Installing traceroute and My Trace Route (MTR or Matt's Traceroute): These utilities will give you a real-time view of the connection between one node and another Installing Nmap: This utility is a network scanner that can list all the hosts on your network and all the services available on those hosts Installing iptraf-ng: This utility gathers various network traffic information and statistics (For more resources related to this topic, see here.) Installing Traceroute Traceroute is a tool that can show the path from one node on a network to another. This can help determine the ideal placement of a router to maximize wireless bandwidth in order to stream music and videos from the BeagleBone server to remote devices. Traceroute can be installed with the following command: apt-get install traceroute   Once Traceroute is installed, it can be run to find the path from the BeagleBone to any server anywhere in the world. For example, here's the route from my BeagelBone to the Canadian Google servers: Now, it is time to decipher all the information that is presented. This first command line tells traceroute the parameters that it must use: traceroute to google.ca (74.125.225.23), 30 hops max, 60 byte packets This gives the hostname, the IP address returned by the DNS server, the maximum number of hops to be taken, and the size of the data packet to be sent. The maximum number of hops can be changed with the –m flag and can be up to 255. In the context of this book, this will not have to be changed. After the first line, the next few lines show the trip from the BeagleBone, through the intermediate hosts (or hops), to the Google.ca server. Each line follows the following format: hop_number host_name (host IP_address) packet_round_trip_times From the command that was run previously (specifically hop number 4): 2 10.149.206.1 (10.149.206.1) 15.335 ms 17.319 ms 17.232 ms Here's a breakdown of the output: The hop number 2: This is a count of the number of hosts between this host and the originating host. The higher the number, the greater is the number of computers that the traffic has to go through to reach its destination. 10.149.206.1: This denotes the hostname. This is the result of a reverse DNS lookup on the IP address. If no information is returned from the DNS query (as in this case), the IP address of the host is given instead. (10.149.206.1): This is the actual host IP address. Various numbers: This is the round-trip time for a packet to go from the BeagleBone to the server and back again. These numbers will vary depending on network traffic, and lower is better. Sometimes, the traceroute will return some asterisks (*). This indicates that the packet has not been acknowledged by the host. If there are consecutive asterisks and the final destination is not reached, then there may be a routing problem. In a local network trace, it most likely is a firewall that is blocking the data packet. 
Installing My Traceroute My Traceroute (MTR) is an extension of traceroute, which probes the routers on the path from the packet source and destination, and keeps track of the response times of the hops. It does this repeatedly so that the response times can be averaged. Now, install mtr with the following command: sudo apt-get install mtr After it is run, mtr will provide quite a bit more information to look at, which would look like the following: While the output may look similar, the big advantage over traceroute is that the output is constantly updated. This allows you to accumulate trends and averages and also see how network performance varies over time. When using traceroute, there is a possibility that the packets that were sent to each hop happened to make the trip without incident, even in a situation where the route is suffering from intermittent packet loss. The mtr utility allows you to monitor this by gathering data over a wider range of time. Here's an mtr trace from my Beaglebone to my Android smartphone: Here's another trace, after I changed the orientation of the antennae of my router: As you can see, the original orientation was almost 100 milliseconds faster for ping traffic. Installing Nmap Nmap is designed to allow the scanning of networks in order to determine which hosts are up and what services are they offering. Nmap supports a large number of scanning options, which are overkill for what will be done in this book. Nmap is installed with the following command: sudo apt-get install nmap Answer Yes to install nmap and its dependent packages. Using Nmap After it is installed, run the following command to see all the hosts that are currently on the network: nmap –T4 –F <your_local_ip_range> The option -T4 sets the timing template to be used, and the -F option is for fast scanning. There are other options that can be used and found via the nmap manpage. Here, your_local_ip_range is within the range of addresses assigned by your router. Here's a node scan of my local network. If you have a lot of devices on your local network, this command may take a long time to complete. Now, I know that I have more nodes on my network, but they don't show up. This is because the command we ran didn't tell nmap to explicitly query each IP address to see whether the host responds but to query common ports that may be open to traffic. Instead, only use the -Pn option in the command to tell nmap to scan all the ports for every address in the range. This will scan more ports on each address to determine whether the host is active or not. Here, we can see that there are definitely more hosts registered in the router device table. This scan will attempt to scan a host IP address even if the device is powered off. Resetting the router and running the same scan will scan the same address range, but it will not return any device names for devices that are not powered at the time of the scan. You will notice that after scanning, nmap reports that some IP addresses' ports are closed and some are filtered. Closed ports are usually maintained on the addresses of devices that are locked down by their firewall. Filtered ports are on the addresses that will be handled by the router because there actually isn't a node assigned to these addresses. 
Here's a part of the output from an nmap scan of my Windows machine: Here's a part of the output of a scan of the BeagleBone: Installing iptraf-ng Iptraf-ng is a utility that monitors traffic on any of the interfaces or IP addresses on your network via custom filters. Because iptraf-ng is based on the ncurses libraries, we will have to install them first before downloading and compiling the actual iptraf-ng package. To install ncurses, run the following command: sudo apt-get install libncurses5-dev Here's how you will install ncurses and its dependent packages: Once ncurses is installed, download and extract the iptraf-ng tarball so that it can be built. At the time of writing this book, iptrf-ng's version 1.1.4 was available. This will change over time, and a quick search on Google will give you the latest and greatest version to download. You can download this version with the following command: wget https://fedorahosted.org/releases/i/p/iptraf-ng/iptraf-ng- <current_version_number>.tar.gz The following screenshot shows how to download the iptraf-ng tarball: After we have completed the downloading, extract the tarball using the following command: tar –xzf iptraf-ng-<current_version_number>.tar.gz Navigate to the iptraf-ng directory created by the tar command and issue the following commands: ./configure make sudo make install After these commands are complete, iptraf-ng is ready to run, using the following command: sudo iptraf-ng When the program starts, you will be presented with the following screen: Configuring iptraf-ng As an example, we are going to monitor all incoming traffic to the BeagleBone. In order to do this, iptraf-ng should be configured. Selecting the Configure... menu item will show you the following screen: Here, settings can be changed by highlighting an option in the left-hand side window and pressing Enter to select a new value, which will be shown in the Current Settings window. In this case, I have enabled all the options except Logging. Exit the configuration screen and enter the Filter Status screen. This is where we will set up the filter to only monitor traffic coming to the BeagleBone and from it. Then, the following screen will be presented: Selecting IP... will create an IP filter, and the following subscreen will pop up: Selecting Define new filter... will allow the creation and saving of a filter that will only display traffic for the IP address and the IP protocols that are selected, as shown in the following screenshot: Here, I have put in the BeagleBone's IP address, and to match all IP protocols. Once saved, return to the main menu and select IP traffic monitor. Here, you will be able to select the network interfaces to be monitored. Because my BeagleBone is connected to my wired network, I have selected eth0. The following screenshot should shows us the options: If all went well with your filter, you should see traffic to your BeagleBone and from it. Here are the entries for my PuTTy session; 192.168.17.2 is my Windows 8 machine, and 192.168.17.15 is my BeagleBone: Here's an image of the traffic generated by browsing the DLNA server from the Windows Explorer: Moreover, here's the traffic from my Android smartphone running a DLNA player, browsing the shared directories that were set up: Summary In this article, you saw how to install and configure the software that will be used to monitor the traffic on your local network. 
With these programs and a bit of experience, you can determine which devices on your network are hogging the bandwidth and find out whether you have any unauthorized users. Resources for Article: Further resources on this subject: Learning BeagleBone [article] Protecting GPG Keys in BeagleBone [article] Home Security by BeagleBone [article]

Truly Software-defined, Policy-based Management

Packt
02 Jun 2015
14 min read
In this article, written by Cedric Rajendran, author of the book Getting Started with VMware Virtual SAN, we will discuss one of the key characteristics of Virtual SAN called Storage Policy Based Management (SPBM). Traditionally, storage capabilities are tiered and provisioned. Some of the key attributes for tiering are performance, capacity, and availability. The actual implementation of tiers is performed at the hardware level, governed by physical disk capabilities and RAID configuration. VSAN, however, establishes the capabilities at the software layer through policies. Here we will closely review: Why is SPBM used? Attributes that are configurable through SPBM Understand how SPBM works Overall, we will discuss the various permutations and combinations of the policy-based management of storage, and how this method modernizes storage provisioning and paves the way for being truly software-defined. (For more resources related to this topic, see here.) Why do we need policies? Back in the '90s, Gartner discussed tiered storage architecture with traditional storage arrays. Devices were tiered based on their cost and data on certain factors such as criticality, age, performance, and a few others. This meant that some data made their way to the fastest and most reliable tier and other data into slower and less expensive ones. This tiering was done at the device level, that is, the storage administrator segmented devices based on cost or there was heterogeneous storage presented to servers varying between high-end, mid-end, and low-end arrays. An administrator would then manually provision data on the respective tiers. There have been several advancements with storage arrays automating tiering at the array level. With virtualization however, data seldom static and the ability to move data around through features such as Storage vMotion gave the right level of agility to the vSphere administrators. The flip side of this is that it became very error prone and difficult to maintain compliance. For example, during maintenance tasks, a high I/O intensive virtual machine may be migrated to a low IOPS capable datastore; this would silently lead to a performance issue for the application and overall user experience. Hence, there was a need for a very high degree of control, automation, and a VM-centric approach to satisfy each virtual machine's storage requirements. The solution for this problem is SPBM. With SPBM, we are able to build very granular policies to each VMDK associated to a virtual machine and these policies follow the virtual machine wherever they go. Understanding SPBM An SPBM policy can be thought of as a blueprint or plan that outlines the storage performance, capacity and availability requirement of a virtual machine. The policy is then associated with individual objects (VMDK). These policies are then applied by replicating, distributing and caching the objects. At this juncture, it suffices to understand that objects are parts of a virtual machine; a virtual machine disk (VMDK) and the snapshot delta files are examples of objects. Let's discuss this with an example of RAID 0 concept. In RAID 0, data is striped, that is, data is broken down into blocks and each block is written on a different disk drives/controller in the RAID group so that, cumulative IOPS of all disks in the RAID group are efficiently used, and this in turn increases the performance. Similarly, we can define a policy with SPBM for an object (VMDK) that will stripe the object across a VSAN datastore. 
It is mandatory for each virtual machine that is to be deployed on a VSAN datastore to be associated with a policy. If one has not been defined, a default, predefined policy will be applied. In a nutshell, the capabilities of the VSAN datastore are abstracted and presented in such a way that an object can distinctly be placed adhering to the very specific needs of that object, while another virtual machine's objects residing on the same VSAN datastore can have a totally different set of capabilities. An important component that enables this abstraction is vStorage APIs for Storage Awareness (VASA); more details on VASA are discussed at the end of this article. The communication workflow is as follows: Define the capabilities required for a VM in a storage policy in vCenter Policy information is cascaded to VSAN through VASA VASA assesses whether VSAN can accommodate the capability requirement and reports compliance on a per-storage object basis Let's understand this concept with a simple example. Consider a fileserver virtual machine that comprises two VMDKs or objects, one of which is for the OS and the other where the actual data is read from or written to by several users. The OS VMDK requires lower IOPS capability, while the other VMDK is very I/O intensive and requires a significantly faster disk. The application team that maintains this server demands that this workload be placed in a tier 1 datastore, which in turn translates to a LUN from a mid-range or high-end array, the cost of which is rather high. A vSphere administrator can argue that the OS VMDK can be part of a tier 2 or tier 3 VMFS datastore that is less expensive, whereas the database VMDK can be placed on a tier 1 datastore to meet the business SLAs for storage optimization. While this is theoretically achievable, in reality it poses significant administrative overhead and becomes a serious sore point if there are any failures in the datastore where the files reside. Troubleshooting and restoring the VM to the running state will be quite a cumbersome and time-consuming task. Now imagine that a policy is able to cater to the storage requirements of this VM: an administrator carves out a policy as per the requirements and associates it with the VM's objects residing on the VSAN datastore. After this one-time effort, the policy ensures that the virtual machine is compliant with the demands of the application team throughout its lifecycle. Another interesting and useful feature of SPBM is that, during the lifecycle of the virtual machine, the administrator can amend the policies and reapply them without disruption or downtime. To summarize, with Storage Policy Based Management, the virtual machine deployment is tied to the Virtual SAN capabilities, thereby removing the administrative overhead and complication associated with manually building this setup. VSAN datastore capabilities VSAN datastore capabilities help define the performance, availability, and reliability of an object, as well as capabilities that indirectly govern the capacity consumed by the object. Let's dive into the specific capabilities that can be abstracted and managed.
The following is a list of capabilities that can be defined on a VSAN datastore: Number of disk stripes per object Number of failures to tolerate Flash read cache reservation Force provisioning Object space reservation Accessing the VSAN datastore capabilities We can access these capabilities through the vSphere web client as described in the following steps and screenshots: Connect to the vCenter server through the vSphere web client. Navigate to Home | VM Storage Policies, as shown here: Choose VSAN from the dropdown for Rules based on vendor specific capabilities. Create (or edit) a VM storage policy, as shown in the following screenshot: Define Rule Set of the policy describing the storage requirements of an object. Review the configuration settings and click on Finish. Number of disk stripes per object This capability simulates the traditional RAID 0 concept by defining the number of physical disks across which each replica of a storage object is striped. In a typical RAID 0, this means that there is concurrent and parallel I/O running into multiple disks. However, in the context of VSAN, this raises a few questions. Consider this typical scenario: A disk group can have a maximum of one SSD All I/O read cache and write buffer are routed first to SSD I/O is then destaged from SSD to magnetic disks How will having more stripes improve performance if SSD intercepts all I/O? The answer to this question is that it depends, and cannot be administratively controlled. However, at a high level, performance improvement can be witnessed. If the structure of the object is spread across magnetic disks from different hosts in the cluster, then multiple SSDs and magnetic disks will be used. This is very similar to the traditional RAID 0. Another influencing factor is how I/O moves from SSD to magnetic disks. Number of disk stripes per object is by default 1. There can be a maximum of 12 stripes per object. Number of failures to tolerate This capability defines the availability requirements of the object. In this context, the nature of failure can be at host, network, and disk level in the cluster. Based on the value defined for the number of failures to tolerate (n), there are n+1 replicas that are built to sustain n failures. It is important to understand that the object can sustain n concurrent failures, that is, all permutations and combinations of host, network, and/or disk-level failures can be sustained until n failures. This is similar to a RAID 1 mirroring concept, albeit replicas are placed on different hosts. Number of failures to tolerate is by default set to 1. We can have a maximum value of 3. Scenario based examples Outlined here are three scenarios demonstrating the placements of components of an object. Note that objects are of four types. For easier understanding, we will discuss scenarios based on the VMDK object. We'll sample VMDK since these are the most sensitive and relevant in the context of objects on the VSAN datastore. In addition, these are some illustrations of how VSAN may place the objects by adhering to the policies defined, and this may vary depending on resource availability and layout specific to each deployment. Scenario 1 Number of failures to tolerate is equal to 1. In the first scenario, we have crafted a simple policy to tolerate one failure. The virtual machine objects are expected to have a mirrored copy and the objective is to eliminate a single point of failure. 
The typical use for this policy is an operating system VMDK: Scenario 2 Number of failures to tolerate is equal to 1. Number of disk stripes per object is equal to 2. In this scenario, we increase the stripe width of the object, while keeping the failure tolerance left at 1. The objective here is to improve the performance as well as ensure that there is no single point of failure. The expected layout is as shown here; the object is mirrored and striped: Scenario 3 Number of failures to tolerate is equal to 2. Number of disk stripes per object is equal to 2. Extending from the preceding scenario, we increase the failure tolerance level to 2. Effectively, two mirrors can fail, so the layout will expand as illustrated in the following diagram. Note that to facilitate n failures, you would need 2n+1 nodes. An administrator can validate the actual physical disk placement of the components, that is, the parts that make up the object from the virtual machines' Manage tab from the vSphere web client. Navigate to VM | Manage | VM Storage Policies: Flash read cache reservation By default, all virtual machine objects based on demand share the read cache available from the flash device that is part of each disk group. However, there may be scenarios wherein specific objects require reserved read cache, typically for a read intensive workload that needs to have the maximum amount of its reads to be serviced by a flash device. In such cases, an administrator can explicitly define a percentage of flash cache to be reserved for the object. The flash read cache reservation capability defines the amount of flash capacity that will be reserved/blocked for the storage object to be used as read cache. The reservation is displayed as a percentage of the object. You can have a minimum of 0 percent and can go up to 100 percent, that is, you can reserve the entire object size on the flash disk for read cache. For purposes of granularity, since the flash device may run into terabytes of capacity, the value for flash cache can be specified up to 4 decimal places; for example, it can be set to 0.0001 percent. As with any reservation concept, blocking resources for one object implies the resource is unavailable for another object. Therefore, unless there is a specific need, this should be left at default and Virtual SAN should be allowed to have control over the allocation. This will ensure adequate capacity distribution between objects. The default value is 0 percent and the maximum value is 100 percent. Force provisioning We create policies to ensure that the storage requirements of a virtual machine object is strictly adhered to. In the event that the VSAN datastore cannot satisfy the storage requirements specified by the policy, the virtual machine will not be provisioned. This capability allows for a strict compliance check. However, it may also become an obstacle when you need to urgently deploy virtual machines but the datastore does not satisfy the storage requirements of the virtual machine. The force provisioning capability allows an administrator to override this behavior. By default, Force Provisioning is set to No. By toggling this setting to Yes, virtual machines can be forcefully provisioned. It is important to understand that an administrator should remediate the constraints that lead to provisioning failing in the first place. It has a boolean value, which is set to No by default. Object space reservation Virtual machines provisioned on Virtual SAN are, by default, provisioned as thin disks. 
The Object Space Reservation parameter defines the logical size of the storage object or, in other words, whether the specific object should remain thin, partially, or fully allocated. While this is not entirely new and is similar to the traditional practice of either thin provisioning or thick provisioning a VMDK, VSAN provides a greater degree of control by letting the vSphere administrators choose the percentage of disk that should be thick provisioned. The default value is 0 percent and the maximum value is 100 percent. Under the hood – SPBM It is important to understand how the abstraction works under the hood in order to surface the Virtual SAN capabilities, which in turn help to create and associate policies with virtual machines. The following section about VASA and managing storage providers is included for better understanding; you may not run into a situation where you need to make any configuration changes to storage providers. vSphere APIs for Storage Awareness To understand VASA better, let's consider a scenario wherein an administrator is deploying a virtual machine on a traditional SAN array. He would need to choose the appropriate datastore to suit the capabilities and requirements of the virtual machine or certain business requirements. For instance, there could be workloads that need to be deployed in a tier 1 LUN. The existing practice to ensure that the right virtual machine gets deployed on the right datastore involved rather archaic styles of labelling, or simply asking the storage administrator about the capability of the LUN. Now, replace this methodology with a mechanism to identify the storage capabilities through an API. VASA provides such a capability and aids in identifying the specific attributes of the array and passes on these capabilities to vCenter. This implies that a vSphere administrator can have end-to-end visibility through a single management plane of vCenter. Storage DRS, storage health, and capacity monitoring, to name a few, are very useful and effective features implemented through VASA. To facilitate VASA, storage array vendors create plugins called vendor/storage providers. These plugins allow storage vendors to publish the capabilities to vCenter, which in turn surfaces them in the UI. For VMware Virtual SAN, the VSAN storage provider is developed by VMware and built into ESXi hypervisors. By enabling VSAN on a cluster, the plugins get automatically registered with vCenter. The VSAN storage provider surfaces the VSAN datastores' capabilities, which in turn are used to create appropriate policies. Managing Virtual SAN storage providers Once Virtual SAN is enabled and storage provider registration is complete, an administrator can verify this through the vSphere web client: Navigate to the vCenter server in the vSphere web client. Click on the Manage tab, and click on Storage Providers. The expected outcome would be to have one VSAN provider online and the remaining storage providers in standby mode. The following screenshot shows a three-node cluster: If the host that currently has the online storage provider fails, another host will bring its provider online. Summary In this article, we discussed the significance of Storage Policy Based Management in detail and how it plays a key role in defining storage provisioning at the software layer. We further discussed the VSAN datastore capabilities with scenarios and how it all operates under the hood. A short Python sketch of the layout arithmetic behind these capabilities follows.
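The sketch below ties the capabilities together. It is illustrative arithmetic only, not a VMware API: given the number of failures to tolerate (FTT), the stripe width, and the object space reservation, it derives the replica count (FTT + 1), the minimum number of hosts (2 x FTT + 1), the number of data components (replicas x stripes), and the space reserved up front for an otherwise thin object. The function name and the example VMDK size are assumptions made for the illustration.

# Illustrative arithmetic for the VSAN policy capabilities discussed above.
# Not a VMware API; the formulas mirror the rules stated in the text:
#   replicas        = failures_to_tolerate + 1
#   minimum hosts   = 2 * failures_to_tolerate + 1
#   data components = replicas * stripe_width (witness components not counted here)

def describe_policy(failures_to_tolerate, stripe_width,
                    object_space_reservation_pct, vmdk_size_gb):
    replicas = failures_to_tolerate + 1
    min_hosts = 2 * failures_to_tolerate + 1
    data_components = replicas * stripe_width
    reserved_gb = vmdk_size_gb * object_space_reservation_pct / 100.0
    return {
        "replicas": replicas,
        "minimum_hosts": min_hosts,
        "data_components": data_components,
        "space_reserved_gb": reserved_gb,
    }

if __name__ == "__main__":
    # Scenario 3 from this article: FTT=2 and stripe width=2, applied to a
    # hypothetical 100 GB VMDK that is 25 percent thick provisioned.
    print(describe_policy(failures_to_tolerate=2, stripe_width=2,
                          object_space_reservation_pct=25, vmdk_size_gb=100))
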

Packt
02 Jun 2015
18 min read

Integration of ChefBot Hardware and Interfacing it into ROS, Using Python

In this article by Lentin Joseph, author of the book Learning Robotics Using Python, we will see how to assemble this robot using these parts and also the final interfacing of sensors and other electronics components of this robot to Tiva C LaunchPad. We will also try to interface the necessary robotic components and sensors of ChefBot and program it in such a way that it will receive the values from all sensors and control the information from the PC. Launchpad will send all sensor values via a serial port to the PC and also receive control information (such as reset command, speed, and so on) from the PC. After receiving sensor values from the PC, a ROS Python node will receive the serial values and convert it to ROS Topics. There are Python nodes present in the PC that subscribe to the sensor's data and produces odometry. The data from the wheel encoders and IMU values are combined to calculate the odometry of the robot and detect obstacles by subscribing to the ultrasonic sensor and laser scan also, controlling the speed of the wheel motors by using the PID node. This node converts the linear velocity command to differential wheel velocity. After running these nodes, we can run SLAM to map the area and after running SLAM, we can run the AMCL nodes for localization and autonomous navigation. In the first section of this article, Building ChefBot hardware, we will see how to assemble the ChefBot hardware using its body parts and electronics components. (For more resources related to this topic, see here.) Building ChefBot hardware The first section of the robot that needs to be configured is the base plate. The base plate consists of two motors and its wheels, caster wheels, and base plate supports. The following image shows the top and bottom view of the base plate: Base plate with motors, wheels, and caster wheels The base plate has a radius of 15cm and motors with wheels are mounted on the opposite sides of the plate by cutting a section from the base plate. A rubber caster wheel is mounted on the opposite side of the base plate to give the robot good balance and support for the robot. We can either choose ball caster wheels or rubber caster wheels. The wires of the two motors are taken to the top of the base plate through a hole in the center of the base plate. To extend the layers of the robot, we will put base plate supports to connect the next layers. Now, we can see the next layer with the middle plate and connecting tubes. There are hollow tubes, which connect the base plate and the middle plate. A support is provided on the base plate for hollow tubes. The following figure shows the middle plate and connecting tubes: Middle plate with connecting tubes The connecting tubes will connect the base plate and the middle plate. There are four hollow tubes that connect the base plate to the middle plate. One end of these tubes is hollow, which can fit in the base plate support, and the other end is inserted with a hard plastic with an option to put a screw in the hole. The middle plate has no support except four holes: Fully assembled robot body The middle plate male connector helps to connect the middle plate and the top of the base plate tubes. At the top of the middle plate tubes, we can fit the top plate, which has four supports on the back. We can insert the top plate female connector into the top plate support and this is how we will get the fully assembled body of the robot. The bottom layer of the robot can be used to put the Printed Circuit Board (PCB) and battery. 
In the middle layer, we can put Kinect and Intel NUC. We can put a speaker and a mic if needed. We can use the top plate to carry food. The following figure shows the PCB prototype of robot; it consists of Tiva C LaunchPad, a motor driver, level shifters, and provisions to connect two motors, ultrasonic, and IMU: ChefBot PCB prototype The board is powered with a 12 V battery placed on the base plate. The two motors can be directly connected to the M1 and M2 male connectors. The NUC PC and Kinect are placed on the middle plate. The Launchpad board and Kinect should be connected to the NUC PC via USB. The PC and Kinect are powered using the same 12 V battery itself. We can use a lead-acid or lithium-polymer battery. Here, we are using a lead-acid cell for testing purposes. We will migrate to lithium-polymer for better performance and better backup. The following figure shows the complete assembled diagram of ChefBot: Fully assembled robot body After assembling all the parts of the robot, we will start working with the robot software. ChefBot's embedded code and ROS packages are available in GitHub. We can clone the code and start working with the software. Configuring ChefBot PC and setting ChefBot ROS packages In ChefBot, we are using Intel's NUC PC to handle the robot sensor data and its processing. After procuring the NUC PC, we have to install Ubuntu 14.04.2 or the latest updates of 14.04 LTS. After the installation of Ubuntu, install complete ROS and its packages. We can configure this PC separately, and after the completion of all the settings, we can put this in to the robot. The following are the procedures to install ChefBot packages on the NUC PC. Clone ChefBot's software packages from GitHub using the following command: $ git clone https://github.com/qboticslabs/Chefbot_ROS_pkg.git We can clone the code in our laptop and copy the chefbot folder to Intel's NUC PC. The chefbot folder consists of the ROS packages of ChefBot. In the NUC PC, create a ROS catkin workspace, copy the chefbot folder and move it inside the src directory of the catkin workspace. Build and install the source code of ChefBot by simply using the following command This should be executed inside the catkin workspace we created: $ catkin_make If all dependencies are properly installed in NUC, then the ChefBot packages will build and install in this system. After setting the ChefBot packages on the NUC PC, we can switch to the embedded code for ChefBot. Now, we can connect all the sensors in Launchpad. After uploading the code in Launchpad, we can again discuss ROS packages and how to run it. Interfacing ChefBot sensors with Tiva C LaunchPad We have discussed interfacing of individual sensors that we are going to use in ChefBot. In this section, we will discuss how to integrate sensors into the Launchpad board. The Energia code to program Tiva C LaunchPad is available on the cloned files at GitHub. The connection diagram of Tiva C LaunchPad with sensors is as follows. From this figure, we get to know how the sensors are interconnected with Launchpad: Sensor interfacing diagram of ChefBot M1 and M2 are two differential drive motors that we are using in this robot. The motors we are going to use here is DC Geared motor with an encoder from Pololu. The motor terminals are connected to the VNH2SP30 motor driver from Pololu. One of the motors is connected in reverse polarity because in differential steering, one motor rotates opposite to the other. 
If we send the same control signal to both the motors, each motor will rotate in the opposite direction. To avoid this condition, we will connect it in opposite polarities. The motor driver is connected to Tiva C LaunchPad through a 3.3 V-5 V bidirectional level shifter. One of the level shifter we will use here is available at: https://www.sparkfun.com/products/12009. The two channels of each encoder are connected to Launchpad via a level shifter. Currently, we are using one ultrasonic distance sensor for obstacle detection. In future, we could expand this number, if required. To get a good odometry estimate, we will put IMU sensor MPU 6050 through an I2C interface. The pins are directly connected to Launchpad because MPU6050 is 3.3 V compatible. To reset Launchpad from ROS nodes, we are allocating one pin as the output and connected to reset pin of Launchpad. When a specific character is sent to Launchpad, it will set the output pin to high and reset the device. In some situations, the error from the calculation may accumulate and it can affect the navigation of the robot. We are resetting Launchpad to clear this error. To monitor the battery level, we are allocating another pin to read the battery value. This feature is not currently implemented in the Energia code. The code you downloaded from GitHub consists of embedded code. We can see the main section of the code here and there is no need to explain all the sections because we already discussed it. Writing a ROS Python driver for ChefBot After uploading the embedded code to Launchpad, the next step is to handle the serial data from Launchpad and convert it to ROS Topics for further processing. The launchpad_node.py ROS Python driver node interfaces Tiva C LaunchPad to ROS. The launchpad_node.py file is on the script folder, which is inside the chefbot_bringup package. The following is the explanation of launchpad_node.py in important code sections: #ROS Python client import rospy import sys import time import math   #This python module helps to receive values from serial port which execute in a thread from SerialDataGateway import SerialDataGateway #Importing required ROS data types for the code from std_msgs.msg import Int16,Int32, Int64, Float32, String, Header, UInt64 #Importing ROS data type for IMU from sensor_msgs.msg import Imu The launchpad_node.py file imports the preceding modules. The main modules we can see is SerialDataGateway. This is a custom module written to receive serial data from the Launchpad board in a thread. We also need some data types of ROS to handle the sensor data. The main function of the node is given in the following code snippet: if __name__ =='__main__': rospy.init_node('launchpad_ros',anonymous=True) launchpad = Launchpad_Class() try:      launchpad.Start()    rospy.spin() except rospy.ROSInterruptException:    rospy.logwarn("Error in main function")   launchpad.Reset_Launchpad() launchpad.Stop() The main class of this node is called Launchpad_Class(). This class contains all the methods to start, stop, and convert serial data to ROS Topics. In the main function, we will create an object of Launchpad_Class(). After creating the object, we will call the Start() method, which will start the serial communication between Tiva C LaunchPad and PC. If we interrupt the driver node by pressing Ctrl + C, it will reset the Launchpad and stop the serial communication between the PC and Launchpad. The following code snippet is from the constructor function of Launchpad_Class(). 
In the following snippet, we will retrieve the port and baud rate of the Launchpad board from ROS parameters and initialize the SerialDateGateway object using these parameters. The SerialDataGateway object calls the _HandleReceivedLine() function inside this class when any incoming serial data arrives on the serial port. This function will process each line of serial data and extract, convert, and insert it to the appropriate headers of each ROS Topic data type: #Get serial port and baud rate of Tiva C Launchpad port = rospy.get_param("~port", "/dev/ttyACM0") baudRate = int(rospy.get_param("~baudRate", 115200))   ################################################################# rospy.loginfo("Starting with serial port: " + port + ", baud rate: " + str(baudRate))   #Initializing SerialDataGateway object with serial port, baud rate and callback function to handle incoming serial data self._SerialDataGateway = SerialDataGateway(port, baudRate, self._HandleReceivedLine) rospy.loginfo("Started serial communication")     ###################################################################Subscribers and Publishers   #Publisher for left and right wheel encoder values self._Left_Encoder = rospy.Publisher('lwheel',Int64,queue_size = 10) self._Right_Encoder = rospy.Publisher('rwheel',Int64,queue_size = 10)   #Publisher for Battery level(for upgrade purpose) self._Battery_Level = rospy.Publisher('battery_level',Float32,queue_size = 10) #Publisher for Ultrasonic distance sensor self._Ultrasonic_Value = rospy.Publisher('ultrasonic_distance',Float32,queue_size = 10)   #Publisher for IMU rotation quaternion values self._qx_ = rospy.Publisher('qx',Float32,queue_size = 10) self._qy_ = rospy.Publisher('qy',Float32,queue_size = 10) self._qz_ = rospy.Publisher('qz',Float32,queue_size = 10) self._qw_ = rospy.Publisher('qw',Float32,queue_size = 10)   #Publisher for entire serial data self._SerialPublisher = rospy.Publisher('serial', String,queue_size=10) We will create the ROS publisher object for sensors such as the encoder, IMU, and ultrasonic sensor as well as for the entire serial data for debugging purpose. We will also subscribe the speed commands for the left-hand side and the right-hand side wheel of the robot. When a speed command arrives on Topic, it calls the respective callbacks to send speed commands to the robot's Launchpad: self._left_motor_speed = rospy.Subscriber('left_wheel_speed',Float32,self._Update_Left_Speed) self._right_motor_speed = rospy.Subscriber('right_wheel_speed',Float32,self._Update_Right_Speed) After setting the ChefBot driver node, we need to interface the robot to a ROS navigation stack in order to perform autonomous navigation. The basic requirement for doing autonomous navigation is that the robot driver nodes, receive velocity command from ROS navigational stack. The robot can be controlled using teleoperation. In addition to these features, the robot must be able to compute its positional or odometry data and generate the tf data for sending into navigational stack. There must be a PID controller to control the robot motor velocity. The following ROS package helps to perform these functions. The differential_drive package contains nodes to perform the preceding operation. We are reusing these nodes in our package to implement these functionalities. The following is the link for the differential_drive package in ROS: http://wiki.ros.org/differential_drive The following figure shows how these nodes communicate with each other. 
We can also discuss the use of the other nodes: The purpose of each node in the chefbot_bringup package is as follows: twist_to_motors.py: This node will convert the ROS Twist command, or linear and angular velocity, into individual motor velocity targets. The target velocities are published at a rate of ~rate Hertz, and the node keeps publishing the velocity timeout_ticks times after the Twist messages stop. (A standalone Python sketch of this Twist-to-wheel-velocity conversion, together with the PID step performed by pid_velocity.py, is given at the end of this article.) The following are the Topics and parameters that will be published and subscribed by this node: Publishing Topics: lwheel_vtarget (std_msgs/Float32): This is the target velocity of the left wheel (m/s). rwheel_vtarget (std_msgs/Float32): This is the target velocity of the right wheel (m/s). Subscribing Topics: Twist (geometry_msgs/Twist): This is the target Twist command for the robot. The linear velocity in the x direction and the angular velocity theta of the Twist messages are used in this robot. Important ROS parameters: ~base_width (float, default: 0.1): This is the distance between the robot's two wheels in meters. ~rate (int, default: 50): This is the rate (Hertz) at which the velocity targets are published. ~timeout_ticks (int, default: 2): This is the number of velocity target messages published after the Twist messages stop. pid_velocity.py: This is a simple PID controller that controls the speed of each motor by taking feedback from the wheel encoders. In a differential drive system, we need one PID controller for each wheel. It reads the encoder data from each wheel and controls the speed of each wheel. Publishing Topics: motor_cmd (Float32): This is the final output of the PID controller that goes to the motor. We can change the range of the PID output using the out_min and out_max ROS parameters. wheel_vel (Float32): This is the current velocity of the robot wheel in m/s. Subscribing Topics: wheel (Int16): This Topic is the output of a rotary encoder. There are individual Topics for each encoder of the robot. wheel_vtarget (Float32): This is the target velocity in m/s. Important parameters: ~Kp (float, default: 10): This parameter is the proportional gain of the PID controller. ~Ki (float, default: 10): This parameter is the integral gain of the PID controller. ~Kd (float, default: 0.001): This parameter is the derivative gain of the PID controller. ~out_min (float, default: -255): This is the minimum limit of the value sent to the motor; together with out_max, it bounds the motor_cmd Topic. ~out_max (float, default: 255): This is the maximum limit of the value sent to the motor on the motor_cmd Topic. ~rate (float, default: 20): This is the rate (Hertz) of publishing the wheel_vel Topic. ticks_meter (float, default: 20): This is the number of wheel encoder ticks per meter. This is a global parameter because it's used in other nodes too. vel_threshold (float, default: 0.001): If the robot velocity drops below this parameter, we consider the wheel as stopped; that is, if the velocity of the wheel is less than vel_threshold, we consider it as zero. encoder_min (int, default: -32768): This is the minimum value of the encoder reading. encoder_max (int, default: 32768): This is the maximum value of the encoder reading. wheel_low_wrap (int, default: 0.3 * (encoder_max - encoder_min) + encoder_min) and wheel_high_wrap (int, default: 0.7 * (encoder_max - encoder_min) + encoder_min): These two values decide whether the encoder reading has wrapped around in the negative or positive direction, so that the odometry keeps the correct sign.
diff_tf.py: This node computes the odometry and broadcasts the transformation between the odometry frame and the robot base frame. Publishing Topics: odom (nav_msgs/Odometry): This publishes the odometry (the current pose and twist of the robot). tf: This provides the transformation between the odometry frame and the robot base link. Subscribing Topics: lwheel (std_msgs/Int16), rwheel (std_msgs/Int16): These are the output values from the left and right encoders of the robot. chefbot_keyboard_teleop.py: This node sends the Twist command using controls from the keyboard. Publishing Topics: cmd_vel_mux/input/teleop (geometry_msgs/Twist): This publishes the Twist messages generated from keyboard commands. After discussing the nodes in the chefbot_bringup package, we will look at the functions of the launch files. Understanding ChefBot ROS launch files We will discuss the function of each launch file in the chefbot_bringup package. robot_standalone.launch: The main function of this launch file is to start nodes such as launchpad_node, pid_velocity, diff_tf, and twist_to_motors to get sensor values from the robot and to send command velocities to the robot. keyboard_teleop.launch: This launch file will start teleoperation by using the keyboard; it starts the chefbot_keyboard_teleop.py node to perform the keyboard teleoperation. 3dsensor.launch: This file will launch the Kinect OpenNI drivers and start publishing the RGB and depth streams. It will also start the depth-stream-to-laser-scanner node, which will convert the point cloud to laser scan data. gmapping_demo.launch: This launch file will start the SLAM gmapping nodes to map the area surrounding the robot. amcl_demo.launch: Using AMCL, the robot can localize and predict where it stands on the map. After localizing on the map, we can command the robot to move to a position on the map; the robot can then move autonomously from its current position to the goal position. view_robot.launch: This launch file displays the robot URDF model in RViz. view_navigation.launch: This launch file displays all the sensors necessary for the navigation of the robot. Summary This article was about assembling the hardware of ChefBot and integrating the embedded and ROS code into the robot to perform autonomous navigation. We assembled the individual sections of the robot and connected the prototype PCB that we designed for the robot. This consists of the Launchpad board, the motor driver, the level shifter, the ultrasonic sensor, and the IMU. The Launchpad board was flashed with the new embedded code, which can interface all sensors in the robot and can send or receive data from the PC. After discussing the embedded code, we wrote the ROS Python driver node to interface the serial data from the Launchpad board. After interfacing the Launchpad board, we computed the odometry data and the differential drive control using nodes from the differential_drive package available in the ROS repository. We interfaced the robot to the ROS navigation stack. This enables the robot to perform SLAM and AMCL for autonomous navigation. We also discussed SLAM and AMCL, created a map, and executed autonomous navigation on the robot. Resources for Article: Further resources on this subject: Learning Selenium Testing Tools with Python [article] Prototyping Arduino Projects using Python [article] Python functions – Avoid repeating code [article]
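As mentioned in the node descriptions above, the following standalone Python sketch shows the arithmetic behind twist_to_motors.py and pid_velocity.py. It is a simplified, ROS-free illustration rather than the actual differential_drive package code; the names twist_to_wheel_velocities and SimplePid are made up for the example, while base_width, Kp, Ki, Kd, out_min, and out_max follow the parameters listed earlier.

# Simplified, ROS-free sketch of the differential-drive math described above.
# Not the actual differential_drive package code.

def twist_to_wheel_velocities(linear_x, angular_z, base_width=0.1):
    """Convert a Twist (m/s, rad/s) into left/right wheel target velocities (m/s)."""
    right = linear_x + angular_z * base_width / 2.0
    left = linear_x - angular_z * base_width / 2.0
    return left, right

class SimplePid(object):
    """Minimal PID step, clamped the way out_min/out_max bound motor_cmd."""
    def __init__(self, kp=10.0, ki=10.0, kd=0.001, out_min=-255.0, out_max=255.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, target_velocity, measured_velocity, dt):
        error = target_velocity - measured_velocity
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt if dt > 0 else 0.0
        self.previous_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp the command to the allowed motor range
        return max(self.out_min, min(self.out_max, output))

if __name__ == "__main__":
    left, right = twist_to_wheel_velocities(linear_x=0.2, angular_z=0.5, base_width=0.3)
    print("wheel targets: left=%.3f m/s, right=%.3f m/s" % (left, right))
    pid = SimplePid()
    print("motor command:", pid.update(target_velocity=right, measured_velocity=0.0, dt=0.05))
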

Packt
02 Jun 2015
10 min read

Map/Reduce API

 In this article by Wagner Roberto dos Santos, author of the book Infinispan Data Grid Platform Definitive Guide, we will see the usage of Map/Reduce API and its introduction in Infinispan. Using the Map/Reduce API According to Gartner, from now on in-memory data grids and in-memory computing will be racing towards mainstream adoption and the market for this kind of technology is going to reach 1 billion by 2016. Thinking along these lines, Infinispan already provides a MapReduce API for distributed computing, which means that we can use Infinispan cache to process all the data stored in heap memory across all Infinispan instances in parallel. If you're new to MapReduce, don't worry, we're going to describe it in the next section in a way that gets you up to speed quickly. An introduction to Map/Reduce MapReduce is a programming model introduced by Google, which allows for massive scalability across hundreds or thousands of servers in a data grid. It's a simple concept to understand for those who are familiar with distributed computing and clustered environments for data processing solutions. You can find the paper about MapReduce in the following link:http://research.google.com/archive/mapreduce.html The MapReduce has two distinct computational phases; as the name states, the phases are map and reduce: In the map phase, a function called Map is executed, which is designed to take a set of data in a given cache and simultaneously perform filtering, sorting operations, and outputs another set of data on all nodes. In the reduce phase, a function called Reduce is executed, which is designed to reduce the final form of the results of the map phase in one output. The reduce function is always performed after the map phase. Map/Reduce in the Infinispan platform The Infinispan MapReduce model is an adaptation of the Google original MapReduce model. There are four main components in each map reduce task, they are as follows: MapReduceTask: This is a distributed task allowing a large-scale computation to be transparently parallelized across Infinispan cluster nodes. This class provides a constructor that takes a cache whose data will be used as the input for this task. The MapReduceTask orchestrates the execution of the Mapper and Reducer seamlessly across Infinispan nodes. Mapper: A Mapper is used to process each input cache entry K,V. A Mapper is invoked by MapReduceTask and is migrated to an Infinispan node, to transform the K,V input pair into intermediate keys before emitting them to a Collector. Reducer: A Reducer is used to process a set of intermediate key results from the map phase. Each execution node will invoke one instance of Reducer and each instance of the Reducer only reduces intermediate keys results that are locally stored on the execution node. Collator: This collates results from reducers executed on the Infinispan cluster and assembles a final result returned to an invoker of MapReduceTask. The following image shows that in a distributed environment, an Infinispan MapReduceTask is responsible for starting the process for a given cache, unless you specify an onKeys(Object...) 
filter, all available key/value pairs of the cache will be used as input data for the map reduce task:   In the preceding image, the Map/Reduce processes are performing the following steps: The MapReduceTask in the Master Task Node will start the Map Phase by hashing the task input keys and grouping them by the execution node they belong to and then, the Infinispan master node will send a map function and input keys to each node. In each destination, the map will be locally loaded with the corresponding value using the given keys. The map function is executed on each node, resulting in a map< KOut, VOut > object on each node. The Combine Phase is initiated when all results are collected, if a combiner is specified (via combineWith(Reducer<KOut, VOut> combiner) method), the combiner will extract the KOut keys and invoke the reduce phase on keys. Before starting the Reduce Phase, Infinispan will execute an intermediate migration phase, where all intermediate keys and values are grouped. At the end of the Combine Phase, a list of KOut keys are returned to the initial Master Task Node. At this stage, values (VOut) are not returned, because they are not needed in the master node. At this point, Infinispan is ready to start the Reduce Phase; the Master Task Node will group KOut keys by the execution node and send a reduce command to each node where keys are hashed. The reducer is invoked and for each KOut key, the reducer will grab a list of VOut values from a temporary cache belonging to MapReduceTask, wraps it with an iterator, and invokes the reduce method on it. Each reducer will return one map with the KOut/VOut result values. The reduce command will return to the Master Task Node, which in turn will combine all resulting maps into one single map and return it as a result of MapReduceTask. Sample application – find a destination Now that we have seen what map and reduce are, and how the Infinispan model works, let's create a Find Destination application that illustrates the concepts we have discussed. To demonstrate how CDI works, in the last section, we created a web service that provides weather information. Now, based on this same weather information service, let's create a map/reduce engine for the best destination based on simple business rules, such as destination type (sun destination, golf, skiing, and so on). 
So, the first step is to create the WeatherInfo cache object that will hold information about the weather:

public class WeatherInfo implements Serializable {
  private static final long serialVersionUID = -3479816816724167384L;
  private String country;
  private String city;
  private Date day;
  private Double temperature;
  private Double temperatureMax;
  private Double temperatureMin;

  public WeatherInfo(String country, String city, Date day, Double temp) {
    this(country, city, day, temp, temp + 5, temp - 5);
  }

  public WeatherInfo(String country, String city, Date day, Double temp,
      Double tempMax, Double tempMin) {
    super();
    this.country = country;
    this.city = city;
    this.day = day;
    this.temperature = temp;
    this.temperatureMax = tempMax;
    this.temperatureMin = tempMin;
  }

  // Getters and setters omitted

  @Override
  public String toString() {
    return "{WeatherInfo:{ country:" + country + ", city:" + city + ", day:" + day
        + ", temperature:" + temperature + ", temperatureMax:" + temperatureMax
        + ", temperatureMin:" + temperatureMin + "}";
  }
}

Now, let's create an enum object to define the type of destination a user can select and the rules associated with each destination. To keep it simple, we are going to have only two destinations, sun and skiing. The temperature value will be used to evaluate if the destination can be considered the corresponding type:

public enum DestinationTypeEnum {
  SUN(18d, "Sun Destination"), SKIING(-5d, "Skiing Destination");

  private Double temperature;
  private String description;

  DestinationTypeEnum(Double temperature, String description) {
    this.temperature = temperature;
    this.description = description;
  }

  public Double getTemperature() {
    return temperature;
  }

  public String getDescription() {
    return description;
  }
}

Now it's time to create the Mapper class—this class is going to be responsible for validating whether each cache entry fits the destination requirements. To define the DestinationMapper class, just implement the Mapper<KIn, VIn, KOut, VOut> interface and write your algorithm in the map method:

public class DestinationMapper implements
    Mapper<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> {

  private static final long serialVersionUID = -3418976303227050166L;

  public void map(String key, WeatherInfo weather,
      Collector<DestinationTypeEnum, WeatherInfo> c) {
    if (weather.getTemperature() >= SUN.getTemperature()) {
      c.emit(SUN, weather);
    } else if (weather.getTemperature() <= SKIING.getTemperature()) {
      c.emit(SKIING, weather);
    }
  }
}

The role of the Reducer class in our application is to return the best destination among all destinations, based on the highest temperature for sun destinations and the lowest temperature for skiing destinations, returned by the mapping phase. To implement the Reducer class, you'll need to implement the Reducer<KOut, VOut> interface:

public class DestinationReducer implements Reducer<DestinationTypeEnum, WeatherInfo> {

  private static final long serialVersionUID = 7711240429951976280L;

  public WeatherInfo reduce(DestinationTypeEnum key, Iterator<WeatherInfo> it) {
    WeatherInfo bestPlace = null;
    if (key.equals(SUN)) {
      while (it.hasNext()) {
        WeatherInfo w = it.next();
        if (bestPlace == null || w.getTemperature() > bestPlace.getTemperature()) {
          bestPlace = w;
        }
      }
    } else { // Best for skiing
      while (it.hasNext()) {
        WeatherInfo w = it.next();
        if (bestPlace == null || w.getTemperature() < bestPlace.getTemperature()) {
          bestPlace = w;
        }
      }
    }
    return bestPlace;
  }
}

Finally, to execute our sample application, we can create a JUnit test case with the MapReduceTask.
But first, we have to create a couple of cache entries before executing the task, which we are doing in the setUp() method: public class WeatherInfoReduceTest {private static final Log logger =LogFactory.getLog(WeatherInfoReduceTest.class);private Cache<String, WeatherInfo> weatherCache;@Beforepublic void setUp() throws Exception {Date today = new Date();EmbeddedCacheManager manager = new DefaultCacheManager();Configuration config = new ConfigurationBuilder().clustering().cacheMode(CacheMode.LOCAL).build();manager.defineConfiguration("weatherCache", config);weatherCache = manager.getCache("weatherCache");WeatherInfoweatherCache.put("1", new WeatherInfo("Germany", "Berlin",today, 12d));weatherCache.put("2", new WeatherInfo("Germany","Stuttgart", today, 11d));weatherCache.put("3", new WeatherInfo("England", "London",today, 8d));weatherCache.put("4", new WeatherInfo("England","Manchester", today, 6d));weatherCache.put("5", new WeatherInfo("Italy", "Rome",today, 17d));weatherCache.put("6", new WeatherInfo("Italy", "Napoli",today, 18d));weatherCache.put("7", new WeatherInfo("Ireland", "Belfast",today, 9d));weatherCache.put("8", new WeatherInfo("Ireland", "Dublin",today, 7d));weatherCache.put("9", new WeatherInfo("Spain", "Madrid",today, 19d));weatherCache.put("10", new WeatherInfo("Spain", "Barcelona",today, 21d));weatherCache.put("11", new WeatherInfo("France", "Paris",today, 11d));weatherCache.put("12", new WeatherInfo("France","Marseille", today, -8d));weatherCache.put("13", new WeatherInfo("Netherlands","Amsterdam", today, 11d));weatherCache.put("14", new WeatherInfo("Portugal", "Lisbon",today, 13d));weatherCache.put("15", new WeatherInfo("Switzerland","Zurich", today, -12d));}@Testpublic void execute() {MapReduceTask<String, WeatherInfo, DestinationTypeEnum,WeatherInfo> task = new MapReduceTask<String, WeatherInfo,DestinationTypeEnum, WeatherInfo>(weatherCache);task.mappedWith(new DestinationMapper()).reducedWith(newDestinationReducer());Map<DestinationTypeEnum, WeatherInfo> destination =task.execute();assertNotNull(destination);assertEquals(destination.keySet().size(), 2);logger.info("********** PRINTING RESULTS FOR WEATHER CACHE*************");for (DestinationTypeEnum destinationType :destination.keySet()){logger.infof("%s - Best Place: %sn",destinationType.getDescription(),destination.get(destinationType));}}} When we execute the application, you should expect to see the following output: INFO: Skiing DestinationBest Place: {WeatherInfo:{ country:Switzerland, city:Zurich,day:Mon Jun 02 19:42:22 IST 2014, temp:-12.0, tempMax:-7.0,tempMin:-17.0}INFO: Sun DestinationBest Place: {WeatherInfo:{ country:Spain, city:Barcelona, day:MonJun 02 19:42:22 IST 2014, temp:21.0, tempMax:26.0, tempMin:16.0} Summary In this article, you learned how to work with applications in modern distributed server architecture, using the Map Reduce API, and how it can abstract parallel programming into two simple primitives, the map and reduce methods. We have seen a sample use case Find Destination that demonstrated how use map reduce almost in real time. Resources for Article: Further resources on this subject: MapReduce functions [Article] Hadoop and MapReduce [Article] Introduction to MapReduce [Article]
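To recap the map and reduce phases described in this article in a framework-neutral way, here is a short Python sketch of the same Find Destination logic running over a plain dictionary. It mirrors the flow that Infinispan distributes across nodes (map each cache entry, group the intermediate keys, then reduce each group), but it is only an illustration and not the Infinispan API; the thresholds and helper names are assumptions taken from the example above.

# Framework-free illustration of the map -> group -> reduce flow described above.
# The cache is a plain dict; Infinispan would distribute these phases across nodes.

SUN_THRESHOLD = 18.0
SKIING_THRESHOLD = -5.0

def map_entry(city, temperature):
    """Map phase: emit (destination_type, (city, temperature)) pairs."""
    if temperature >= SUN_THRESHOLD:
        yield ("SUN", (city, temperature))
    elif temperature <= SKIING_THRESHOLD:
        yield ("SKIING", (city, temperature))

def reduce_group(destination_type, values):
    """Reduce phase: pick the hottest sun destination or the coldest skiing one."""
    if destination_type == "SUN":
        return max(values, key=lambda v: v[1])
    return min(values, key=lambda v: v[1])

if __name__ == "__main__":
    cache = {"Berlin": 12.0, "Rome": 17.0, "Barcelona": 21.0,
             "Zurich": -12.0, "Marseille": -8.0, "Madrid": 19.0}

    # Group the intermediate keys emitted by the map phase (the combine/shuffle step).
    groups = {}
    for city, temperature in cache.items():
        for key, value in map_entry(city, temperature):
            groups.setdefault(key, []).append(value)

    # Reduce each group to a single result, as the Reducer does per key.
    for key, values in groups.items():
        print(key, "->", reduce_group(key, values))
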
Packt
02 Jun 2015
22 min read

Developing Location-based Services with Neo4j

In this article by Ankur Goel, author of the book, Neo4j Cookbook, we will cover the following recipes: Installing the Neo4j Spatial extension Importing the Esri shapefiles Importing the OpenStreetMap files Importing data using the REST API Creating a point layer using the REST API Finding geometries within the bounding box Finding geometries within a distance Finding geometries within a distance using Cypher By definition, any database that is optimized to store and query the data that represents objects defined in a geometric space is called a spatial database. Although Neo4j is primarily a graph database, due to the importance of geospatial data in today's world, the spatial extension has been introduced in Neo4j as an unmanaged extension. It gives you most of the facilities, which are provided by common geospatial databases along with the power of connectivity through edges, which Neo4j, as a graph database, provides. In this article, we will take a look at some of the widely used use cases of Neo4j as a spatial database, and you will learn how typical geospatial operations can be performed on it. Before proceeding further, you need to install Neo4j on your system using one of the recipies that follow here. The installation will depend on your system type. (For more resources related to this topic, see here.) Single node installation of Neo4j over Linux Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications as well. The following recipe will show you how to set up a single instance of Neo4j over the Linux operating system. Getting ready Perform the following steps to get started with this recipe: Download the community edition of Neo4j from http://www.neo4j.org/download for the Linux platform: $ wget http://dist.neo4j.org/neo4j-community-2.2.0-M02-unix.tar.gz Check whether JDK/JRE is installed for your operating system or not by typing this in the shell prompt: $ echo $JAVA_HOME If this command throws no output, install Java for your Linux distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the Linux operating system, which is simple, as shown in the following steps: Extract the TAR file using the following command: $ tar –zxvf neo4j-community-2.2.0-M02-unix.tar.gz $ ls Go to the bin directory under the root folder: $ cd <neo4j-community-2.2.0-M02>/bin/ Start the Neo4j graph database server: $ ./neo4j start Check whether Neo4j is running or not using the following command: $ ./neo4j status Neo4j can also be monitored using the web console. Open http://<ip>:7474/webadmin, as shown in the following screenshot: The preceding diagram is a screenshot of the web console of Neo4j through which the server can be monitored and different Cypher queries can be run over the graph database. How it works... Neo4j comes with prebuilt binaries over the Linux operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. See also During installation, you may face several kind of issues, such as max open files and so on. For more information, check out http://neo4j.com/docs/stable/server-installation.html#linux-install. Single node installation of Neo4j over Windows Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications. 
The following recipe will show you how to set up a single instance of Neo4j over the Windows operating system. Getting ready Perform the following steps to get started with this recipe: Download the Windows installer from http://www.neo4j.org/download. This has both 32- and 64-bit prebuilt binaries Check whether Java is installed for the operating system or not by typing this in the cmd prompt: echo %JAVA_HOME% If this command throws no output, install Java for your Windows distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the Windows operating system, which is simple, as shown here: Run the installer by clicking on the downloaded file: The preceding screenshot shows the Windows installer running. After the installation is complete, when you run the software, it will ask for the database location. Carefully choose the location as the entire graph database will be stored in this folder: The preceding screenshot shows the Windows installer asking for the graph database's location. The Neo4j browser can be opened by entering http://localhost:7474/ in the browser. The following screenshot depicts Neo4j started over the Windows platform: How it works... Neo4j comes with prebuilt binaries over the Windows operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. See also During installation, you might face several kinds of issues such as max open files and so on. For more information, check out http://neo4j.com/docs/stable/server-installation.html#windows-install. Single node installation of Neo4j over Mac OS X Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as in a mode and can also be embedded inside applications. The following recipe will show you how to set up a single instance of Neo4j over the OS X operating system. Getting ready Perform the following steps to get started with this recipe: Download the binary version of Neo4j from http://www.neo4j.org/download for the Mac OS X platform and the community edition as shown in the following command: $ wget http://dist.neo4j.org/neo4j-community-2.2.0-M02-unix.tar.gz Check whether Java is installed for the operating system or not by typing this over the cmd prompt: $ echo $JAVA_HOME If this command throws no output, install Java for your Mac OS X distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the OS X operating system, which is very simple, as shown in the following steps: Extract the TAR file using the following command: $ tar –zxvf neo4j-community-2.2.0-M02-unix.tar.gz $ ls Go to the bin directory under the root folder: $ cd <neo4j-community-2.2.0-M02>/bin/ Start the Neo4j graph database server: $ ./neo4j start Check whether Neo4j is running or not using the following command: $ ./neo4j status How it works... Neo4j comes with prebuilt binaries over the OS X operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. There's more… Neo4j over Mac OS X can also be installed using brew, which has been explained here. Run the following commands over the shell: $ brew update $ brew install neo4j After this, Neo4j can be started using the start option with the Neo4j command: $ neo4j start This will start the Neo4j server, which can be accessed from the default URL (http://localhost:7474). 
The installation can be reached using the following commands: $ cd /usr/local/Cellar/neo4j/ $ cd {NEO4J_VERSION}/libexec/ You can learn more about OS X installation from http://neo4j.com/docs/stable/server-installation.html#osx-install. Due to the limitation of content that can provided in this article, we assume you would already know how to perform the basic operations using Neo4j such as creating a graph, importing data from different formats into Neo4j, the common configurations used for Neo4j. Installing the Neo4j Spatial extension Neo4j Spatial is a library of utilities for Neo4j that facilitates the enabling of spatial operations on the data. Even on the existing data, geospatial indexes can be added and many geospatial operations can be performed on it. In this recipe, you will learn how to install the Neo4j Spatial extension. Getting ready The following steps will get you started with this recipe: Install Neo4j using the earlier recipes in this article. Install the dependencies listed in the pom.xml file for this project from https://github.com/neo4j-contrib/spatial/blob/master/pom.xml. Install Maven using the following command for your operating system: For Debian systems: apt-get install maven For Redhat/Centos systems: yum install apache-maven To install on a Windows-based system, please refer to https://maven.apache.org/guides/getting-started/windows-prerequisites.html. How to do it... Now, let's install the Neo4j Spatial plugin, which is very simple to do, by following these steps: Clone the GitHub repository for spatial extension: git clone git://github.com/neo4j/spatial spatial Move into the spatial directory: cd spatial Build the code using Maven. This will download all the dependencies, compile the library, run the tests, and install the artifacts in the local repository: mvn clean install Move the built artifact to the Neo4j plugin directory: unzip target/neo4j/neo4j-spatial-0.11-SNAPSHOT-server-plugin.zip $NEO4J_ROOT_DIR/plugins/ Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart Check whether the Neo4j Spatial plugin is properly installed or not: curl –L http://<neo4j_server_ip>:<port>/db/data If you are using Neo4j 2.2 or higher, then use the following command: curl --user neo4j:<password> http://localhost:7474/db/data/ The output will look like what is shown in the following screenshot, which shows the Neo4j Spatial plugin installed: How it works... Neo4j Spatial is a library of utilities that helps perform geospatial operations on the dataset, which is present in the Neo4j graph database. You can add geospatial indexes on the existing data and perform operations, such as data within a specified region or within some distance of point of interest. Neo4j Spatial comes as an unmanaged extension, which can be easily installed as well as removed from Neo4j. The extension does not interfere with any of the core functionality. There's more… To read more about Neo4j Spatial extension, we encourage users to visit the GitHub repository at https://github.com/neo4j-contrib/spatial. Also, it will be good to read about the Neo4j unmanaged extension in general (http://neo4j.com/docs/stable/server-unmanaged-extensions.html). Importing the Esri shapefiles The shapefile format is a popular geospatial vector data format for the Geographic Information System (GIS) software. It is developed and regulated by Esri as an open specification for data interoperability among Esri. 
It is very popular among GIS products, and many times, the data format is in Esri shapefiles. The main file is the .shp file, which contains the geometry data. The binary data file consists of a single, fixed-length header followed by variable-length data records. In this recipe, you will learn how to import the Esri shapefiles in the Neo4j graph database. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... Since the Esri shapefile format is, by default, supported by the Neo4j Spatial extension, it is very easy to import data using the Java API from it using the following steps: Download a sample .shp file from http://www.statsilk.com/maps/download-free-shapefile-maps. Execute the following commands: wget http://biogeo.ucdavis.edu/data/diva/adm/AFG_adm.zip unzip AFG_adm.zip mv AFG_adm1.* /data The ShapefileImporter method lets you import data from the Esri shapefile using the following code: GraphDatabaseService esri_database = new GraphDatabaseFactory().newEmbeddedDatabase(storeDir); try {    ShapefileImporter importer = new ShapefileImporter(esri_database); importer.importFile("/data/AFG_adm1.shp", "layer_afganistan");        } finally {            esri_database.shutdown(); } Using similar code, we can import multiple SHP files into the same layer or different layers, as shown in the following code snippet: File dir = new File("/data");      FilenameFilter filter = new FilenameFilter() {          public boolean accept(File dir, String name) {      return name.endsWith(".shp"); }};   File[] listOfFiles = dir.listFiles(filter); for (final File fileEntry : listOfFiles) {      System.out.println("FileEntry Directory "+fileEntry);    try {    importer.importFile(fileEntry.toString(), "layer_afganistan"); } catch(Exception e){    esri_database.shutdown(); }} How it works... The Neo4j Spatial extension natively supports the import of data in the shapefile format. Using the ShapefileImporter method, any SHP file can be easily imported into Neo4j. The ShapefileImporter method takes two arguments: the first argument is the path to the SHP files and the second is the layer in which it should be imported. There's more… We will encourage you to read more about shapefiles and layers in general; for this, please visit the following URLs for more information: http://en.wikipedia.org/wiki/Shapefile http://wiki.openstreetmap.org/wiki/Shapefiles http://www.gdal.org/drv_shapefile.html Importing the OpenStreetMap files OpenStreetMap is a powerhouse of data when it comes to geospatial data. It is a collaborative project to create a free, editable map of the world. OpenStreetMap provides geospatial data in the .osm file format. To read more about .osm files in general, check out http://wiki.openstreetmap.org/wiki/.osm. In this recipe, you will learn how to import the .osm files in the Neo4j graph database. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... 
Since the OSM file format is, by default, supported by the Neo4j Spatial extension, it is very easy to import data from it using the following steps: Download one sample .osm file from http://wiki.openstreetmap.org/wiki/Planet.osm#Downloading. Execute the following commands: wget http://download.geofabrik.de/africa-latest.osm.bz2 bunzip2 africa-latest.osm.bz2 mv africa-latest.osm /data The importfile method lets you import data from the .osm file, as shown in the following code snippet: OSMImporter importer = new OSMImporter("africa"); try {    importer.importFile(osm_database, "/data/botswana-latest.osm", false, 5000, true); } catch(Exception e){    osm_database.shutdown(); } importer.reIndex(osm_database,10000); Using similar code, we can import multiple OSM files into the same layer or different layers, as shown here: File dir = new File("/data");      FilenameFilter filter = new FilenameFilter() {          public boolean accept(File dir, String name) {      return name.endsWith(".osm"); }}; File[] listOfFiles = dir.listFiles(filter); for (final File fileEntry : listOfFiles) {      System.out.println("FileEntry Directory "+fileEntry);    try {importer.importFile(osm_database, fileEntry.toString(), false, 5000, true); importer.reIndex(osm_database,10000); } catch(Exception e){    osm_database.shutdown(); } How it works... This is slightly more complex as it requires two phases: the first phase requires a batch inserter performing insertions into the database, and the second phase requires reindexing of the database with the spatial indexes. There's more… We will encourage you to read more about the OSM file and the batch inserter in general; for this, visit the following URLs: http://en.wikipedia.org/wiki/OpenStreetMap http://wiki.openstreetmap.org/wiki/OSM_file_formats http://neo4j.com/api_docs/2.0.2/org/neo4j/unsafe/batchinsert/BatchInserter.html Importing data using the REST API The recipes that you have learned until now consist of Java code, which is used to import spatial data into Neo4j. However, by using any other programming language, such as Python or Ruby, spatial data can be easily imported into Neo4j using the REST interface. In this recipe, you will learn how to import geospatial data using the REST interface. Getting ready Perform the following steps to get started with this recipe: Install Neo4j using the earlier recipies in this article. Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article. Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart How to do it... Using the REST interface is a very simple three-stage process to import the geospatial data into the Neo4j graph database server. 
For the sake of simplicity, Python code has been used to explain this recipe, although you can also use curl:

Create the spatial index, as shown in the following code:

# Create geom index
import json
import urllib2
url = "http://<neo4j_server_ip>:<port>/db/data/index/node/"
payload = {
    "name" : "geom",
    "config" : {
        "provider" : "spatial",
        "geometry_type" : "point",
        "lat" : "lat",
        "lon" : "lon"
    }
}
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(payload))

Create nodes with lat/lon and data as properties, as shown in the following code:

url = "http://<neo4j_server_ip>:<port>/db/data/node"
payload = {'lon': 38.6, 'lat': 67.88, 'name': 'abc'}
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(payload))
node = json.loads(response.read())['self']

Add the previously created node to the geospatial index, as shown in the following code snippet:

# add node to geom index
url = "http://<neo4j_server_ip>:<port>/db/data/index/node/geom"
payload = {'value': 'dummy', 'key': 'dummy', 'uri': node}
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(payload))
print response.read()

The data will look like what is shown in the following screenshot after the addition of a few more nodes; this screenshot depicts the Neo4j Spatial data that has been imported: The following screenshot depicts the properties of a single node, which has been imported into Neo4j:

How it works...

Adding geospatial data using the REST API is a three-step process, listed as follows:
Create a geospatial index using an endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/index/node/
Add a node to the Neo4j graph database using an endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/node
Add the created node to the geospatial index using the endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/index/node/geom

There's more…

We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).

Creating a point layer using the REST API

In this recipe, you will learn how to create a point layer using the REST API interface.

Getting ready

Perform the following steps to get started with this recipe:
Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer endpoint to create a simple point layer. Let's add a simple point layer, as shown in the following code:

import json
import requests
url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer"
headers = {'Content-Type': 'application/json'}
payload = {
    "layer" : "geom",
    "lat"   : "lat",
    "lon"   : "lon"
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of the create point in layer query:

How it works...

Creating a point layer is based on the REST interface that the Neo4j Spatial plugin already provides.
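To make this call easier to reproduce end to end, here is a small self-contained sketch. It first checks that the Spatial plugin is visible in the server's discovery document and then creates the layer. This is only a sketch under assumptions: the host, port, credentials (needed on Neo4j 2.2 or higher), and layer name are placeholders, and it assumes the plugin was installed as described in the earlier recipe.

import json
import requests

BASE = "http://localhost:7474"          # placeholder host and port
AUTH = ("neo4j", "<password>")          # only required for Neo4j 2.2+
HEADERS = {"Content-Type": "application/json"}

# 1. The discovery document should list SpatialPlugin under 'extensions'
#    when the plugin has been installed correctly.
discovery = requests.get(BASE + "/db/data/", auth=AUTH, headers=HEADERS)
discovery.raise_for_status()
if "SpatialPlugin" not in discovery.json().get("extensions", {}):
    raise RuntimeError("Neo4j Spatial plugin does not appear to be installed")

# 2. Create a simple point layer named 'geom' that reads the 'lat'/'lon'
#    properties of the nodes added to it.
payload = {"layer": "geom", "lat": "lat", "lon": "lon"}
r = requests.post(BASE + "/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer",
                  data=json.dumps(payload), auth=AUTH, headers=HEADERS)
r.raise_for_status()
print(r.json())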
There's more…

We encourage you to read more about spatial REST interfaces in general; to do this, visit http://neo4j-contrib.github.io/spatial/.

Finding geometries within the bounding box

In this recipe, you will learn how to find all the geometries within a bounding box using the spatial REST interface.

Getting ready

Perform the following steps to get started with this recipe:
Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the following endpoint to find all the geometries within the bounding box: http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox.

Let's find all the geometries within the bounding box defined by "minx" : 0.0, "maxx" : 100.0, "miny" : 0.0, and "maxy" : 100.0:

url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox"
headers = {'Content-Type': 'application/json'}
payload = {
    "layer" : "geom",
    "minx" : 0.0,
    "maxx" : 100.0,
    "miny" : 0.0,
    "maxy" : 100.0
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of the bounding box query:

How it works...

Finding geometries in the bounding box is based on the REST interface that the Neo4j Spatial plugin provides. The output of the REST call contains an array of nodes, each containing the node's ID, lat/lon, and its incoming/outgoing relationships. In the preceding output, you can see the node with ID 54 returned as the output.

There's more…

We encourage you to read more about spatial REST interfaces in general; to do this, visit http://neo4j-contrib.github.io/spatial/.

Finding geometries within a distance

In this recipe, you will learn how to find all the geometries within a distance using the spatial REST interface.

Getting ready

Perform the following steps to get started with this recipe:
Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command: $NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the following endpoint to find all the geometries within a certain distance: http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance.

Let's find all the geometries within 500 km of the point "pointX" : -114.0117, "pointY" : 46.8625:

url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance"
headers = {'Content-Type': 'application/json'}
payload = {
    "layer" : "geom",
    "pointY" : 46.8625,
    "pointX" : -114.0117,
    "distanceInKm" : 500
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of a withinDistance query:

How it works...

Finding geometries within a distance is based on the REST interface that the Neo4j Spatial plugin provides. The output of the REST call contains an array of nodes, each containing the node's ID, lat/lon, and its incoming/outgoing relationships. In the preceding output, we can see the node with ID 71 returned as the output.

There's more…

We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).
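Before moving on to the Cypher variant, here is a consolidated sketch that strings the earlier REST steps together: it creates the geom index, adds one point node to it, and then asks for everything within 500 km of that point. Only endpoints already shown in this article are used; the host, port, and credentials are placeholders and the helper function is just a convenience for this sketch.

import json
import requests

BASE = "http://localhost:7474"      # placeholder host and port
AUTH = ("neo4j", "<password>")      # only required for Neo4j 2.2+
HEADERS = {"Content-Type": "application/json"}

def post(path, payload):
    # POST JSON to the server and return the decoded response
    r = requests.post(BASE + path, data=json.dumps(payload),
                      auth=AUTH, headers=HEADERS)
    r.raise_for_status()
    return r.json()

# 1. Create the spatial index named 'geom'
post("/db/data/index/node/", {
    "name": "geom",
    "config": {"provider": "spatial", "geometry_type": "point",
               "lat": "lat", "lon": "lon"}
})

# 2. Create a node carrying lat/lon properties
node = post("/db/data/node", {"lon": -114.0117, "lat": 46.8625, "name": "abc"})

# 3. Add the node to the geom index
post("/db/data/index/node/geom",
     {"value": "dummy", "key": "dummy", "uri": node["self"]})

# 4. Query everything within 500 km of the point
results = post("/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance",
               {"layer": "geom", "pointY": 46.8625, "pointX": -114.0117,
                "distanceInKm": 500})
print(len(results), "geometries found")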
Finding geometries within a distance using Cypher

In this recipe, you will learn how to find all the geometries within a distance using a Cypher query.

Getting ready

Perform the following steps to get started with this recipe:
Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server: $NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the following endpoint to run the query: http://<neo4j_server_ip>:<port>/db/data/cypher.

Let's find all the geometries within 500 km of the point (46.9163, -114.0905) using a Cypher query:

url = "http://<neo4j_server_ip>:<port>/db/data/cypher"
headers = {'Content-Type': 'application/json'}
payload = {
    "query" : "START n=node:geom('withinDistance:[46.9163, -114.0905, 500.0]') RETURN n"
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of the withinDistance query that uses Cypher:

The following is the Cypher output in the Neo4j console:

How it works...

Cypher comes with a withinDistance index query, which takes three parameters: lat, lon, and the search distance.

There's more…

We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).

Summary

Developing location-based services with Neo4j teaches you about one of the most important aspects of today's data, location, and how to deal with it in Neo4j. You have learnt how to import geospatial data into Neo4j and run queries such as proximity searches, bounding boxes, and so on.

Resources for Article:

Further resources on this subject: Recommender systems dissected Components [article] Working with a Neo4j Embedded Database [article] Differences in style between Java and Scala code [article]

Basic Image Processing

Packt
02 Jun 2015
8 min read
In this article, Ashwin Pajankar, the author of the book, Raspberry Pi Computer Vision Programming, takes us through basic image processing in OpenCV. We will do this with the help of the following topics:

Image arithmetic operations: adding, subtracting, and blending images
Splitting color channels in an image
Negating an image
Performing logical operations on an image

This article is very short and easy to code, with plenty of hands-on activities. (For more resources related to this topic, see here.)

Arithmetic operations on images

In this section, we will have a look at the various arithmetic operations that can be performed on images. Images are represented as matrices in OpenCV, so arithmetic operations on images are similar to arithmetic operations on matrices. Images must be of the same size for you to perform arithmetic operations on them, and these operations are performed on individual pixels.

cv2.add(): This function is used to add two images, where the images are passed as parameters.
cv2.subtract(): This function is used to subtract an image from another. We know that the subtraction operation is not commutative, so cv2.subtract(img1,img2) and cv2.subtract(img2,img1) will yield different results, whereas cv2.add(img1,img2) and cv2.add(img2,img1) will yield the same result, as the addition operation is commutative. Both the images have to be of the same size and type, as explained before.

Check out the following code:

import cv2
img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1)
img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff',1)
cv2.imshow('Image1',img1)
cv2.waitKey(0)
cv2.imshow('Image2',img2)
cv2.waitKey(0)
cv2.imshow('Addition',cv2.add(img1,img2))
cv2.waitKey(0)
cv2.imshow('Image1-Image2',cv2.subtract(img1,img2))
cv2.waitKey(0)
cv2.imshow('Image2-Image1',cv2.subtract(img2,img1))
cv2.waitKey(0)
cv2.destroyAllWindows()

The preceding code demonstrates the usage of arithmetic functions on images. Here's the output window of Image1: Here is the output window of Addition: The output window of Image1-Image2 looks like this: Here is the output window of Image2-Image1:

Blending and transitioning images

The cv2.addWeighted() function calculates the weighted sum of two images. Because of the weight factor, it provides a blending effect to the images. Add the following lines of code before destroyAllWindows() in the previous code listing to see this function in action:

cv2.imshow('Weighted Addition',cv2.addWeighted(img1,0.5,img2,0.5,0))
cv2.waitKey(0)

In the preceding code, we passed the following five arguments to the addWeighted() function:

Img1: This is the first image.
Alpha: This is the weight factor for the first image (0.5 in the example).
Img2: This is the second image.
Beta: This is the weight factor for the second image (0.5 in the example).
Gamma: This is the scalar value (0 in the example).

The output image value is calculated with the following formula:

output = (img1 * alpha) + (img2 * beta) + gamma

This operation is performed on every individual pixel. Here is the output of the preceding code: We can create a film-style transition effect on the two images by using the same function.
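Before building the transition, it can be worth checking that relationship numerically. The following is a small sketch, assuming the same test images as above, that compares cv2.addWeighted() with the weighted sum computed directly in NumPy; apart from rounding differences of at most one intensity level, the two results should match.

import cv2
import numpy as np

img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff', 1)
img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff', 1)

# Weighted sum as computed by OpenCV
blended = cv2.addWeighted(img1, 0.5, img2, 0.5, 0)

# The same weighted sum computed manually, pixel by pixel
manual = img1.astype(np.float64) * 0.5 + img2.astype(np.float64) * 0.5 + 0
manual = np.uint8(np.clip(np.round(manual), 0, 255))

# Largest per-pixel difference between the two results
print(np.max(cv2.absdiff(blended, manual)))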
Check out the output of the following code that creates a smooth image transition from an image to another image: import cv2 import numpy as np import time   img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1) img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff',1)   for i in np.linspace(0,1,40): alpha=i beta=1-alpha print 'ALPHA ='+ str(alpha)+' BETA ='+str (beta) cv2.imshow('Image Transition',    cv2.addWeighted(img1,alpha,img2,beta,0)) time.sleep(0.05) if cv2.waitKey(1) == 27 :    break   cv2.destroyAllWindows() Splitting and merging image colour channels On several occasions, we may be interested in working separately with the red, green, and blue channels. For example, we might want to build a histogram for every channel of an image. Here, cv2.split() is used to split an image into three different intensity arrays for each color channel, whereas cv2.merge() is used to merge different arrays into a single multi-channel array, that is, a color image. The following example demonstrates this: import cv2 img = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1) b,g,r = cv2.split (img) cv2.imshow('Blue Channel',b) cv2.imshow('Green Channel',g) cv2.imshow('Red Channel',r) img=cv2.merge((b,g,r)) cv2.imshow('Merged Output',img) cv2.waitKey(0) cv2.destroyAllWindows() The preceding program first splits the image into three channels (blue, green, and red) and then displays each one of them. The separate channels will only hold the intensity values of the particular color and the images will essentially be displayed as grayscale intensity images. Then, the program merges all the channels back into an image and displays it. Creating a negative of an image In mathematical terms, the negative of an image is the inversion of colors. For a grayscale image, it is even simpler! The negative of a grayscale image is just the intensity inversion, which can be achieved by finding the complement of the intensity from 255. A pixel value ranges from 0 to 255, and therefore, negation involves the subtracting of the pixel value from the maximum value, that is, 255. The code for the same is as follows: import cv2 img = cv2.imread('/home/pi/book/test_set/4.2.07.tiff') grayscale = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) negative = abs(255-grayscale) cv2.imshow('Original',img) cv2.imshow('Grayscale',grayscale) cv2.imshow('Negative',negative) cv2.waitKey(0) cv2.destroyAllWindows() Here is the output window of Greyscale: Here's the output window of Negative: The negative of a negative will be the original grayscale image. Try this on your own by taking the image negative of the negative again Logical operations on images OpenCV provides bitwise logical operation functions for images. We will have a look at the functions that provide the bitwise logical AND, OR, XOR (exclusive OR), and NOT (inversion) functionality. These functions can be better demonstrated visually with grayscale images. I am going to use barcode images in horizontal and vertical orientation for demonstration. 
Let's have a look at the following code: import cv2 import matplotlib.pyplot as plt   img1 = cv2.imread('/home/pi/book/test_set/Barcode_Hor.png',0) img2 = cv2.imread('/home/pi/book/test_set/Barcode_Ver.png',0) not_out=cv2.bitwise_not(img1) and_out=cv2.bitwise_and(img1,img2) or_out=cv2.bitwise_or(img1,img2) xor_out=cv2.bitwise_xor(img1,img2)   titles = ['Image 1','Image 2','Image 1 NOT','AND','OR','XOR'] images = [img1,img2,not_out,and_out,or_out,xor_out]   for i in xrange(6):    plt.subplot(2,3,i+1)    plt.imshow(images[i],cmap='gray')    plt.title(titles[i])    plt.xticks([]),plt.yticks([]) plt.show() We first read the images in grayscale mode and calculated the NOT, AND, OR, and XOR, functionalities and then with matplotlib, we displayed those in a neat way. We leveraged the plt.subplot() function to display multiple images. Here in the preceding example, we created a grid with two rows and three columns for our images and displayed each image in every part of the grid. You can modify this line and change it to plt.subplot(3,2,i+1) to create a grid with three rows and two columns. Also, we can use the technique without a loop in the following way. For each image, you have to write the following statements. I will write the code for the first image only. Go ahead and write it for the rest of the five images: plt.subplot(2,3,1) , plt.imshow(img1,cmap='gray') , plt.title('Image 1') , plt.xticks([]),plt.yticks([]) Finally, use plt.show() to display. This technique is to avoid the loop when a very small number of images, usually 2 or 3 in number, have to be displayed. The output of this is as follows: Make a note of the fact that the logical NOT operation is the negative of the image. Exercise You may want to have a look at the functionality of cv2.copyMakeBorder(). This function is used to create the borders and paddings for images, and many of you will find it useful for your projects. The exploring of this function is left as an exercise for the readers. You can check the python OpenCV API documentation at the following location: http://docs.opencv.org/modules/refman.html Summary In this article, we learned how to perform arithmetic and logical operations on images and split images by their channels. We also learned how to display multiple images in a grid by using matplotlib. Resources for Article: Further resources on this subject: Raspberry Pi and 1-Wire [article] Raspberry Pi Gaming Operating Systems [article] The Raspberry Pi and Raspbian [article]

Implementing Membership Roles, Permissions, and Features

Packt
02 Jun 2015
34 min read
In this article by Rakhitha Nimesh Ratnayake, author of the book WordPress Web Application Development - Second Edition, we will see how to implement frontend registration and how to create a login form in the frontend. (For more resources related to this topic, see here.) Implementing frontend registration Fortunately, we can make use of the existing functionalities to implement registration from the frontend. We can use a regular HTTP request or AJAX-based technique to implement this feature. In this article, I will focus on a normal process instead of using AJAX. Our first task is to create the registration form in the frontend. There are various ways to implement such forms in the frontend. Let's look at some of the possibilities as described in the following section: Shortcode implementation Page template implementation Custom template implementation Now, let's look at the implementation of each of these techniques. Shortcode implementation Shortcodes are the quickest way to add dynamic content to your pages. In this situation, we need to create a page for registration. Therefore, we need to create a shortcode that generates the registration form, as shown in the following code: add_shortcode( "register_form", "display_register_form" );function display_register_form(){$html = "HTML for registration form";return $html;} Then, you can add the shortcode inside the created page using the following code snippet to display the registration form: [register_form] Pros and cons of using shortcodes Following are the pros and cons of using shortcodes: Shortcodes are easy to implement in any part of your application Its hard to manage the template code assigned using the PHP variables There is a possibility of the shortcode getting deleted from the page by mistake Page template implementation Page templates are a widely used technique in modern WordPress themes. We can create a page template to embed the registration form. Consider the following code for a sample page template: /** Template Name : Registration*/HTML code for registration form Next, we have to copy the template inside the theme folder. Finally, we can create a page and assign the page template to display the registration form. Now, let's look at the pros and cons of this technique. Pros and cons of page templates Following are the pros and cons of page templates: A page template is more stable than shortcode. Generally, page templates are associated with the look of the website rather than providing dynamic forms. The full width page, two-column page, and left sidebar page are some common implementations of page templates. A template is managed separately from logic, without using PHP variables. The page templates depend on the theme and need to be updated on theme switching. Custom template implementation Experienced web application developers will always look to separate business logic from view templates. This will be the perfect technique for such people. In this technique, we will create our own independent templates by intercepting the WordPress default routing process. An implementation of this technique starts from the next section on routing. Building a simple router for a user module Routing is one of the important aspects in advanced application development. We need to figure out ways of building custom routes for specific functionalities. In this scenario, we will create a custom router to handle all the user-related functionalities of our application. 
Let's list the requirements for building a router: All the user-related functionalities should go through a custom URL, such as http://www.example.com/user Registration should be implemented at http://www.example.com/user/register Login should be implemented at http://www.example.com/user/login Activation should be implemented at http://www.example.com/user/activate Make sure to set up your permalinks structure to post name for the examples in this article. If you prefer a different permalinks structure, you will have to update the URLs and routing rules accordingly. As you can see, the user section is common for all the functionalities. The second URL segment changes dynamically based on the functionality. In MVC terms, user acts as the controller and the next URL segment (register, login, and activate) acts as the action. Now, let's see how we can implement a custom router for the given requirements. Creating the routing rules There are various ways and action hooks used to create custom rewrite rules. We will choose the init action to define our custom routes for the user section, as shown in the following code: public function manage_user_routes() {add_rewrite_rule( '^user/([^/]+)/?','index.php?control_action=$matches[1]', 'top' );} Based on the discussed requirements, all the URLs for the user section will follow the /user/custom action pattern. Therefore, we will define the regular expression for matching all the routes in the user section. Redirection is made to the index.php file with a query variable called control_action. This variable will contain the URL segment after the /user segment. The third parameter of the add_rewrite_rule function will decide whether to check this rewrite rule before the existing rules or after them. The value of top will give a higher precedence, while the value of bottom will give a lower precedence. We need to complete two other tasks to get these rewriting rules to take effect: Add query variables to the WordPress query_vars Flush the rewriting rules Adding query variables WordPress doesn't allow you to use any type of variable in the query string. It will check for query variables within the existing list and all other variables will be ignored. Whenever we want to use a new query variable, make sure to add it to the existing list. First, we need to update our constructor with the following filter to customize query variables: add_filter( 'query_vars', array( $this, 'manage_user_routes_query_vars' ) ); This filter on query_vars will allow us to customize the list of existing variables by adding or removing entries from an array. Now, consider the implementation to add a new query variable: public function manage_user_routes_query_vars( $query_vars ) {$query_vars[] = 'control_action';return $query_vars;} As this is a filter, the existing query_vars variable will be passed as an array. We will modify the array by adding a new query variable called control_action and return the list. Now, we have the ability to access this variable from the URL. Flush the rewriting rules Once rewrite rules are modified, it's a must to flush the rules in order to prevent 404 page generation. Flushing existing rules is a time consuming task, which impacts the performance of the application and hence should be avoided in repetitive actions such as init. It's recommended that you perform such tasks in plugin activation or installation as we did earlier in user roles and capabilities. 
So, let's implement the function for flushing rewrite rules on plugin activation: public function flush_application_rewrite_rules() {flush_rewrite_rules();} As usual, we need to update the constructor to include the following action to call the flush_application_rewrite_rules function: register_activation_hook( __FILE__, array( $this,'flush_application_rewrite_rules' ) ); Now, go to the admin panel, deactivate the plugin, and activate the plugin again. Then, go to the URL http://www.example.com/user/login and check whether it works. Unfortunately, you will still get the 404 error for the request. You might be wondering what went wrong. Let's go back and think about the process in order to understand the issue. We flushed the rules on plugin activation. So, the new rules should persist successfully. However, we will define the rules on the init action, which is only executed after the plugin is activated. Therefore, new rules will not be available at the time of flushing. Consider the updated version of the flush_application_rewrite_rules function for a quick fix to our problem: public function flush_application_rewrite_rules() {$this->manage_user_routes();flush_rewrite_rules();} We call the manage_user_routes function on plugin activation, followed by the call to flush_rewrite_rules. So, the new rules are generated before flushing is executed. Now, follow the previous process once again; you won't get a 404 page since all the rules have taken effect. You can get 404 errors due to the modification in rewriting rules and not flushing it properly. In such situations, go to the Permalinks section on the Settings page and click on the Save Changes button to flush the rewrite rules manually. Now, we are ready with our routing rules for user functionalities. It's important to know the existing routing rules of your application. Even though we can have a look at the routing rules from the database, it's difficult to decode the serialized array, as we encountered in the previous section. So, I recommend that you use the free plugin called Rewrite Rules Inspector. You can grab a copy at http://wordpress.org/plugins/rewrite-rules-inspector/. Once installed, this plugin allows you to view all the existing routing rules as well as offers a button to flush the rules, as shown in the following screen: Controlling access to your functions We have a custom router, which handles the URLs of the user section of our application. Next, we need a controller to handle the requests and generate the template for the user. This works similar to the controllers in the MVC pattern. Even though we have changed the default routing, WordPress will look for an existing template to be sent back to the user. Therefore, we need to intercept this process and create our own templates. WordPress offers an action hook called template_redirect for intercepting requests. So, let's implement our frontend controller based on template_redirect. First, we need to update the constructor with the template_redirect action, as shown in the following code: add_action( 'template_redirect', array( $this, 'front_controller' ) ); Now, let's take a look at the implementation of the front_controller function using the following code: public function front_controller() {global $wp_query;$control_action = isset ( $wp_query->query_vars['control_action'] ) ? 
$wp_query->query_vars['control_action'] : ''; ;switch ( $control_action ) {case 'register':do_action( 'wpwa_register_user' );break;}} We will be handling custom routes based on the value of the control_action query variable assigned in the previous section. The value of this variable can be grabbed through the global query_vars array of the $wp_query object. Then, we can use a simple switch statement to handle the controlling based on the action. The first action to consider will be to register as we are in the registration process. Once the control_action query variable is matched with registration, we will call a handler function using do_action. You might be confused why we use do_action in this scenario. So, let's consider the same implementation in a normal PHP application, where we don't have the do_action hook: switch ( $control_action ) {case 'register':$this->register_user();break;} This is the typical scenario where we call a function within the class or in an external class to implement the registration. In the previous code, we called a function within the class, but with the do_action hook instead of the usual function call. The advantages of using the do_action function WordPress action hooks define specific points in the execution process, where we can develop custom functions to modify existing behavior. In this scenario, we are calling the wpwa_register_user function within the class using do_action. Unlike websites or blogs, web applications need to be extendable with future requirements. Think of a situation where we only allow Gmail addresses for user registration. This Gmail validation is not implemented in the original code. Therefore, we need to change the existing code to implement the necessary validations. Changing a working component is considered bad practice in application development. Let's see why it's considered as a bad practice by looking at the definition of the open/closed principle on Wikipedia. "Open/closed principle states "software entities (classes, modules, functions, and so on) should be open for extension, but closed for modification"; that is, such an entity can allow its behavior to be modified without altering its source code. This is especially valuable in a production environment, where changes to the source code may necessitate code reviews, unit tests, and other such procedures to qualify it for use in a product: the code obeying the principle doesn't change when it is extended, and therefore, needs no such effort." WordPress action hooks come to our rescue in this scenario. We can define an action for registration using the add_action function, as shown in the following code: add_action( 'wpwa_register_user', array( $this, 'register_user' ) ); Now, you can implement this action multiple times using different functions. In this scenario, register_user will be our primary registration handler. For Gmail validation, we can define another function using the following code: add_action( 'wpwa_register_user', array( $this, 'validate_gmail_registration') ); Inside this function, we can make the necessary validations, as shown in the following code: public function validate_user(){// Code to validate user// remove registration function if validation failsremove_action( 'wpwa_register_user', array( $this,'register_user' ) );} Now, the validate_user function is executed before the primary function. So, we can remove the primary registration function if something goes wrong in validation. 
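To make the pattern concrete, here is one way the Gmail-only rule mentioned earlier could be written. This is only an illustrative sketch and not code from the book: the function name, the regular expression, and the decision to simply detach the primary handler on failure are assumptions; a complete implementation would also surface an error message to the user.

public function validate_gmail_registration() {
    $user_email = isset( $_POST['wpwa_email'] ) ? $_POST['wpwa_email'] : '';

    // Allow only Gmail addresses to register
    if ( ! preg_match( '/@gmail\.com$/i', $user_email ) ) {
        // Validation failed, so detach the primary registration handler;
        // it will not run for this request
        remove_action( 'wpwa_register_user', array( $this, 'register_user' ) );
    }
}

Because both callbacks are attached to the same wpwa_register_user action with the default priority, they run in the order they were added, so registering the validation callback before register_user (or giving it a lower priority number) keeps the check ahead of the actual registration.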
With this technique, we have the capability of adding new functionalities as well as changing existing functionalities without affecting the already written code. We have implemented a simple controller, which can be quite effective in developing web application functionalities. In the following sections, we will continue the process of implementing registration on the frontend with custom templates. Creating custom templates Themes provide a default set of templates to cater to the existing behavior of WordPress. Here, we are trying to implement a custom template system to suit web applications. So, our first option is to include the template files directly inside the theme. Personally, I don't like this option due to two possible reasons: Whenever we switch the theme, we have to move the custom template files to a new theme. So, our templates become theme dependent. In general, all existing templates are related to CMS functionality. Mixing custom templates with the existing ones becomes hard to manage. As a solution to these concerns, we will implement the custom templates inside the plugin. First, create a folder inside the current plugin folder and name it as templates to get things started. Designing the registration form We need to design a custom form for frontend registration containing the default header and footer. The whole content area will be used for the registration and the default sidebar will be omitted for this screen. Create a PHP file called register-template.php inside the templates folder with the following code: <?php get_header(); ?><div id="wpwa_custom_panel"><?phpif( isset($errors) && count( $errors ) > 0) {foreach( $errors as $error ){echo '<p class="wpwa_frm_error">'. $error .'</p>';}}?>HTML Code for Form</div><?php get_footer(); ?> We can include the default header and footer using the get_header and get_footer functions, respectively. After the header, we will include a display area for the error messages generated in registration. Then, we have the HTML form, as shown in the following code: <form id='registration-form' method='post' action='<?php echoget_site_url() . '/user/register'; ?>'><ul><li><label class='wpwa_frm_label'><?php echo__('Username','wpwa'); ?></label><input class='wpwa_frm_field' type='text'id='wpwa_user' name='wpwa_user' value='' /></li><li><label class='wpwa_frm_label'><?php echo __('Email','wpwa'); ?></label><input class='wpwa_frm_field' type='text'id='wpwa_email' name='wpwa_email' value='' /></li><li><label class='wpwa_frm_label'><?php echo __('UserType','wpwa'); ?></label><select class='wpwa_frm_field' name='wpwa_user_type'><option <?php echo __('Follower','wpwa');?></option><option <?php echo __('Developer','wpwa');?></option><option <?php echo __('Member','wpwa');?></option></select></li><li><label class='wpwa_frm_label' for=''>&nbsp;</label><input type='submit' value='<?php echo__('Register','wpwa'); ?>' /></li></ul></form> As you can see, the form action is set to a custom route called user/register to be handled through the front controller. Also, we have added an extra field called user type to choose the preferred user type on registration. You might have noticed that we used wpwa as the prefix for form element names, element IDs, as well as CSS classes. Even though it's not a must to use a prefix, it can be highly effective when working with multiple third-party plugins. A unique plugin-specific prefix avoids or limits conflicts with other plugins and themes. 
We will get a screen similar to the following one, once we access the /user/register link in the browser: Once the form is submitted, we have to create the user based on the application requirements. Planning the registration process In this application, we have opted to build a complex registration process in order to understand the typical requirements of web applications. So, it's better to plan it upfront before moving into the implementation. Let's build a list of requirements for registration: The user should be able to register as any of the given user roles The activation code needs to be generated and sent to the user The default notification on successful registration needs to be customized to include the activation link Users should activate their account by clicking the link So, let's begin the task of registering users by displaying the registration form as given in the following code: public function register_user() {if ( !is_user_logged_in() ) {include dirname(__FILE__) . '/templates/registertemplate.php';exit;}} Once user requests /user/register, our controller will call the register_user function using the do_action call. In the initial request, we need to check whether a user is already logged in using the is_user_logged_in function. If not, we can directly include the registration template located inside the templates folder to display the registration form. WordPress templates can be included using the get_template_part function. However, it doesn't work like a typical template library, as we cannot pass data to the template. In this technique, we are including the template directly inside the function. Therefore, we have access to the data inside this function. Handling registration form submission Once the user fills the data and clicks the submit button, we have to execute quite a few tasks in order to register a user in WordPress database. Let's figure out the main tasks for registering a user: Validating form data Registering the user details Creating and saving activation code Sending e-mail notifications with an activate link In the registration form, we specified the action as /user/register, and hence the same register_user function will be used to handle form submission. Validating user data is one of the main tasks in form submission handling. So, let's take a look at the register_user function with the updated code: public function register_user() {if ( $_POST ) {$errors = array();$user_login = ( isset ( $_POST['wpwa_user'] ) ?$_POST['wpwa_user'] : '' );$user_email = ( isset ( $_POST['wpwa_email'] ) ?$_POST['wpwa_email'] : '' );$user_type = ( isset ( $_POST['wpwa_user_type'] ) ?$_POST['wpwa_user_type'] : '' );// Validating user dataif ( empty( $user_login ) )array_push($errors, __('Please enter a username.','wpwa') );if ( empty( $user_email ) )array_push( $errors, __('Please enter e-mail.','wpwa') );if ( empty( $user_type ) )array_push( $errors, __('Please enter user type.','wpwa') );}// Including the template} The following steps are to be performed: First, we will check whether the request is made as POST. Then, we get the form data from the POST array. Finally, we will check the passed values for empty conditions and push the error messages to the $errors variable created at the beginning of this function. 
Now, we can move into more advanced validations inside the register_user function, as shown in the following code: $sanitized_user_login = sanitize_user( $user_login );if ( !empty($user_email) && !is_email( $user_email ) )array_push( $errors, __('Please enter valid email.','wpwa'));elseif ( email_exists( $user_email ) )array_push( $errors, __('User with this email alreadyregistered.','wpwa'));if ( empty( $sanitized_user_login ) || !validate_username($user_login ) )array_push( $errors, __('Invalid username.','wpwa') );elseif ( username_exists( $sanitized_user_login ) )array_push( $errors, __('Username already exists.','wpwa') ); The steps to perform are as follows: First, we will use the existing sanitize_user function and remove unsafe characters from the username. Then, we will make validations on the e-mail to check whether it's valid and its existence status in the system. Both the email_exists and username_exists functions checks for the existence of an e-mail and username from the database. Once all the validations are completed, the errors array will be either empty or filled with error messages. In this scenario, we choose to go with the most essential validations for the registration form. You can add more advanced validation in your implementations in order to minimize potential security threats. In case we get validation errors in the form, we can directly print the contents of the error array on top of the form as it's visible to the registration template. Here is a preview of our registration screen with generated error messages: Also, it's important to repopulate the form values once errors are generated. We are using the same function for loading the registration form and handling form submission. Therefore, we can directly access the POST variables inside the template to echo the values, as shown in the updated registration form: <form id='registration-form' method='post' action='<?php echoget_site_url() . '/user/register'; ?>'><ul><li><label class='wpwa_frm_label'><?php echo__('Username','wpwa'); ?></label><input class='wpwa_frm_field' type='text'id='wpwa_user' name='wpwa_user' value='<?php echo isset($user_login ) ? $user_login : ''; ?>' /></li><li><label class='wpwa_frm_label'><?php echo __('Email','wpwa'); ?></label><input class='wpwa_frm_field' type='text'id='wpwa_email' name='wpwa_email' value='<?php echo isset($user_email ) ? $user_email : ''; ?>' /></li><li><label class='wpwa_frm_label'><?php echo __('User"Type','wpwa'); ?></label><select class='wpwa_frm_field' name='wpwa_user_type'><option <?php echo (isset( $user_type ) &&$user_type == 'follower') ? 'selected' : ''; ?> value='follower'><?phpecho __('Follower','wpwa'); ?></option><option <?php echo (isset( $user_type ) &&$user_type == 'developer') ? 'selected' : ''; ?>value='developer'><?php echo __('Developer','wpwa'); ?></option><option <?php echo (isset( $user_type ) && $user_type =='member') ? 
'selected' : ''; ?> value='member'><?phpecho __('Member','wpwa'); ?></option></select></li><li><label class='wpwa_frm_label' for=''>&nbsp;</label><input type='submit' value='<?php echo__('Register','wpwa'); ?>' /></li></ul></form> Exploring the registration success path Now, let's look at the success path, where we don't have any errors by looking at the remaining sections of the register_user function: if ( empty( $errors ) ) {$user_pass = wp_generate_password();$user_id = wp_insert_user( array('user_login' =>$sanitized_user_login,'user_email' => $user_email,'role' => $user_type,'user_pass' => $user_pass));if ( !$user_id ) {array_push( $errors, __('Registration failed.','wpwa') );} else {$activation_code = $this->random_string();update_user_meta( $user_id, 'wpwa_activation_code',$activation_code );update_user_meta( $user_id, 'wpwa_activation_status', 'inactive');wp_new_user_notification( $user_id, $user_pass, $activation_code);$success_message = __('Registration completed successfully.Please check your email for activation link.','wpwa');}if ( !is_user_logged_in() ) {include dirname(__FILE__) . '/templates/login-template.php';exit;}} We can generate the default password using the wp_generate_password function. Then, we can use the wp_insert_user function with respective parameters generated from the form to save the user in the database. The wp_insert_user function will be used to update the current user or add new users to the application. Make sure you are not logged in while executing this function; otherwise, your admin will suddenly change into another user type after using this function. If the system fails to save the user, we can create a registration fail message and assign it to the $errors variable as we did earlier. Once the registration is successful, we will generate a random string as the activation code. You can use any function here to generate a random string. Then, we update the user with activation code and set the activation status as inactive for the moment. Finally, we will use the wp_new_user_notification function to send an e-mail containing the registration details. By default, this function takes the user ID and password and sends the login details. In this scenario, we have a problem as we need to send an activation link with the e-mail. This is a pluggable function and hence we can create our own implementation of this function to override the default behavior. Since this is a built-in WordPress function, we cannot declare it inside our plugin class. So, we will implement it as a standalone function inside our main plugin file. The full source code for this function will not be included here as it is quite extensive. I'll explain the modified code from the original function and you can have a look at the source code for the complete code: $activate_link = site_url() ."/user/activate/?wpwa_activation_code=$activate_code";$message = __('Hi there,') . 'rnrn';$message .= sprintf(__('Welcome to %s! Please activate youraccount using the link:','wpwa'), get_option('blogname')) .'rnrn';$message .= sprintf(__('<a href="%s">%s</a>','wpwa'),$activate_link, $activate_link) . 'rn';$message .= sprintf(__('Username: %s','wpwa'), $user_login) .'rn';$message .= sprintf(__('Password: %s','wpwa'), $plaintext_pass) .'rnrn'; We create a custom activation link using the third parameter passed to this function. Then, we modify the existing message to include the activation link. That's about all we need to change from the original function. 
Finally, we set the success message to be passed into the login screen. Now, let's move back to the register_user function. Once the notification is sent, the registration process is completed and the user will be redirected to the login screen. Once the user has the e-mail in their inbox, they can use the activation link to activate the account. Automatically log in the user after registration In general, most web applications uses e-mail confirmations before allowing users to log in to the system. However, there can be certain scenarios where we need to automatically authenticate the user into the application. A social network sign in is a great example for such a scenario. When using social network logins, the system checks whether the user is already registered. If not, the application automatically registers the user and authenticates them. We can easily modify our code to implement an automatic login after registration. Consider the following code: if ( !is_user_logged_in() ) {wp_set_auth_cookie($user_id, false, is_ssl());include dirname(__FILE__) . '/templates/login-template.php';exit;} The registration code is updated to use the wp_set_auth_cookie function. Once it's used, the user authentication cookie will be created and hence the user will be considered as automatically signed in. Then, we will redirect to the login page as usual. Since the user is already logged in using the authentication cookie, they will be redirected back to the home page with access to the backend. This is an easy way of automatically authenticating users into WordPress. Activating system users Once the user clicks on the activate link, redirection will be made to the /user/activate URL of the application. So, we need to modify our controller with a new case for activation, as shown in the following code: case 'activate':do_action( 'wpwa_activate_user' ); As usual, the definition of add_action goes in the constructor, as shown in the following code: add_action( 'wpwa_activate_user', array( $this,'activate_user') ); Next, we can have a look at the actual implementation of the activate_user function: public function activate_user() {$activation_code = isset( $_GET['wpwa_activation_code'] ) ?$_GET['wpwa_activation_code'] : '';$message = '';// Get activation record for the user$user_query = new WP_User_Query(array('meta_key' => ' wpwa_activation_code','meta_value' => $activation_code));$users = $user_query->get_results();// Check and update activation statusif ( !empty($users) ) {$user_id = $users[0]->ID;update_user_meta( $user_id, ' wpwa_activation_status','active' );$message = __('Account activated successfully.','wpwa');} else {$message = __('Invalid Activation Code','wpwa');}include dirname(__FILE__) . '/templates/info-template.php';exit;} We will get the activation code from the link and query the database for finding a matching entry. If no records are found, we set the message as activation failed or else, we can update the activation status of the matching user to activate the account. Upon activation, the user will be given a message using the info-template.php template, which consists of a very basic template like the following one: <?php get_header(); ?><div id='wpwa_info_message'><?php echo $message; ?></div><?php get_footer(); ?> Once the user visits the activation page on the /user/activation URL, information will be given to the user, as illustrated in the following screen: We successfully created and activated a new user. 
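At this point it can be handy to see which accounts are still waiting on their activation e-mails. The following is a small sketch, not taken from the book, that reuses the wpwa_activation_status meta field to list pending users; the method name and the decision to return login names only are assumptions.

public function get_pending_users() {
    // Find all users whose activation status is still 'inactive'
    $user_query = new WP_User_Query( array(
        'meta_key'   => 'wpwa_activation_status',
        'meta_value' => 'inactive'
    ) );

    $pending = array();
    foreach ( $user_query->get_results() as $user ) {
        $pending[] = $user->user_login;
    }

    return $pending;
}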
The final task of this process is to authenticate and log the user into the system. Let's see how we can create the login functionality. Creating a login form in the frontend The frontend login can be found in many WordPress websites, including small blogs. Usually, we place the login form in the sidebar of the website. In web applications, user interfaces are complex and different, compared to normal websites. Hence, we will implement a full page login screen as we did with registration. First, we need to update our controller with another case for login, as shown in the following code: switch ( $control_action ) {// Other casescase 'login':do_action( 'wpwa_login_user' );break;} This action will be executed once the user enters /user/login in the browser URL to display the login form. The design form for login will be located in the templates directory as a separate template called login-template.php. Here is the implementation of the login form design with the necessary error messages: <?php get_header(); ?><div id=' wpwa_custom_panel'><?phpif (isset($errors) && count($errors) > 0) {foreach ($errors as $error) {echo '<p class="wpwa_frm_error">' .$error. '</p>';}}if( isset( $success_message ) && $success_message != ""){echo '<p class="wpwa_frm_success">' .$success_message.'</p>';}?><form method='post' action='<?php echo site_url();?>/user/login' id='wpwa_login_form' name='wpwa_login_form'><ul><li><label class='wpwa_frm_label' for='username'><?phpecho __('Username','wpwa'); ?></label><input class='wpwa_frm_field' type='text'name='wpwa_username' value='<?php echo isset( $username ) ?$username : ''; ?>' /></li><li><label class='wpwa_frm_label' for='password'><?phpecho __('Password','wpwa'); ?></label><input class='wpwa_frm_field' type='password'name='wpwa_password' value="" /></li><li><label class='wpwa_frm_label' >&nbsp;</label><input type='submit' name='submit' value='<?php echo__('Login','wpwa'); ?>' /></li></ul></form></div><?php get_footer(); ?> Similar to the registration template, we have a header, error messages, the HTML form, and the footer in this template. We have to point the action of this form to /user/login. The remaining code is self-explanatory and hence I am not going to make detailed explanations. You can take a look at the preview of our login screen in the following screenshot: Next, we need to implement the form submission handler for the login functionality. Before this, we need to update our plugin constructor with the following code to define another custom action for login: add_action( 'wpwa_login_user', array( $this, 'login_user' ) ); Once the user requests /user/login from the browser, the controller will execute the do_action( 'wpwa_login_user' ) function to load the login form in the frontend. Displaying the login form We will use the same function to handle both template inclusion and form submission for login, as we did earlier with registration. So, let's look at the initial code of the login_user function for including the template: public function login_user() {if ( !is_user_logged_in() ) {include dirname(__FILE__) . '/templates/login-template.php';} else {wp_redirect(home_url());}exit;} First, we need to check whether the user has already logged in to the system. Based on the result, we will redirect the user to the login template or home page for the moment. Once the whole system is implemented, we will be redirecting the logged in users to their own admin area. Now, we can take a look at the implementation of the login to finalize our process. 
Let's take a look at the form submission handling part of the login_user function:

if ( $_POST ) {
    $errors = array();
    $username = isset( $_POST['wpwa_username'] ) ? $_POST['wpwa_username'] : '';
    $password = isset( $_POST['wpwa_password'] ) ? $_POST['wpwa_password'] : '';

    if ( empty( $username ) )
        array_push( $errors, __( 'Please enter a username.', 'wpwa' ) );

    if ( empty( $password ) )
        array_push( $errors, __( 'Please enter password.', 'wpwa' ) );

    if ( count( $errors ) > 0 ) {
        include dirname(__FILE__) . '/templates/login-template.php';
        exit;
    }

    $credentials = array();
    $credentials['user_login'] = $username;
    $credentials['user_login'] = sanitize_user( $credentials['user_login'] );
    $credentials['user_password'] = $password;
    $credentials['remember'] = false;

    // Rest of the code
}

As usual, we need to validate the post data and generate the necessary errors to be shown in the frontend. Once validation succeeds, we assign all the form data to an array after sanitizing the values. The username and password are contained in the credentials array with the user_login and user_password keys. The remember key defines whether to remember the password or not. Since we don't have a remember checkbox in our form, it is set to false. Next, we need to execute the WordPress login function in order to log the user into the system, as shown in the following code:

$user = wp_signon( $credentials, false );

if ( is_wp_error( $user ) )
    array_push( $errors, $user->get_error_message() );
else
    wp_redirect( home_url() );

WordPress handles user authentication through the wp_signon function. We pass the credentials generated in the previous code, along with a second parameter of true or false that defines whether to use a secure cookie. We can set it to false for this example. The wp_signon function returns an object of the WP_User or the WP_Error class based on the result. Internally, this function sets an authentication cookie. Users will not be logged in if it is not set. If you are using any other process for authenticating users, you have to set this authentication cookie manually. Once a user is successfully authenticated, a redirection will be made to the home page of the site. Now, we should have the ability to authenticate users from the login form in the frontend.

Checking whether we implemented the process properly

Take a moment to think carefully about our requirements and try to figure out what we have missed. Actually, we didn't check the activation status on login. Therefore, any user will be able to log in to the system without activating their account. Let's fix this issue by intercepting the authentication process with the built-in authenticate filter, as shown in the following code:

public function authenticate_user( $user, $username, $password ) {
    if ( !empty( $username ) && !is_wp_error( $user ) ) {
        $user = get_user_by( 'login', $username );
        if ( !in_array( 'administrator', (array) $user->roles ) ) {
            $active_status = '';
            $active_status = get_user_meta( $user->data->ID, 'wpwa_activation_status', true );
            if ( 'inactive' == $active_status ) {
                $user = new WP_Error( 'denied', __( '<strong>ERROR</strong>: Please activate your account.', 'wpwa' ) );
            }
        }
    }
    return $user;
}

This function will be called during authentication, with the user, username, and password variables passed as default parameters. All the user types of our application need to be activated, except for the administrator accounts. Therefore, we check the roles of the authenticated user to figure out whether they are an admin.
Then, we can check the activation status of the other user types before authenticating. If an authenticated user is in inactive status, we return a WP_Error object and prevent the authentication from succeeding. Last but not least, we have to register the authenticate filter in the controller to make it work, as shown in the following code:

add_filter( 'authenticate', array( $this, 'authenticate_user' ), 30, 3 );

This filter is also executed when the user logs out of the application. Therefore, we need the following validation to prevent any errors in the logout process:

if ( !empty( $username ) && !is_wp_error( $user ) )

Now, we have a simple and useful user registration and login system, ready to be implemented in the frontend of web applications. Make sure to check login- and registration-related plugins from the official repository to gain knowledge of complex requirements in real-world scenarios.

Time to practice

In this article, we implemented simple registration and login functionality from the frontend. Before we have a complete user creation and authentication system, there are plenty of other tasks to be completed. So, I would recommend trying out the following tasks in order to become comfortable with implementing such functionalities for web applications:

Create a frontend functionality for the lost password
Block the default WordPress login page and redirect it to our custom page
Include extra fields in the registration form

Make sure to try out these exercises and validate your answers against the implementations provided on the website for this book.

Summary

In this article, we looked at how we can customize the built-in registration and login process in the frontend to cater to advanced requirements in web application development. By now, you should be capable of creating custom routers for common modules, implementing custom controllers with custom template systems, and customizing the existing user registration and authentication process.

Resources for Article:

Further resources on this subject:

Web Application Testing [Article]
Creating Blog Content in WordPress [Article]
WordPress 3: Designing your Blog [Article]

Introduction to the Nmap Scripting Engine

Packt
02 Jun 2015
5 min read
In this article by David Shaw, author of the book Nmap Essentials, we will see that although being able to conduct port scans is an integral part of using the Nmap suite of tools, the developers of Nmap created a very powerful engine that's built into the tool: the Nmap Scripting Engine (NSE). This article introduces the NSE, and covers the topics needed to use reliably-written scripts from the Nmap script repository in order to conduct reconnaissance scans that include much more than just which ports are open and which services are listening. In this article, we will cover:

The history of the NSE
How the NSE works

(For more resources related to this topic, see here.)

The history of the NSE

By the mid-2000s, Nmap had established itself as the clear leader in port scanning tools—and security tools in general—whether open source or not. Although it's a constant battle to continually innovate and optimize, Nmap can only be considered an extremely successful project. Due to its popularity, and the fact that it's an open source project with a relatively high profile, Nmap was selected to participate in Google Summer of Code several times. Google Summer of Code is a software development internship/association project, during which students are selected and put on open source software teams to build new features into existing projects.

In May 2006—when the currently released version of Nmap was only 4.0—Nmap was selected for its second Summer of Code season. The previous year, in 2005, several improvements had been made through the students' coding for the Nmap project: the students had written a contemporary implementation of Netcat (called Ncat), upgraded the OS detection for Nmap to its second (and much better) generation, and created a small, simplified GUI that would later become Zenmap.

For this second run, after an extremely successful first summer, the participating developers were even more ambitious. Since Nmap clearly had an excellent set of features, why not make those features extendable by the greater community? New vulnerabilities and scanning techniques were being pioneered on a very frequent basis, and full Nmap releases couldn't keep up with the things that security professionals needed to assess. Every time a new vulnerability came out, security professionals (and malicious hackers!) would scan for vulnerable services with Nmap, but could only test whether software versions were vulnerable by using manual analysis: clearly, not a very efficient use of time.

Because of the new resources granted by the Google Summer of Code developers, an arbitrary scripting framework was created that allows users to trigger additional checks based on certain open ports or services. This means, for example, that if you're looking for a specific file on all web servers—robots.txt, for example—you can easily create a script that checks for it on all HTTP and HTTPS services. The NSE (and the inclusion of Nmap scripts in default installations of Nmap) truly revolutionized the versatility of the tool suite. After months of hard work, the NSE was released in December 2006, packaged with Nmap release 4.21ALPHA1. The scripts that come packaged with the NSE have continued to grow in complexity and usability, and are excellent resources that turn Nmap into a fully featured security tool suite.

The inner working of the NSE

The NSE is a framework that runs code written in the programming language Lua, with specific flags that the engine can parse.
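To make the robots.txt example above concrete, here is a rough sketch of what such a script could look like. This is a hypothetical, minimal example written for illustration only; it is not one of the scripts shipped with Nmap, and the script name hello-robots.nse is made up. It uses the real shortport and http NSE libraries to show the two parts every script needs: a rule that decides when to run, and an action that does the work.

local shortport = require "shortport"
local http = require "http"

description = [[
Minimal illustrative script: checks whether a web server exposes /robots.txt.
]]

author = "Example author"
categories = { "discovery", "safe" }

-- The rule part: only run against ports that look like HTTP/HTTPS services.
portrule = shortport.http

-- The action part: executed for every host/port the rule matched.
action = function(host, port)
  local response = http.get(host, port, "/robots.txt")
  if response and response.status == 200 then
    return "robots.txt found:\n" .. response.body
  end
  -- Returning nil means the script produces no output for this port.
  return nil
end

Saved as hello-robots.nse, a script like this could be invoked with something like nmap --script ./hello-robots.nse -p 80,443 <target>; the engine evaluates portrule for each scanned port and runs action only where the rule matches.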
Lua is a lightweight, fast, interpreted programming language, best known for scripting user interfaces in computer games such as World of Warcraft, and its syntax is similar to that of other contemporary interpreted languages. If you've ever seen code written in Python or Ruby, Lua won't seem too alien to you. A good real-world example is the Nmap script written by Patrik Karlsson that identifies information about Bitcoins. Don't worry if you don't understand such scripts yet, but you can see that the code used to build a relatively complex Nmap script looks very simple. This is the whole point of the NSE! Where security engineers and system administrators used to have to export Nmap results, find the information they were looking for, and then use third-party tools to assist them, they are now able to either find a script that serves their purposes or write a simple one themselves. Many penetration testers can leverage the Nmap scripting language to even weaponize the tool for security exploits.

Summary

This article introduced the NSE, which can be one of the most useful, versatile, and engaging features of the Nmap tool suite. We should now be able to launch scans that do more than just detect open ports and service versions—Nmap scripts can actually interact with the listening services, and in some cases can even exploit vulnerabilities! In this article, we covered the history of the NSE and how the NSE works.

Resources for Article:

Further resources on this subject:

Target Exploitation [article]
Enabling and configuring SNMP on Windows [article]
Gathering all rejects prior to killing a job [article]

Filtering a sequence

Packt
02 Jun 2015
5 min read
In this article by Ivan Morgillo, the author of RxJava Essentials, we will approach Observable filtering with RxJava's filter(). We will manipulate a list of installed apps in order to show only a subset of the list, according to our criteria. (For more resources related to this topic, see here.)

Filtering a sequence with RxJava

RxJava lets us use filter() to keep the values that we don't want out of the sequence we are observing. In this example, we start from a list, but we filter it, passing to the filter() function the proper predicate so that only the values we want are included. We are using loadList() to create an Observable sequence, filter it, and populate our adapter:

private void loadList(List<AppInfo> apps) {
    mRecyclerView.setVisibility(View.VISIBLE);

    Observable.from(apps)
        .filter((appInfo) -> appInfo.getName().startsWith("C"))
        .subscribe(new Observer<AppInfo>() {
            @Override
            public void onCompleted() {
                mSwipeRefreshLayout.setRefreshing(false);
            }

            @Override
            public void onError(Throwable e) {
                Toast.makeText(getActivity(), "Something went south!", Toast.LENGTH_SHORT).show();
                mSwipeRefreshLayout.setRefreshing(false);
            }

            @Override
            public void onNext(AppInfo appInfo) {
                mAddedApps.add(appInfo);
                mAdapter.addApplication(mAddedApps.size() - 1, appInfo);
            }
        });
}

We have added the following line to the loadList() function:

.filter((appInfo) -> appInfo.getName().startsWith("C"))

After the creation of the Observable, we are filtering out every emitted element whose name does not start with a C. Let's have it in Java 7 syntax too, to clarify the types here:

.filter(new Func1<AppInfo, Boolean>() {
    @Override
    public Boolean call(AppInfo appInfo) {
        return appInfo.getName().startsWith("C");
    }
})

We are passing a new Func1 object to filter(), that is, a function having just one parameter. The Func1 object has an AppInfo object as its parameter type and it returns a Boolean object. The value is emitted and received by all the Observers only when the function returns true. As you can imagine, filter() is critically useful for creating exactly the sequence we need from the Observable sequence we get. We don't need to know the source of the Observable sequence or why it's emitting tons of different elements. We just want a useful subset of those elements to create a new sequence we can use in our app. This mindset reinforces separation and abstraction in our day-to-day coding.

One of the most common uses of filter() is filtering out null objects:

.filter(new Func1<AppInfo, Boolean>() {
    @Override
    public Boolean call(AppInfo appInfo) {
        return appInfo != null;
    }
})

This seems trivial, and there is a lot of boilerplate code for something so trivial, but it saves us from checking for null values in the onNext() call, letting us focus on the actual app logic. As a result of our filtering, the next figure shows the installed apps list, filtered by names starting with C:

Summary

In this article, we introduced the RxJava filter() function and used it in a real-world example in an Android app.
RxJava offers many more functions that allow you to filter and manipulate Observable sequences. A comprehensive list of methods, scenarios, and examples is available in RxJava Essentials, which will take you on a step-by-step journey from the basics of the Observer pattern to composing Observables and querying REST APIs using RxJava.

Resources for Article:

Further resources on this subject:

Android Native Application API [article]
Android Virtual Device Manager [article]
Putting It All Together – Community Radio [article]