Rundown Example

Packt
06 Aug 2015
10 min read
In this article by Miguel Oliveira, author of the book Microsoft System Center Orchestrator 2012 R2 Essentials, we will get started with the creation process and learn how to address and connect all the pieces together in order to successfully create a Runbook. (For more resources related to this topic, see here.)

Runbook for Active Directory User Account Provisioning

For this Runbook, we've been challenged by our HR department to come up with a solution that lets them create new user accounts for recently joined employees. Specifically, HR should be able to:

- Provide the first and last name
- Provide the department name
- Get that user added to the proper department group, and get all the information of the user
- Send the newly created account to the IT department to provide a machine, a phone, and an e-mail address

With these requirements in mind, let's see which activities we need in our Runbook. I'll lay these out in steps for this example, so it's easy to follow:

Data input: We definitely need an activity that allows HR to feed information into the Runbook. For this, we can use the Initialize Data activity (Runbook Control category); we could also work with a monitored file and read the data from a line, or even from a SharePoint list, but to keep it simple for now, let's use Initialize Data.

Data processing: Here, the idea is to take the Department given by HR and process it to retrieve the group (the Get Group activity from the Active Directory category) and include our user (the Add User To Group activity from the Active Directory category) in the group we've retrieved; in between, we'll need to create the user account (the Create User activity from the Active Directory category) and generate a password (the Generate Random Text activity from the Utilities category).

Data output: At the very end of all this, we send an e-mail (the Send Email activity from the Email category) back to HR with the account information and the status of its creation, and also inform our IT department (for security reasons) about the account that has been created.

We're also going to watch closely for errors, with a few activities that will show us whether an error occurs. Let's look at the structure of this Runbook (which is almost how it's going to look in the end), and we'll detail the activities and the options within them step by step from there. Here's the Runbook with the activities properly linked so that the data bus can flow and transport the published data from the beginning to the end:

As described in the steps, we start with an Initialize Data activity in which we're going to request some inputs from the person executing the Runbook. To create a user, we'll need the First Name and Last Name, and also the Department. For that, we'll fill in the following information in the Fetch User Details activity seen in the previous screenshot. To avoid errors, the HR department should have a proper list of departments that we know will translate into a proper group in the upcoming activities.

After the information is filled in, the processing begins, and with it our automation process, which will find the group for that department, create the user account, set a password, require a password change on first login, add the user to the group, and enable the account.
For that, we'll start with the Get Group activity, in which we'll fill in the following: set up the proper configuration in the Get Group Properties window for the Active Directory domain in which you want this to execute, and in the Filters options, filter the group's Sam Account Name by the Department filled in by the HR department.

Now we'll set up another prerequisite for creating the account: the password. For this, we'll use the Generate Random Text activity and set it with the following parameters. These values should be set to accommodate your existing security policy and the minimum password requirements for your domain.

These previous activities are all we need to obtain the values necessary to proceed with the account creation, done using the Create User activity. These are the parameters to fill in. All of these parameters are actually retrieved from the Published Data of the previous activities. As the list is long, we'll detail it here for better understanding. Everything between {} is Published Data:

- Common Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
- Department: {Display Name from "Get Group"}
- Display Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
- First Name: {First Name from "Fetch User Details"}
- Last Name: {Last Name from "Fetch User Details"}
- Password: {Random text from "Generate Random Text"}
- User Must Change Password: True
- SAM Account Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}
- User Principal Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.local
- Email: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.com
- Manager: {Managed By from "Get Group"}

As mentioned previously, most of the data comes from the Published Data, and we've created subscriptions in all these fields to retrieve it. The only fields whose data is not pure Published Data are User Must Change Password, User Principal Name (UPN), and Email. User Must Change Password is a Boolean field that accepts only Yes or No, and in the UPN and Email fields we've appended the domain information (@test.local and @test.com) to the Published Data.

Depending on the Create User activity's output, a different activity will be triggered. For now, let's assume that the activity returns success on execution; this makes the Runbook follow the smart link that leads to the Get User activity. The Get User activity retrieves all the information about the newly created user account, which will be useful for the next activities down the line. In order to retrieve the proper information, we'll need to configure the following in the Filters area within the activity: add a filter on Sam Account Name, with Relation set to Equals and Value subscribed to the Sam Account Name published by the Create User activity.

From here, we'll link to the Add User To Group activity (renamed here to Add User to Department), within which we specify the group and the user so that the activity can add the user to the group. It should look exactly like the screenshot that follows:

We'll once again assume that everything runs as expected and prepare our next activity, which is to enable the user account; for this one, we'll use the Enable User activity.
The configuration of the activity can be seen in the next screenshot. Once again, we get the information out of the Published Data and feed it into the activity.

After this activity completes, we log the execution and information output to the platform with the Send Platform Event activity, so we can see any necessary information from the execution. Here is a sample of the configuration for the message output. To get the Details text box expanded this way, right-click on it and select Expand… from the menu; you can then format it and include the data that is most important for you to see.

Then we'll send an e-mail to the HR team with the account creation details so they can communicate them to the newly arrived employee, and another e-mail to the IT department with only the account name and the department (plus the group name), for security reasons. Here are samples of these two activities, starting with the HR e-mail.

Let's go point by point through this configuration sample. In the Details section, we've set the following:

- Subject: Account {Sam Account Name from "Get User"} Created
- Recipients: to: hr.dept@test.com
- Message: The message description is given in the following screenshot.

The Message option consists of choosing the Priority of the message (high, normal, or low) and setting the necessary SMTP authentication parameters (account, password, and domain) so that you can send the message through your e-mail service. If you have an application e-mail relay service, you can leave the SMTP authentication unconfigured. In the Connect option, you'll find the place to configure the e-mail address that you want the user to see, and the SMTP connection (server, port, and SSL) through which you'll send your messages.

Our Send Email IT activity will be more or less the same, with the exception of the destination and the message itself. It should look more or less like the following screenshot:

By now you've got the idea and you're pumped to create new Runbooks, but we still have to add some error control to some of these tasks; although they're chained, if one fails, everything fails. For this Runbook, we'll add error control to two tasks which, if we look closely, are more or less the only two that can fail. One is the Create User activity, which can fail because the user account already exists or because of a privilege issue during its creation. The other is Add User to Department, which might fail to add the user to the group for some reason. For this, we'll create two notification activities, Send Event and Log Message, which we'll rename to User Account Error and Group Error respectively.

If we look into the User Account Error activity, we'll set something more or less like the following screenshot. A quick explanation of the settings is as follows:

- Computer: The computer to whose Windows Event Log we're going to write the event. In this case, we'll concentrate on our Management Server, but you might have a dedicated logging server for this.
- Message: The message that gets logged in the Windows Event Viewer. Here, we can subscribe to the error data coming out of the last activity executed.
- Severity: Usually Error. You can set Information or Warning if you are deploying these activities to keep track of each given step.

For our Group Error Properties, the philosophy is the same.
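Before wiring up the smart links, it can help to see the whole provisioning flow in plain script form. The following is a rough PowerShell sketch of roughly what these activities do behind the scenes; it is illustrative only, not something Orchestrator generates. It assumes the ActiveDirectory module and the test.local/test.com domains used above, and the sample values stand in for the HR input:

Import-Module ActiveDirectory

# Values that HR would supply through Initialize Data (illustrative)
$first = 'John'; $last = 'Doe'; $dept = 'Finance'
$sam   = "$first.$last"

# Get Group: resolve the department group by its SAM Account Name
$group = Get-ADGroup -Identity $dept -Properties ManagedBy

# Generate Random Text: produce a password; adjust to your password policy
Add-Type -AssemblyName 'System.Web'
$password = [System.Web.Security.Membership]::GeneratePassword(12, 3)

# Create User: mirrors the Create User activity's parameters
New-ADUser -Name "$first $last" -GivenName $first -Surname $last `
    -SamAccountName $sam -UserPrincipalName "$sam@test.local" `
    -EmailAddress "$sam@test.com" -Department $group.Name `
    -AccountPassword (ConvertTo-SecureString $password -AsPlainText -Force) `
    -ChangePasswordAtLogon $true

# Add User To Group and Enable User
Add-ADGroupMember -Identity $group -Members $sam
Enable-ADAccount -Identity $sam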
Now that we are all set, we'll need to work on our smart links so that they direct the Runbook execution flow to the appropriate next activity depending on the previous activity's output (success or error). In the end, your Runbook should look a little more like this:

That's it for the Runbook for Active Directory User Account Provisioning. We'll now move a little faster through the other Runbooks, as you'll have a much clearer understanding after this first sample.

Summary

We've seen one of the Runbook samples. These Runbooks should serve as a base for real-world scenarios in your environment, help with the creative process, and give a better understanding of the configurations necessary for each activity in order to proceed successfully.

Resources for Article:

Further resources on this subject:
- Unpacking System Center 2012 Orchestrator [article]
- Working with VMware Infrastructure [article]
- Unboxing Docker [article]

Deploy Toshi Bitcoin Node with Docker on AWS

Alex Leishman
05 Aug 2015
8 min read
Toshi is an implementation of the Bitcoin protocol, written in Ruby and built by Coinbase in response to their fast growth and need to build Bitcoin infrastructure at scale. This post will cover:

- How to deploy Toshi to an Amazon AWS instance with Redis and PostgreSQL using Docker
- How to query the data to gain insights into the blockchain

To get the most out of this post you will need some basic familiarity with Linux, SQL, and AWS.

Most Bitcoin nodes run "Bitcoin Core", which is written in C++ and serves as the de-facto standard implementation of the Bitcoin protocol. Its advantages are that it is fast for light-to-medium use and efficiently stores the transaction history of the network (the blockchain) in LevelDB, a key-value datastore developed at Google. It has wallet management features and an easy-to-use JSON RPC interface for communicating with other applications.

However, Bitcoin Core has some shortcomings that make it difficult to use for wallet/address management in at-scale applications. Its database, although efficient, makes it impossible or very difficult to perform certain queries on the blockchain. For example, if you wanted to get the balance of an arbitrary bitcoin address, you would have to write a script to parse the blockchain separately to find the answer. Additionally, Bitcoin Core starts to slow down significantly when it has to manage and monitor large numbers of addresses (> ~10^7). For a web app with hundreds of thousands of users, each regularly generating new addresses, Bitcoin Core is not ideal.

Toshi attempts to address the flexibility and scalability issues facing Bitcoin Core by parsing and storing the entire blockchain in an easily queried PostgreSQL database. Here is a list of tables in Toshi's DB: schema.txt

We will see the direct benefit of this structure when we start querying our data to gain insights from the blockchain. Since Toshi is written in Ruby, it has the added advantage of being developer friendly and easy to customize. The main downside of Toshi is that it needs ~10x more storage than Bitcoin Core, as storing and indexing the blockchain in a well-indexed relational DB requires significantly more disk space.

First we will create an instance on Amazon AWS. You will need at least 300GB of storage for the Postgres database. Be sure to auto-assign a public IP and allow incoming TCP connections on port 5000, as this is how we will access the Toshi web interface. Once you get your instance up and running, SSH into it using the commands given by Amazon.

First we will set up a user for Toshi:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi
Adding user `toshi' ...
Adding new group `toshi' (1001) ...
Adding new user `toshi' (1001) with group `toshi' ...
Creating home directory `/home/toshi' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for toshi
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

Then we will add the new user to the sudoers group and switch to that user:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi sudo
Adding user `toshi' to group `sudo' ...
Adding user toshi to group sudo
Done.
ubuntu@ip-172-31-62-77:~$ su - toshi
toshi@ip-172-31-62-77:~$

Next, we will install Docker and all of its dependencies through an automated script available on the Docker website. This will provision our instance with the necessary software packages.
toshi@ip-172-31-62-77:~$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir .....

Then we will clone the Toshi repo from GitHub and move into the new directory:

toshi@ip-172-31-62-77:~$ git clone https://github.com/coinbase/toshi.git
toshi@ip-172-31-62-77:~$ cd toshi/

Next, build the coinbase/toshi Docker image from the Dockerfile located in the /toshi directory. Don't forget the dot at the end of the command!

toshi@ip-172-31-62-77:~/toshi$ sudo docker build -t=coinbase/toshi .
Sending build context to Docker daemon 13.03 MB
Sending build context to Docker daemon
… … …
Removing intermediate container c15dd6c961c2
Step 3 : ADD Gemfile /toshi/Gemfile
INFO[0120] Error getting container dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465 from driver devicemapper: Error mounting '/dev/mapper/docker-202:1-524950-dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465' on '/var/lib/docker/devicemapper/mnt/dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465': no such file or directory

Note: you might see 'Error getting container' when this runs. If so, don't worry about it at this point.

Next, we will build and run our Redis and Postgres containers:

toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_db -d postgres
toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_redis -d redis

This will build and run Docker containers named toshi_db and toshi_redis, based on the standard postgres and redis images pulled from Docker Hub. The '-d' flag indicates that the containers will run in the background (daemonized). If you see an 'Error response from daemon: Cannot start container' error while running either of these commands, simply run 'sudo docker start toshi_redis' (or 'sudo docker start toshi_db') again.

To ensure that our containers are running properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4de43ccc8e80 redis:latest "/entrypoint.sh redi 7 minutes ago Up 3 minutes 6379/tcp toshi_redis
6de0418d4e91 postgres:latest "/docker-entrypoint. 8 minutes ago Up 2 minutes 5432/tcp toshi_db

You should see both containers running, along with their port numbers. When we run our Toshi container, we need to tell it where to find the Postgres and Redis containers, so we must find the toshi_db and toshi_redis IP addresses. Remember, we have not run a Toshi container yet; we only built the image from the Dockerfile. You can think of a container as a running version of an image. To learn more about Docker, see the docs.

toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_db | grep IPAddress
"IPAddress": "172.17.0.3",
toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_redis | grep IPAddress
"IPAddress": "172.17.0.2",

Now we have everything we need to get our Toshi container up and running. To do this, run:

sudo docker run --name toshi_main -d -p 5000:5000 -e REDIS_URL=redis://172.17.0.2:6379 -e DATABASE_URL=postgres://postgres:@172.17.0.3:5432 -e TOSHI_ENV=production coinbase/toshi sh -c 'bundle exec rake db:create db:migrate; foreman start'

Be sure to replace the IP addresses in the above command with your own. This creates a container named 'toshi_main', runs it as a daemon (-d), and sets three environment variables in the container (-e) which are required for Toshi to run. It also maps port 5000 inside the container to port 5000 of our host (-p).
Lastly, it runs a shell script in the container (sh -c) which creates and migrates the database, then starts the Toshi web server. To see that it has started properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
017c14cbf432 coinbase/toshi:latest "sh -c 'bundle exec 6 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp toshi_main
4de43ccc8e80 redis:latest "/entrypoint.sh redi 43 minutes ago Up 38 minutes 6379/tcp toshi_redis
6de0418d4e91 postgres:latest "/docker-entrypoint. 43 minutes ago Up 38 minutes 5432/tcp toshi_db

If you have set your AWS security settings properly, you should be able to see the syncing progress of Toshi in your browser. Find your instance's public IP address from the AWS console and then point your browser there using port 5000, for example: 'http://54.174.195.243:5000/'. You can also see the logs of our Toshi container by running:

toshi@ip-172-31-62-77:~$ sudo docker logs -f toshi_main

That's it! We're all up and running. Be prepared to wait a long time for the blockchain to finish syncing; this could take more than a week or two, but you can start playing around with the data right away through the GUI to get a sense of the power you now have (see the sample queries at the end of this post).

About the Author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com where he is helping to build the future of money.
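As promised, here are a couple of starter queries to try against Toshi's PostgreSQL database once some blocks have synced. These are sketches only: the first assumes the blocks table records a height column, and the table and column names in the second are hypothetical, so check the schema file referenced above for the actual layout in your Toshi version.

-- How far has the initial sync progressed?
SELECT MAX(height) FROM blocks;

-- Hypothetical example: the balance of a single address, assuming an
-- unspent_outputs table with amount and address columns.
SELECT SUM(amount) AS balance
FROM unspent_outputs
WHERE address = '1dice8EMZmqKvrGE4Qc9bUFf9PX3xaYDp';

Amounts in Bitcoin databases are typically stored in satoshis, so divide by 100,000,000 to convert the result to BTC.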

Animation features in Unity 5

Packt
05 Aug 2015
16 min read
In this article by Valera Cogut, author of the book Unity 5 for Android Essentials, you will learn about the new Mecanim animation features and the awesome new audio features in Unity 5. (For more resources related to this topic, see here.)

New Mecanim animation features in Unity 5

Unity 5 contains some awesome new possibilities for the Mecanim animation system. Let's look at the shiny new features in Unity 5.

State machine behavior

You can now inherit your classes from StateMachineBehaviour in order to attach them to your Mecanim animation states. This class has the following very important callbacks:

- OnStateEnter
- OnStateUpdate
- OnStateExit
- OnStateMove
- OnStateIK

StateMachineBehaviour scripts behave like MonoBehaviour scripts: just as you can attach a MonoBehaviour to as many objects as you wish, you can attach a StateMachineBehaviour to as many states as you wish. You can use this solution with or without any animation at all (a minimal sketch of such a script appears at the end of this overview).

State machine transition

Unity 5 introduced an awesome new feature for the Mecanim animation system, known as state machine transitions, to construct a higher abstraction level. In addition, entry and exit nodes were created. With these two additional nodes in a StateMachine, you can now branch your start or finish state depending on your particular conditions and requirements. These mixes of transitions are possible: StateMachine | StateMachine, State | StateMachine, State | State. In addition, you can also reorder your layers or parameters. The new UI allows this via a very simple and useful drag-and-drop method.

Asset creation API

Another awesome possibility introduced in Unity 5 is using scripts in the Unity Editor to programmatically create assets, such as layers, controllers, states, state machines, and blend trees. You can work with a high-level API maintained by the Unity engine, or with a low-level API where you manage all your assets manually. You can find out more about both API versions on the Unity documentation pages.

Direct blend tree

Another new feature is the direct BlendTree type, which maps animator parameters directly to the weights of the BlendTree's children.

The possibilities of Unity 5 have also been enhanced with two more useful features for the Mecanim animation system:

- The camera can scale, orbit, and pan
- You can access your parameters at runtime
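To make the first of these features concrete, here is a minimal sketch of a StateMachineBehaviour script as described above; the class name and log messages are illustrative:

using UnityEngine;

// Attach this to a state in the Animator window, just as you would
// attach a MonoBehaviour to a GameObject.
public class AttackStateBehaviour : StateMachineBehaviour
{
    // Called on the first frame the state machine enters this state
    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Entered attack state");
    }

    // Called on every frame between OnStateEnter and OnStateExit
    public override void OnStateUpdate(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        // Per-frame logic for this state goes here
    }

    // Called on the last frame before the state machine leaves this state
    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Left attack state");
    }
}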
Programmatically creating assets by Unity 5 API

The following code snippets are self-explanatory, pretty simple, and straightforward. I list them just as a very useful reminder.

Creating the controller

To create a controller you can use the following code:

var animatorController = UnityEditor.Animations.AnimatorController.CreateAnimatorControllerAtPath("Assets/Your/Folder/Name/state_machine_transitions.controller");

Adding parameters

To add parameters to the controller, you can use this code:

animatorController.AddParameter("Parameter1", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter2", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter3", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);

Adding state machines

To add state machines, you can use the following code:

var sm1 = animatorController.layers[0].stateMachine;
var sm2 = sm1.AddStateMachine("sm2");
var sm3 = sm1.AddStateMachine("sm3");

Adding states

To add states, you can use the code given here:

var s1 = sm2.AddState("s1");
var s2 = sm3.AddState("s2");
var s3 = sm3.AddState("s3");

Adding transitions

To add transitions, you can use the following code:

var exitTransition = s1.AddExitTransition();
exitTransition.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter1");
exitTransition.duration = 0;

var transition1 = sm2.AddAnyStateTransition(s1);
transition1.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
transition1.duration = 0;

var transition2 = sm3.AddEntryTransition(s2);
transition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
sm3.AddEntryTransition(s3);
sm3.defaultState = s2;

var exitTransition2 = s3.AddExitTransition();
exitTransition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
exitTransition2.duration = 0;

var smt = sm1.AddStateMachineTransition(sm2, sm3);
smt.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
sm2.AddStateMachineTransition(sm1, sm3);

Going deeper into new audio features

Let's start with the amazing new Audio Mixer possibilities. You can now do true submixing of audio in Unity 5. In the following figure, you can see a very simple example with the different sound categories required in a game:

In Unity 5, you can mix different sound collections within categories and tune volume control and effects in a single place, saving a lot of time and effort. This awesome new audio feature in Unity 5 allows you to create a fantastic mood and atmosphere for your game. Each Audio Mixer can have a hierarchy of AudioGroups:

The Audio Mixer can not only do a lot of useful things, but also mix different sound groups in one place. Different audio effects are applied sequentially in each AudioGroup.

Now you're getting closer to the amazing, awesome, and shiny new audio features in Unity 5! The OnAudioFilterRead script callback, which makes it possible to process audio samples directly in your scripts, covers ground that was previously handled exclusively by native code. Unity now also supports custom plugins for creating different effects. With these innovations, building your own synthesizer application in Unity 5 has become much easier and more flexible than before.

Mood transitions

As mentioned earlier, the mood of the game can be controlled with the sound mix. This can be achieved by bringing in new stems of music or ambient sounds. Another common way to accomplish it is to transition the state of the mix.
A very effective way of taking the mood where you want it to go is by changing the volumes of sections of the mix and transitioning the effect parameters to different states. Internally, this is the Audio Mixer's snapshot capability. Snapshots capture the state of all the parameters in an Audio Mixer: everything from effect wet levels to AudioGroup volume levels can be captured and interpolated between. You can even create complex blended states between a whole bunch of snapshots in your game, opening up all kinds of possibilities and goals. Imagine setting all of this up without having to write a single line of script code.

Physics and particle system effects in Unity 5

Physics for 2D and 3D in Unity are very similar, because they use the same concepts, such as rigidbodies, joints, and colliders. However, Box2D has more features than Unity's 2D physics engine exposes. It is not a problem to mix 2D and 3D physics engines (built-in, custom, or third-party) in Unity, so Unity offers an easy development path for your innovative games and applications. If you need real-life physics in your project, you should not write your own library, framework, or engine unless you have very specific requirements; instead, try existing physics engines, libraries, or frameworks, which already come with many features.

Let's start our introduction to Unity's built-in physics engine. If you need to put an object under the management of Unity's built-in physics, you just need to attach the Rigidbody component to that object. After that, your object can collide with other entities in its world and gravity will have an effect on it; in other words, the Rigidbody will be simulated physically. In your scripts, you can move any of your Rigidbodies by adding vector forces to them. It is not recommended to move the Transform component of a non-kinematic Rigidbody, because it will not collide correctly with other items; instead, you can apply forces and torque to your Rigidbody.

A Rigidbody can also be used to develop cars, with wheel colliders and some scripts that apply forces. Furthermore, Rigidbodies are not only for vehicles; you can use them for any other physics problem, such as airplanes or robots, with various scripts for applying forces and with joints. The most useful way to utilize a Rigidbody is in collaboration with some of the primitive colliders built into Unity, such as BoxCollider and SphereCollider.

Here are two things to remember about Rigidbody:

- In your object's hierarchy, you must never have a child and its parent with Rigidbody components at the same time
- It is not recommended to scale a Rigidbody's parent object

One of the most important and fundamental components of physics in Unity is the Rigidbody component. This component activates physics calculations on the attached object. If you need your object to react to collisions (for example, billiard balls colliding with each other and scattering in different directions), you must also attach a Collider component to your GameObject. If you have attached a Rigidbody component to your object, the object will be moved by the physics engine, and I recommend that you do not move it by changing its position or rotation in the Transform component.
If you need to move your object, you should apply the various forces acting on it, so that the Unity physics engine assumes all responsibility for calculating collisions and moving dynamic objects. In some situations, however, an object needs a Rigidbody component but must be moved only by changing the position or rotation properties of its Transform component; that is, the Rigidbody should not calculate the object's collisions and motion physics. Your object will instead be moved by your script or, for example, by a running animation. To achieve this, you just activate the Rigidbody's IsKinematic property.

Sometimes a combination of these two modes is required, with IsKinematic turned on at some times and off at others. You can create a symbiosis of the two modes by changing the IsKinematic parameter directly in your code or in your animation (see the short sketch at the end of this section). However, changing the IsKinematic property very often from your code or your animation can cause performance overhead, so use it carefully and only when you really need it.

A kinematic Rigidbody object is defined by the IsKinematic toggle option. If a Rigidbody is kinematic, the object will not be affected by collisions, gravity, or forces. There is a Rigidbody component for the 3D physics engine and an analogous Rigidbody2D for the 2D physics engine. A kinematic Rigidbody can interact with other non-kinematic Rigidbodies. When using kinematic Rigidbodies, you move them by setting the position and rotation values of the Transform component from your scripts or animations. When there is a collision between a kinematic and a non-kinematic Rigidbody, the kinematic object will properly wake up the non-kinematic Rigidbody. Furthermore, the first Rigidbody will apply friction to the second if the second object rests on top of the first.

Let's list some possible usage examples of kinematic Rigidbodies:

- There are situations when you need your objects to be under physics management, but at times also controlled explicitly from your scripts or animations. As an example, you can attach Rigidbodies to the bones of your animated character and connect them with joints in order to use the entity as a ragdoll. While you are controlling your character through Unity's animation system, you should enable the IsKinematic checkbox. If you then need your hero to be affected by Unity's built-in physics engine, for example when the hero is hit, you should disable the IsKinematic checkbox.
- If you need a moving object that can push other objects, but is not itself pushed by them. For a moving platform on which you need to place some Rigidbody objects, you ought to enable the IsKinematic checkbox rather than simply attaching a collider without a Rigidbody.
- You may need to enable the IsKinematic property on an animated Rigidbody object that has a genuine Rigidbody follower attached through one of the available joints.
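Here is a small sketch contrasting the two modes discussed above, driving a Rigidbody with forces versus moving it kinematically; the force value and speed are illustrative:

using UnityEngine;

public class MoverExample : MonoBehaviour
{
    public bool driveWithPhysics = true;
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        if (driveWithPhysics)
        {
            // Non-kinematic: let the physics engine integrate the motion;
            // never set transform.position directly in this mode.
            rb.isKinematic = false;
            rb.AddForce(Vector3.forward * 10f);
        }
        else
        {
            // Kinematic: move the body explicitly; collisions will not
            // push it around, but it can still wake and push other bodies.
            rb.isKinematic = true;
            rb.MovePosition(rb.position + Vector3.forward * Time.fixedDeltaTime);
        }
    }
}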
Earlier, I mentioned the collider; now is the time to discuss this component in more detail. For Unity's physics engine to calculate collisions, you must specify the geometric shape of your object by attaching a Collider component. In most cases, the collider does not have to be the same shape as your mesh with its many polygons. It is therefore desirable to use simple colliders, which will significantly improve your performance; with more complex geometric shapes you risk significantly increasing the computing time for physics collisions. Simple colliders in Unity are known as primitive colliders: BoxCollider, BoxCollider2D, SphereCollider, CircleCollider2D, and CapsuleCollider. Nothing stops you from combining different primitive colliders to create a more realistic geometric shape that the physics engine can handle very fast compared to a MeshCollider. Therefore, to accelerate your performance, you should use primitive colliders wherever possible. You can also attach different primitive colliders to child objects; they will change position and rotation along with the parent's Transform component. The Rigidbody component, however, must be attached only to the root GameObject in your entity's hierarchy.

Unity provides a MeshCollider component for 3D physics and a PolygonCollider2D component for 2D physics. The MeshCollider component uses your object's mesh as its geometric shape. A PolygonCollider2D can be edited directly in Unity to create any 2D geometry for your 2D physics computations. For collisions between different mesh colliders to register, you must enable the Convex property. You will certainly sacrifice performance for more accurate physics calculations, but if you strike the right balance between quality and performance, you can still achieve good results with a proper approach.

Objects are static when they have a Collider component without a Rigidbody component. You should not move or rotate static objects by changing properties in their Transform component, because doing so leaves a heavy imprint on your performance: the physics engine must recalculate many polygons of various objects for correct collisions and ray casts. Dynamic objects are those that have a Rigidbody component. Static objects (with a Collider component and without a Rigidbody component) can interact with dynamic objects (with Collider and Rigidbody components), but unlike dynamic objects, they will not be moved by collisions.

Rigidbodies can also sleep in order to increase performance. Unity provides the ability to control sleep on a Rigidbody directly from code, using the following functions:

- Rigidbody.IsSleeping()
- Rigidbody.Sleep()
- Rigidbody.WakeUp()

Two related variables are defined in the physics manager, which you can open from the Unity menu via Edit | Project Settings | Physics:

- Rigidbody.sleepVelocity: The default value is 0.14. This is the lower limit of linear velocity (from zero to infinity) below which objects will sleep.
- Rigidbody.sleepAngularVelocity: The default value is 0.14. This is the lower limit of angular velocity (from zero to infinity) below which objects will sleep.

Rigidbodies awaken when:

- Another Rigidbody collides with the sleeping Rigidbody
- Another Rigidbody connected through a joint moves
- A property of the Rigidbody is modified
- Force vectors are added

A kinematic Rigidbody can wake other sleeping Rigidbodies, while static objects (with a Collider component and without a Rigidbody component) cannot wake your sleeping Rigidbodies. The PhysX physics engine integrated into Unity works well on mobile devices, but mobile devices certainly have far fewer resources than powerful desktops.
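Since sleeping bodies are one of the cheapest performance wins available, here is a quick sketch of the sleep API listed above:

using UnityEngine;

public class SleepMonitor : MonoBehaviour
{
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void Update()
    {
        // Check whether the physics engine has put this body to sleep
        if (rb.IsSleeping())
        {
            Debug.Log(name + " is sleeping (no physics cost right now)");
        }
    }

    // Example helpers: force the body to sleep, or wake it, from a script
    public void ForceSleep() { rb.Sleep(); }
    public void ForceWake()  { rb.WakeUp(); }
}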
Let's look at a few points for optimizing the physics engine in Unity:

First of all, note that you can adjust the Fixed Timestep parameter in the time manager in order to control the cost of physics updates. Decreasing the value increases the quality and accuracy of physics in your game or application, but you pay for it in processing time; this can greatly reduce your throughput, or in other words, increase CPU overhead. The Maximum Allowed Timestep indicates how much time will be spent on physics processing in the worst case. The total processing time for physics depends on the awake rigidbodies and colliders in the scene, as well as on the complexity of the colliders.

Unity provides the ability to use physic materials for setting various properties such as friction and elasticity. For example, a piece of ice in your game may have very low or zero friction (the minimum value), while a bouncing ball may have a very high friction value, even one (the maximum value), and also very high elasticity. You should play with the settings of your physic materials for different objects and choose the solution that suits you best and performs best.

Triggers do not require a lot of processing by the physics engine and can greatly help in improving your performance. Triggers are useful in situations where, for example, your game needs areas near lights that turn on automatically in the evening or at night whenever the player is in the trigger zone, in other words, within the geometric shape of its collider, which you can design as you wish. Unity triggers allow you to write three callbacks, which will be called when your object enters the trigger, while your object stays in the trigger, and when the object leaves the trigger. Thus, you can put the necessary instructions in any of these functions, for example, turning the light on when entering the trigger zone or turning it off when exiting it.

It is important to know that in Unity, static objects (objects without a Rigidbody component) will not cause your callbacks to fire when entering the trigger zone if your trigger does not contain a Rigidbody component; in other words, at least one of the two objects must have a Rigidbody component for your callbacks not to be ignored. In the case of two triggers, at least one of the objects must have a Rigidbody component for the callbacks to fire. Remember that when two objects both have Rigidbody and Collider components, and at least one of them is a trigger, then the trigger callbacks will be called and not the collision callbacks. I would also like to point out that your callbacks will be called for each object involved in the collision or trigger zone. You can also directly control whether your collider is a trigger by setting the isTrigger flag to true or false in your code. Of course, you can mix both options to obtain the best performance. All collision callbacks are called only if at least one of the two interacting rigidbodies is non-kinematic.

Summary

This article covered the new Mecanim animation features in Unity 5. You were introduced to the awesome new audio features in Unity 5. We also covered many details that matter for performance in Unity's built-in physics and particle systems.
Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [article]
- Saying Hello to Unity and Android [article]
- Learning NGUI for Unity [article]

Blueprinting Your Infrastructure

Packt
05 Aug 2015
22 min read
In this article written by Gourav Shah, author of the book Ansible Playbook Essentials, we will learn about the following topics:

- The anatomy of a playbook
- What plays are and how to write one
- Hosts inventory and search patterns
- Ansible modules and the batteries-included approach
- Orchestrating infrastructure with Ansible
- Ansible as an orchestrator

(For more resources related to this topic, see here.)

Getting introduced to Ansible

Ansible is a simple, flexible, and extremely powerful tool that gives you the ability to automate common infrastructure tasks, run ad hoc commands, and deploy multitier applications spanning multiple machines. Even though you can use Ansible to launch commands on a number of hosts in parallel, the real power lies in managing those hosts using playbooks.

As systems engineers, the infrastructure we typically need to automate consists of complex multitier applications, each tier of which represents a class of servers, for example, load balancers, web servers, database servers, caching applications, and middleware queues. Since many of these applications have to work in tandem to provide a service, there is topology involved as well. For example, a load balancer would connect to web servers, which in turn read/write to a database and connect to the caching server to fetch in-memory objects. Most of the time, when we launch such application stacks, we need to configure these components in a very specific order.

Here is an example of a very common three-tier web application running a load balancer, a web server, and a database backend:

Ansible lets you translate this diagram into a blueprint, which defines your infrastructure policies. The format used to specify such policies is what playbooks are. Example policies, and the sequence in which they are to be applied, are shown in the following steps:

1. Install, configure, and start the MySQL service on the database servers.
2. Install and configure the web servers that run Nginx with PHP bindings.
3. Deploy a Wordpress application on the web servers and add the respective configurations to Nginx.
4. Start the Nginx service on all web servers after deploying Wordpress.
5. Finally, install, configure, and start the haproxy service on the load balancer hosts. Update the haproxy configurations with the hostnames of all the web servers created earlier.

The following is a sample playbook that translates the infrastructure blueprint into policies enforceable by Ansible:

Plays

A playbook consists of one or more plays, which map groups of hosts to well-defined tasks. The preceding example contains three plays, each to configure one layer in the multitiered web application. Plays also define the order in which tasks are configured. This allows us to orchestrate multitier deployments, for example, configuring the load balancers only after starting the web servers, or performing a two-phase deployment where the first phase only adds the configurations and the second phase starts the services in the desired order.

YAML – the playbook language

As you may have already noticed, the playbook that we wrote previously resembles a text configuration more than a code snippet. This is because the creators of Ansible chose to use the simple, human-readable, and familiar YAML format to blueprint the infrastructure. This adds to Ansible's appeal, as users of this tool need not learn any special programming language to get started. Ansible code is self-explanatory and self-documenting in nature. A quick crash course on YAML should suffice to understand the basic syntax.
Here is what you need to know about YAML to get started with your first playbook:

- The first line of a playbook should begin with "---" (three hyphens), which indicates the beginning of the YAML document.
- Lists in YAML are represented with a hyphen followed by a white space. A playbook contains a list of plays; they are represented with "- ".
- Each play is an associative array, a dictionary, or a map in terms of key-value pairs.
- Indentations are important. All members of a list should be at the same indentation level.
- Each play can contain key-value pairs separated by ":" to denote hosts, variables, roles, tasks, and so on.

Our first playbook

Equipped with the basic rules explained previously, and assuming you have done a quick dive into YAML fundamentals, we will now begin writing our first playbook. Our problem statement includes the following:

1. Create a devops user on all hosts. This user should be part of the devops group.
2. Install the "htop" utility. Htop is an improved version of top, an interactive system process monitor.
3. Add the Nginx repository to the web servers and start it as a service.

Now, we will create our first playbook and save it as simple_playbook.yml, containing the following code:

---
- hosts: all
  remote_user: vagrant
  sudo: yes
  tasks:
  - group:
      name: devops
      state: present
  - name: create devops user with admin privileges
    user:
      name: devops
      comment: "Devops User"
      uid: 2001
      group: devops
  - name: install htop package
    action: apt name=htop state=present update_cache=yes

- hosts: www
  user: vagrant
  sudo: yes
  tasks:
  - name: add official nginx repository
    apt_repository:
      repo: 'deb http://nginx.org/packages/ubuntu/ lucid nginx'
  - name: install nginx web server and ensure its at the latest version
    apt:
      name: nginx
      state: latest
  - name: start nginx service
    service:
      name: nginx
      state: started

Our playbook contains two plays. Each play consists of the following two important parts:

- What to configure: We need to configure a host or group of hosts to run the play against. We also need to include useful connection information, such as which user to connect as, whether to use the sudo command, and so on.
- What to run: This includes the specification of tasks to be run, including which system components to modify and which state they should be in, for example, installed, started, or latest. This could be represented with tasks and, later on, by roles.

Let's now look at each of these briefly.

Creating a host inventory

Before we even start writing our playbook with Ansible, we need to define an inventory of all hosts that need to be configured, and make it available for Ansible to use. Later, we will start running plays against a selection of hosts from this inventory. If you have an existing inventory, such as cobbler, LDAP, or a CMDB software, or wish to pull it from a cloud provider such as ec2, it can be pulled into Ansible using the concept of a dynamic inventory. For text-based local inventory, the default location is /etc/ansible/hosts. For our learning environment, however, we will create a custom inventory file, customhosts, in our working directory, the contents of which are shown as follows.
You are free to create your own inventory file:

#customhosts
#inventory configs for my cluster
[db]
192.168.61.11 ansible_ssh_user=vagrant

[www]
www-01.example.com ansible_ssh_user=ubuntu
www-02 ansible_ssh_user=ubuntu

[lb]
lb0.example.com

Now, when our playbook maps a play to the www group (hosts: www), the hosts in that group will be configured. The all keyword will match all hosts in the inventory.

The following are the guidelines for creating inventory files:

- Inventory files follow INI-style configurations, which essentially include configuration blocks that start with host group/class names enclosed in "[ ]". This allows selective execution on classes of systems, for example, [namenodes].
- A single host can be part of multiple groups. In such cases, host variables from both groups get merged, and the precedence rules apply. We will discuss variables and precedence in detail later.
- Each group contains a list of hosts and connection details, such as the SSH user to connect as, the SSH port number if non-default, SSH credentials/keys, sudo credentials, and so on. Hostnames can also contain globs, ranges, and more, to make it easy to include multiple hosts of the same type that follow some naming pattern.

After creating an inventory of the hosts, it's a good idea to validate connectivity using Ansible's ping module (for example, ansible -m ping all).

Patterns

In the preceding playbook, the following lines decide which hosts to select to run a specific play:

- hosts: all
- hosts: www

The first line will match all hosts, and the second will match hosts that are part of the www group. Patterns can be any of the following, or combinations of them:

- Group name: namenodes
- Match all: all or *
- Range: namenode[0:100]
- Hostnames/hostname globs: *.example.com, host01.example.com
- Exclusions: namenodes:!secondarynamenodes
- Intersection: namenodes:&zookeeper
- Regular expressions: ~(nn|zk).*.example.org

Tasks

Plays map hosts to tasks. Tasks are a sequence of actions performed against a group of hosts that match the pattern specified in a play. Each play typically contains multiple tasks that are run serially on each machine that matches the pattern. For example, take a look at the following code snippet:

- group:
    name: devops
    state: present
- name: create devops user with admin privileges
  user:
    name: devops
    comment: "Devops User"
    uid: 2001
    group: devops

In the preceding example, we have two tasks. The first one creates a group, and the second creates a user and adds it to the group created earlier. If you notice, there is an additional line in the second task, which starts with name:. While writing tasks, it's good to provide a name with a human-readable description of what the task is going to achieve. If not, the action string will be printed instead.

Each action in a task list can be declared by specifying the following:

- The name of the module
- Optionally, the state of the system component being managed
- The optional parameters

With newer versions of Ansible (0.8 onwards), writing an action keyword is now optional. We can directly provide the name of the module instead. So, both of these lines will have a similar action, that is, installing a package with the apt module:

action: apt name=htop state=present update_cache=yes
apt: name=nginx state=latest

Ansible stands out from other configuration management tools with its batteries-included approach. These batteries are "modules."
It's important to understand what modules are before we proceed.

Modules

Modules are the encapsulated procedures that are responsible for managing specific system components on specific platforms. Consider the following examples:

- The apt module for Debian and the yum module for RedHat help manage system packages
- The user module is responsible for adding, removing, or modifying users on the system
- The service module will start/stop system services

Modules abstract the actual implementation from users. They expose a declarative syntax that accepts a list of the parameters and states of the system components being managed. All this can be declared using the human-readable YAML syntax, with key-value pairs. In terms of functionality, modules resemble providers, for those of you who are familiar with Chef/Puppet software. Instead of writing procedures to create a user, with Ansible we declare which state our component should be in, that is, which user to create, its state, and its characteristics, such as UID, group, shell, and so on. The actual procedures are inherently known to Ansible via modules, and are executed in the background.

The Command and Shell modules are special ones. They neither take key-value pairs as parameters, nor are they idempotent.

Ansible comes preinstalled with a library of modules, ranging from ones that manage basic system resources to more sophisticated ones that send notifications, perform cloud integrations, and so on. If you want to provision an ec2 instance, create a database on a remote PostgreSQL server, and get notifications on IRC, then Ansible has a module for it. Isn't this amazing? There is no need to worry about finding an external plugin, or struggling to integrate with cloud providers, and so on. To find a list of available modules, you can refer to the Ansible documentation at http://docs.ansible.com/list_of_all_modules.html.

Ansible is extendible too. If you do not find a module that does the job for you, it's easy to write one, and it doesn't have to be in Python; a module can be written for Ansible in the language of your choice. This is discussed in detail at http://docs.ansible.com/developing_modules.html.

The modules and idempotence

Idempotence is an important characteristic of a module: an idempotent operation can be applied to your system multiple times and will return deterministic results. Modules have built-in intelligence. For instance, suppose we have a task that uses the apt module to install Nginx and ensure that it's up to date. Here is what happens if you run it multiple times:

- Every time it is run, the apt module compares what has been declared in the playbook with the current state of that package on the system.
- The first time it runs, Ansible determines that Nginx is not installed, and goes ahead with the installation.
- On every consequent run, it skips the installation part, unless there is a new version of the package available in the upstream repositories.

This allows executing the same task multiple times without resulting in an error state. Most of the Ansible modules are idempotent, except for the command and shell modules, where users have to ensure idempotence themselves.

Running the playbook

Ansible comes with the ansible-playbook command to launch a playbook with.
Let's now run the plays we created:

$ ansible-playbook simple_playbook.yml -i customhosts

Here is what happens when you run the preceding command:

- ansible-playbook is the command that takes the playbook as an argument (simple_playbook.yml) and runs the plays against the hosts
- simple_playbook.yml contains the two plays that we created: one for common tasks, and the other for installing Nginx
- customhosts is our hosts inventory, which lets Ansible know which hosts, or groups of hosts, to call the plays against

Launching the preceding command will start calling the plays, orchestrating them in the sequence that we described in the playbook. Here is the output of the preceding command:

Let's now analyze what happened:

- Ansible reads the playbook specified as an argument to the ansible-playbook command and starts executing the plays in serial order.
- The first play that we declared runs against the "all" hosts. The all keyword is a special pattern that will match all hosts (similar to *). So, the tasks in the first play will be executed on all hosts in the inventory we passed as an argument.
- Before running any of the tasks, Ansible gathers information about the systems that it is going to configure. This information is collected in the form of facts.
- The first play includes the creation of the devops group and user, and the installation of the htop package. Since we have three hosts in our inventory, we see one line per host being printed, which indicates whether there was a change in the state of the entity being managed. If the state was not changed, "ok" will be printed.
- Ansible then moves to the next play. This is executed only on one host, as we have specified "hosts: www" in our play, and our inventory contains a single host in the group "www".
- During the second play, the Nginx repository is added, the package is installed, and the service is started.
- Finally, Ansible prints the summary of the playbook run in the "PLAY RECAP" section. It indicates how many modifications were made, whether any of the hosts were unreachable, and whether execution failed on any of the systems.

What if a host is unresponsive, or fails to run tasks? Ansible has built-in intelligence that will identify such issues and take the failed host out of rotation. It will not affect the execution on the other hosts.

Orchestrating infrastructure with Ansible

Orchestration can mean different things at different times when used in different scenarios. The following are some orchestration scenarios:

- Running ad hoc commands in parallel on a group of hosts, for example, using a for loop to walk over a group of web servers to restart the Apache service. This is the crudest form of orchestration.
- Invoking an orchestration engine to launch another configuration management tool to enforce correct ordering.
- Configuring a multitier application infrastructure in a certain order, with the ability to have fine-grained control over each step, and the flexibility to move back and forth while configuring multiple components. For example, installing the database, setting up the web server, coming back to the database, creating a schema, going to the web servers to start services, and more.

Most real-world scenarios are similar to the last one. They involve multitier application stacks and more than one environment, where it's important to bring up and update nodes in a certain order, and in a coordinated way. It's also useful to actually test that the application is up and running before moving on to the next node.
The workflow to set up the stack for the first time versus pushing updates can be different. There can be times when you would not want to update all the servers at once, but do them in batches so that downtime is avoided.

Ansible as an orchestrator

When it comes to orchestration of any sort, Ansible really shines over other tools. Of course, as the creators of Ansible would say, it's more than a configuration management tool, which is true. Ansible can find a place for itself in any of the orchestration scenarios discussed earlier. It was designed to manage complex multitier deployments. Even if your infrastructure is automated with other configuration management tools, you can consider using Ansible to orchestrate them. Let's discuss the specific features that Ansible ships with, which are useful for orchestration.

Multiple playbooks and ordering

Unlike most other configuration management systems, Ansible supports running different playbooks at different times to configure or manage the same infrastructure. You can create one playbook to set up the application stack for the first time, and another to push updates over time in a certain manner. Another property of the playbook is that it can contain more than one play, which allows the separation of groups of hosts for each tier in the application stack, and configures them at the same time.

Pre-tasks and post-tasks

We have used pre-tasks and post-tasks earlier, and they are very relevant while orchestrating, as they allow us to execute a task or run validations before and after running a play. Let's use the example of updating web servers that are registered with a load balancer. Using pre-tasks, a web server can be taken out of the load balancer, then the role is applied to the web servers to push updates, followed by post-tasks that register the web server back with the load balancer. Moreover, if these servers are being monitored by Nagios, alerts can be disabled during the update process and automatically enabled again using pre-tasks and post-tasks. This can avoid the noise that the monitoring tool may generate in the form of alerts.

Delegation

If you would like tasks to be selectively run on a certain class of hosts, especially ones outside the current play, the delegation feature of Ansible can come in handy. This is relevant to the scenarios discussed previously and is commonly used with pre-tasks and post-tasks. For example, before updating a web server, it needs to be deregistered from the load balancer. Now, this task should be run on the load balancer, which is not part of the play. This dilemma can be solved by using the delegation feature. With pre-tasks, a script can be launched on the load balancer using the delegate_to keyword, which does the deregistering part as follows:

- name: deregister web server from lb
  shell: < script to run on lb host >
  delegate_to: lb

If there is more than one load balancer, an inventory group can be iterated over as follows:

- name: deregister web server from lb
  shell: < script to run on lb host >
  delegate_to: "{{ item }}"
  with_items: groups.lb

Rolling updates

This is also called batch updates or zero-downtime updates. Let's assume that we have 100 web servers that need to be updated. If we define these in an inventory and launch a playbook against them, Ansible will start updating all the hosts in parallel. This can cause downtime. To avoid complete downtime and achieve a seamless update, it would make sense to update them in batches, for example, 20 at a time.
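Pulling these features together, a zero-downtime update play might combine pre-tasks, post-tasks, delegation, and batching. The helper script paths below are hypothetical placeholders, and the serial keyword used for batching is explained next:

# An illustrative sketch of a rolling-update play; the scripts are hypothetical
- hosts: www
  remote_user: vagrant
  sudo: yes
  serial: 20                     # update 20 hosts at a time
  pre_tasks:
    - name: take web server out of the load balancer
      # hypothetical helper script on the lb host
      shell: /usr/local/bin/deregister.sh {{ inventory_hostname }}
      delegate_to: "{{ item }}"
      with_items: groups.lb
  roles:
    - nginx
  post_tasks:
    - name: put web server back into the load balancer
      # hypothetical helper script on the lb host
      shell: /usr/local/bin/register.sh {{ inventory_hostname }}
      delegate_to: "{{ item }}"
      with_items: groups.lb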
While running a playbook, the batch size can be specified by using the serial keyword in the play. Let's take a look at the following code snippet:

- hosts: www
  remote_user: vagrant
  sudo: yes
  serial: 20

Tests

While orchestrating, it's essential not only to configure the applications in order, but also to ensure that they are actually started and functioning as expected. Ansible modules, such as wait_for and uri, help you build that testing into the playbooks, for example:

- name: wait for mysql to be up
  wait_for: host=db.example.org port=3306 state=started

- name: check if a uri returns content
  uri: url=http://{{ inventory_hostname }}/api
  register: apicheck

The wait_for module can additionally be used to test the existence of a file. It's also useful when you would like to wait until a service is available before proceeding.

Tags

Ansible plays map roles to specific hosts. While the plays are run, the entire logic called from the main task is executed. While orchestrating, we may need to run only a part of the tasks, based on the phase that we want to bring the infrastructure into. One example is a ZooKeeper cluster, where it's important to bring up all the nodes in the cluster at the same time, or within a gap of a few seconds. Ansible can orchestrate this easily with a two-phase execution. In the first phase, you can install and configure the application on all nodes, but not start it. The second phase involves starting the application on all nodes almost simultaneously. This can be achieved by tagging individual tasks, with tags such as install, configure, and service.

While running a playbook, all tasks with a specific tag can be called using --tags as follows:

$ ansible-playbook -i customhosts site.yml --tags install

Tags can be applied not only to tasks, but also to roles, as follows:

{ role: nginx, when: ansible_os_family == 'Debian', tags: 'www' }

If a specific task needs to be executed always, even when filtered with a tag, use a special tag called always. This will make the task execute unless an overriding option, such as --skip-tags always, is used.

Patterns and limits

Limits can be used to run tasks on a subset of hosts, which are filtered by patterns. For example, the following command would run tasks only on hosts that are part of the db group:

$ ansible-playbook -i customhosts site.yml --limit db

Patterns usually contain a group of hosts to include or exclude. A combination of more than one pattern can be specified as follows:

$ ansible-playbook -i customhosts site.yml --limit db,lb

A colon can be used as a separator to filter hosts further. The following command would run tasks on all hosts except for the ones that belong to the groups www and db:

$ ansible-playbook -i customhosts site.yml --limit 'all:!www:!db'

Note that this pattern usually needs to be enclosed in quotes. Here, we used the all group, which matches all hosts in the inventory and can be replaced with *. That was followed by ! to exclude hosts in the www and db groups. The output of this command shows that the plays named db and www were skipped, as no hosts matched them due to the filter we used previously.

Let's now see these orchestration features in action. We will begin by tagging the role and doing the multiphase execution, followed by writing a new playbook to manage updates to the WordPress application.

Review questions

Do you think you've understood the article well enough?
Try answering the following questions to test your understanding:

What is idempotence when it comes to modules?
What is the hosts inventory and why is it required?
Playbooks map ___ to ___ (fill in the blanks)
What types of patterns can you use while selecting a list of hosts to run plays against?
Where is the actual procedure to execute an action on a specific platform defined?
Why is it said that Ansible comes with batteries included?

Summary

In this article, you learned about what Ansible playbooks are, what components they are made up of, and how to blueprint your infrastructure with them. We also did a primer on YAML—the language used to create plays. You learned how plays map tasks to hosts, how to create a host inventory, how to filter hosts with patterns, and how to use modules to perform actions on our systems. We then created a simple playbook as a proof of concept. We also learned about orchestration, about using Ansible as an orchestrator, and about the different tasks Ansible can perform in that role.

Resources for Article:

Further resources on this subject:

Advanced Playbooks [article]
Ansible – An Introduction [article]
Getting Started with Ansible [article]
Sentiment Analysis of Twitter data, Part 2

Janu Verma
03 Aug 2015
6 min read
Sentiment Analysis aims to determine how a certain person or group reacts to a specific topic. Traditionally, we would run surveys to gather data and do statistical analysis. With Twitter, it works by extracting tweets containing references to the desired topic, computing the sentiment polarity and strength of each tweet, and then aggregating the results for all such tweets. Companies use this to gather public opinion on their products and services, and to make data-informed decisions. In Part 1 we explored what sentiment analysis is and why it is effective. We also looked at the two main methods used: lexical and machine learning. Now, in this Part 2 post, we will examine some actual examples of using sentiment analysis. Let's start by examining the AFINN model.

AFINN Model

In the AFINN model, the authors have computed sentiment scores for a list of words relevant to microblogging. The sentiment of a tweet is computed from the sentiment scores of the terms in the tweet: it is defined to be the sum of the sentiment scores of each term in the tweet. The AFINN-111 dictionary contains 2477 English words rated for valence with an integer value between -5 and 5. The words were manually labelled by Finn Årup Nielsen in 2009-2010. The list has lots of words/phrases from Internet lingo, such as 'wtf', 'wow', 'wowow', 'lol', 'lmao', and 'dipshit'. Think of the AFINN list as more Urban Dictionary than Oxford dictionary. Some of the words are grammatically different versions of the same stem; for example, 'favorite' and 'favorites' are listed as two different words with different valence scores. In the definition of the sentiment of a tweet, words in the tweet that are not in the AFINN list are assumed to have a sentiment score of zero. Implementations of the AFINN model can be found here.

Naive Bayes Classifier

The Naive Bayes classifier can be trained on a corpus of labeled (positive and negative) tweets and then employed to assign polarity to a new tweet. The features used in this model are the words with their frequencies in the tweet strings. You may want to keep or remove URLs, emoticons, and short tokens, depending on the application. This classifier essentially computes the probability of a tweet being positive or negative. One first computes the probability of a word being present in a positive or negative tweet; this can be easily estimated from the training data:

Prob(word in +ve tweet) = the fraction of positive tweets in the training data that contain this word

The probability for negative tweets is computed similarly. A tweet contains many words, and the probability of a set of words appearing in a positive tweet is defined as the product of the probabilities for each word. This is the naive (independence) assumption; without it, this computation would not be easy. Using the pre-estimated values of these probabilities, you can compute the probability of a tweet being positive or negative using Bayes' theorem. Whenever a new tweet is fed to the classifier, it will predict the polarity of the tweet based on the probability of its having that polarity. An implementation of a Naive Bayes classifier for classifying spam and non-spam messages can be found here; the same script can be used for classifying positive and negative tweets. In most cases, we want to include a third category of neutral tweets, that is, tweets that have zero polarity with respect to the given topic.
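To ground the lexical approach in code, here is a minimal AFINN-style scorer in Python. It assumes you have the AFINN-111.txt word list (tab-separated term/score pairs) on disk; the file path, the tokenization, and the sign-based neutral cutoff are illustrative choices, not part of the AFINN specification:

# A minimal AFINN-style tweet scorer; assumes AFINN-111.txt (term<TAB>score
# pairs) is in the working directory. An illustrative sketch only.
import re

def load_afinn(path="AFINN-111.txt"):
    scores = {}
    with open(path) as f:
        for line in f:
            term, score = line.strip().split("\t")
            scores[term] = int(score)
    return scores

def tweet_sentiment(tweet, scores):
    # Words absent from the AFINN list contribute a score of zero.
    words = re.findall(r"[a-z']+", tweet.lower())
    return sum(scores.get(word, 0) for word in words)

def polarity(score):
    # Sign of the aggregate score gives a crude three-way classification.
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

if __name__ == "__main__":
    afinn = load_afinn()
    s = tweet_sentiment("wow, this movie is amazing! lol", afinn)
    print(s, polarity(s))

Note that a tokenizer this simple misses the multi-word phrases present in the AFINN list (such as "can't stand"); a production scorer would match those first.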
The methods described above were chosen for simplicity; several other methods in both categories are prevalent today. Many companies using sentiment analysis employ lexical methods, where they create dictionaries based on their trade algorithms and the domain of the application. For machine learning based analysis, instead of Naive Bayes, one can use more sophisticated algorithms such as SVMs.

Challenges

Sentiment analysis is very useful, but there are many challenges that need to be overcome to achieve good results. The very first step in opinion mining, something I swept under the rug so far, is that we have to identify tweets that are relevant to our topic. Tweets containing the given word can be a decent choice, although not a perfect one. Once we have identified the tweets to be analyzed, we need to make sure that the tweets do contain sentiment. Neutral tweets can be a part of our model, but only polarized tweets tell us something subjective. Even if a tweet is polarized, we still need to make sure that the sentiment in the tweet is related to the topic we are studying. For example, suppose we are studying sentiment related to the movie Mission Impossible; then the tweet "Tom Cruise in Mission Impossible is pathetic!" has a negative sentiment, but it is directed at the actor rather than the movie. (This is not a perfect example, as the sentiment towards the actor and towards the movie are related.) The main challenge in sentiment analysis using lexical methods is to build a dictionary that contains words/phrases and their sentiment scores. It is very hard to do so in full generality, and often the best idea is to choose a subject and build a list for that. Thus, sentiment analysis is highly domain centric; the techniques developed for stocks may not work for movies. To solve these problems, you need expertise in NLP and computational linguistics. They correspond to entity extraction, NER, and entity pattern extraction in NLP terminology.

Beyond Twitter

Facebook performed an experiment to measure the effect of removing positive (or negative) posts from people's news feeds on how positive (or negative) their own posts were in the days after these changes were made. They found that the people from whose news feeds negative posts were removed produced a larger percentage of positive words, as well as a smaller percentage of negative words, in their posts. The group of people from whose news feeds positive posts were removed showed the opposite tendency. The procedure and results of this experiment were published in a paper in the Proceedings of the National Academy of Sciences. Though I don't subscribe to the idea of using users as subjects in a psychological experiment without their knowledge, this is a cool application of the sentiment analysis subject area.

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning, and he leverages tools from these areas to answer questions in biology. Janu holds a Masters in Theoretical Physics from the University of Cambridge in the UK, and dropped out of the mathematics PhD program (after 3 years) at Kansas State University.
Directives and Services of Ionic

Packt
28 Jul 2015
18 min read
In this article by Arvind Ravulavaru, author of Learning Ionic, we are going to take a look at Ionic directives and services, which provide reusable components and functionality that can help us develop applications even faster. (For more resources related to this topic, see here.)

Ionic Platform service

The first service we are going to deal with is the Ionic Platform service ($ionicPlatform). This service provides device-level hooks that you can tap into to better control your application behavior. We will start off with the very basic ready method. This method is fired once the device is ready, or immediately if the device is already ready. All the Cordova-related code needs to be written inside the $ionicPlatform.ready method, as this is the point in the app life cycle where all the plugins are initialized and ready to be used. To try out Ionic Platform services, we will be scaffolding a blank app and then working with the services. Before we scaffold the blank app, we will create a folder named chapter5. Inside that folder, we will run the following command:

ionic start -a "Example 16" -i app.example.sixteen example16 blank

Once the app is scaffolded, if you open www/js/app.js, you should find a section such as:

.run(function($ionicPlatform) {
  $ionicPlatform.ready(function() {
    // Hide the accessory bar by default (remove this to show the accessory bar above the keyboard
    // for form inputs)
    if(window.cordova && window.cordova.plugins.Keyboard) {
      cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
    }
    if(window.StatusBar) {
      StatusBar.styleDefault();
    }
  });
})

You can see that the $ionicPlatform service is injected as a dependency into the run method. It is highly recommended to use the $ionicPlatform.ready method inside other AngularJS components, such as controllers and directives, where you are planning to interact with Cordova plugins. In the preceding run method, note that we are hiding the keyboard accessory bar by setting:

cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);

You can override this by setting the value to false. Also, do notice the if condition before the statement. It is always better to check for Cordova-related variables before using them.

The $ionicPlatform service comes with a handy method to detect the hardware back button event. A few (Android) devices have a hardware back button, and if you want to listen to the back button pressed event, you will need to hook into the onHardwareBackButton method on the $ionicPlatform service:

var hardwareBackButtonHandler = function() {
  console.log('Hardware back button pressed');
  // do more interesting things here
}

$ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);

This event needs to be registered inside the $ionicPlatform.ready method, preferably inside AngularJS's run method. The hardwareBackButtonHandler callback will be called whenever the user presses the device's back button. A simple piece of functionality you can implement with this handler is to ask users whether they really want to quit your app, making sure that they have not accidentally hit the back button. Sometimes this may be annoying. Thus, you can provide a setting in your app whereby the user selects whether he/she wants to be alerted when trying to quit. Based on that, you can either defer registering the event or unsubscribe from it.
The code for the preceding logic will look something like this:

.run(function($ionicPlatform) {
    $ionicPlatform.ready(function() {
        var alertOnBackPress = localStorage.getItem('alertOnBackPress');

        var hardwareBackButtonHandler = function() {
            console.log('Hardware back button pressed');
            // do more interesting things here
        }

        function manageBackPressEvent(alertOnBackPress) {
            if (alertOnBackPress) {
                $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);
            } else {
                $ionicPlatform.offHardwareBackButton(hardwareBackButtonHandler);
            }
        }

        // when the app boots up
        manageBackPressEvent(alertOnBackPress);

        // later in the code/controller when you let
        // the user update the setting
        function updateSettings(alertOnBackPressModified) {
            localStorage.setItem('alertOnBackPress', alertOnBackPressModified);
            manageBackPressEvent(alertOnBackPressModified);
        }

    });
})

In the preceding code snippet, we are looking in localStorage for the value of alertOnBackPress. Next, we create a handler named hardwareBackButtonHandler, which will be triggered when the back button is pressed. Finally, a utility method named manageBackPressEvent() takes in a Boolean value that decides whether to register or de-register the callback for HardwareBackButton. With this set up, when the app starts we call the manageBackPressEvent method with the value from localStorage. If the value is present and is equal to true, we register the event; otherwise, we do not. Later on, we can have a settings controller that lets users change this setting. When the user changes the state of alertOnBackPress, we call the updateSettings method, passing in whether the user wants to be alerted or not. The updateSettings method updates localStorage with this setting and calls the manageBackPressEvent method, which takes care of registering or de-registering the callback for the hardware back pressed event. This is one powerful example that showcases the power of AngularJS when combined with Cordova to provide APIs to manage your application easily. This example may seem a bit complex at first, but most of the services that you are going to consume will be quite similar. There will be events that you need to register and de-register conditionally, based on preferences. So, I thought this would be a good place to share an example such as this, assuming that this concept will grow on you.

registerBackButtonAction

The $ionicPlatform service also provides a method named registerBackButtonAction. This is another API that lets you control the way your application behaves when the back button is pressed. By default, pressing the back button executes one task. For example, if you have a multi-page application and you are navigating from page one to page two and then you press the back button, you will be taken back to page one. In another scenario, when a user navigates from page one to page two and page two displays a pop-up dialog when it loads, pressing the back button here will only hide the pop-up dialog but will not navigate to page one. The registerBackButtonAction method provides a hook to override this behavior.
The registerBackButtonAction method takes the following three arguments:

callback: This is the method to be called when the event is fired
priority: This is a number that indicates the priority of the listener
actionId (optional): This is the ID assigned to the action

By default, the priorities are as follows:

Previous view = 100
Close side menu = 150
Dismiss modal = 200
Close action sheet = 300
Dismiss popup = 400
Dismiss loading overlay = 500

So, if you want certain functionality/custom code to override the default behavior of the back button, you will write something like this:

var cancelRegisterBackButtonAction = $ionicPlatform.registerBackButtonAction(backButtonCustomHandler, 201);

This listener will override (take precedence over) all the default listeners below the priority value of 201 (that is, dismiss modal, close side menu, and previous view), but not the ones above that value. When the $ionicPlatform.registerBackButtonAction method executes, it returns a function. We have assigned that function to the cancelRegisterBackButtonAction variable. Executing cancelRegisterBackButtonAction de-registers the registerBackButtonAction listener.

The on method

Apart from the preceding handy methods, $ionicPlatform has a generic on method that can be used to listen to all of Cordova's events (https://cordova.apache.org/docs/en/edge/cordova_events_events.md.html). You can set up hooks for application pause, application resume, volumedownbutton, volumeupbutton, and so on, and execute custom functionality accordingly. You can set up these listeners inside the $ionicPlatform.ready method as follows:

var cancelPause = $ionicPlatform.on('pause', function() {
    console.log('App is sent to background');
    // do stuff to save power
});

var cancelResume = $ionicPlatform.on('resume', function() {
    console.log('App is retrieved from background');
    // re-init the app
});

// Supported only in BlackBerry 10 & Android
var cancelVolumeUpButton = $ionicPlatform.on('volumeupbutton', function() {
    console.log('Volume up button pressed');
    // moving a slider up
});

var cancelVolumeDownButton = $ionicPlatform.on('volumedownbutton', function() {
    console.log('Volume down button pressed');
    // moving a slider down
});

The on method returns a function that, when executed, de-registers the event. Now you know how to control your app better when dealing with mobile OS events and hardware keys.

Content

Next, we will take a look at content-related directives. The first is the ion-content directive.

Navigation

The next component we are going to take a look at is the navigation component. The navigation component has a bunch of directives as well as a couple of services. The first directive we are going to take a look at is ion-nav-view. When the app boots up, $stateProvider will look for the default state and then will try to load the corresponding template inside the ion-nav-view.

Tabs and side menu

To understand navigation a bit better, we will explore the tabs directive and the side menu directive. We will scaffold the tabs template and go through the directives related to tabs; run this:

ionic start -a "Example 19" -i app.example.nineteen example19 tabs

Using the cd command, go to the example19 folder and run this:

ionic serve

This will launch the tabs app.
If you open the www/index.html file, you will notice that this template uses ion-nav-bar to manage the header, with ion-nav-back-button inside it. Next, open www/js/app.js and you will find the application states configured:

.state('tab.dash', {
    url: '/dash',
    views: {
      'tab-dash': {
        templateUrl: 'templates/tab-dash.html',
        controller: 'DashCtrl'
      }
    }
  })

Do notice that the views object has a named object: tab-dash. This will be used when we work with the tabs directive. This name will be used to load a given view, when a tab is selected, into the ion-nav-view directive with the name tab-dash. If you open www/templates/tabs.html, you will find the markup for the tabs component:

<ion-tabs class="tabs-icon-top tabs-color-active-positive">

  <!-- Dashboard Tab -->
  <ion-tab title="Status" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
    <ion-nav-view name="tab-dash"></ion-nav-view>
  </ion-tab>

  <!-- Chats Tab -->
  <ion-tab title="Chats" icon-off="ion-ios-chatboxes-outline" icon-on="ion-ios-chatboxes" href="#/tab/chats">
    <ion-nav-view name="tab-chats"></ion-nav-view>
  </ion-tab>

  <!-- Account Tab -->
  <ion-tab title="Account" icon-off="ion-ios-gear-outline" icon-on="ion-ios-gear" href="#/tab/account">
    <ion-nav-view name="tab-account"></ion-nav-view>
  </ion-tab>

</ion-tabs>

The tabs.html file will be loaded before any of the child tabs load, since the tab state is defined as an abstract route. The ion-tab directive is nested inside ion-tabs, and every ion-tab directive has an ion-nav-view directive nested inside it. When a tab is selected, the route with the same name as the name attribute on the ion-nav-view will be loaded inside the corresponding tab. Very neatly structured! You can read more about the tabs directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionTabs/.

Next, we are going to scaffold an app using the side menu template and go through the navigation inside it; run this:

ionic start -a "Example 20" -i app.example.twenty example20 sidemenu

Using the cd command, go to the example20 folder and run this:

ionic serve

This will launch the side menu app. We start off exploring with www/index.html. This file has only the ion-nav-view directive inside the body. Next, we open www/js/app.js. Here, the routes are defined as expected, but one thing to notice is the name of the views for search, browse, and playlists. It is the same—menuContent—for all of them:

.state('app.search', {
    url: "/search",
    views: {
      'menuContent': {
        templateUrl: "templates/search.html"
      }
    }
})

If you open www/templates/menu.html, you will notice the ion-side-menus directive. It has two children: ion-side-menu-content and ion-side-menu. The ion-side-menu-content directive displays the content for each menu item inside the ion-nav-view named menuContent. This is why all the menu items in the state router have the same view name. The ion-side-menu is displayed on the left-hand side of the page. You can set the side attribute on the ion-side-menu to right to show the side menu on the right, or you can have two side menus. Do notice the menu-toggle directive on the button inside ion-nav-buttons. This directive is used to toggle the side menu.
If you want to have the menu on both sides, your menu.html will look as follows:

<ion-side-menus enable-menu-with-back-views="false">
  <ion-side-menu-content>
    <ion-nav-bar class="bar-stable">
      <ion-nav-back-button>
      </ion-nav-back-button>
      <ion-nav-buttons side="left">
        <button class="button button-icon button-clear ion-navicon" menu-toggle="left">
        </button>
      </ion-nav-buttons>
      <ion-nav-buttons side="right">
        <button class="button button-icon button-clear ion-navicon" menu-toggle="right">
        </button>
      </ion-nav-buttons>
    </ion-nav-bar>
    <ion-nav-view name="menuContent"></ion-nav-view>
  </ion-side-menu-content>

  <ion-side-menu side="left">
    <ion-header-bar class="bar-stable">
      <h1 class="title">Left</h1>
    </ion-header-bar>
    <ion-content>
      <ion-list>
        <ion-item menu-close ng-click="login()">
          Login
        </ion-item>
        <ion-item menu-close href="#/app/search">
          Search
        </ion-item>
        <ion-item menu-close href="#/app/browse">
          Browse
        </ion-item>
        <ion-item menu-close href="#/app/playlists">
          Playlists
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-side-menu>

  <ion-side-menu side="right">
    <ion-header-bar class="bar-stable">
      <h1 class="title">Right</h1>
    </ion-header-bar>
    <ion-content>
      <ion-list>
        <ion-item menu-close ng-click="login()">
          Login
        </ion-item>
        <ion-item menu-close href="#/app/search">
          Search
        </ion-item>
        <ion-item menu-close href="#/app/browse">
          Browse
        </ion-item>
        <ion-item menu-close href="#/app/playlists">
          Playlists
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-side-menu>
</ion-side-menus>

You can read more about the side menu directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionSideMenus/. This concludes our journey through the navigation directives and services. Next, we will move to Ionic loading.

Ionic loading

The first service we are going to take a look at is $ionicLoading. This service is highly useful when you want to block a user's interaction with the main page and indicate to the user that there is some activity going on in the background. To test this, we will scaffold a new blank template and implement $ionicLoading; run this:

ionic start -a "Example 21" -i app.example.twentyone example21 blank

Using the cd command, go to the example21 folder and run this:

ionic serve

This will launch the blank template in the browser. We will create an app controller and define the show and hide methods inside it. Open www/js/app.js and add the following code:

.controller('AppCtrl', function($scope, $ionicLoading, $timeout) {

    $scope.showLoadingOverlay = function() {
        $ionicLoading.show({
            template: 'Loading...'
        });
    };
    $scope.hideLoadingOverlay = function() {
        $ionicLoading.hide();
    };

    $scope.toggleOverlay = function() {
        $scope.showLoadingOverlay();

        // wait for 3 seconds and hide the overlay
        $timeout(function() {
            $scope.hideLoadingOverlay();
        }, 3000);
    };

})

We have a function named showLoadingOverlay, which will call $ionicLoading.show(), and a function named hideLoadingOverlay(), which will call $ionicLoading.hide().
We have also created a utility function named toggleOverlay(), which will call showLoadingOverlay() and, after 3 seconds, will call hideLoadingOverlay(). We will update our www/index.html body section as follows:

<body ng-app="starter" ng-controller="AppCtrl">
    <ion-header-bar class="bar-stable">
        <h1 class="title">$ionicLoading service</h1>
    </ion-header-bar>
    <ion-content class="padding">
        <button class="button button-dark" ng-click="toggleOverlay()">
            Toggle Overlay
        </button>
    </ion-content>
</body>

We have a button that calls toggleOverlay(). If you save all the files, head back to the browser, and click on the Toggle Overlay button, you will see the following screenshot:

As you can see, the overlay is shown till the hide method is called on $ionicLoading. You can also move the preceding logic inside a service and reuse it across the app. The service will look like this:

.service('Loading', function($ionicLoading, $timeout) {
    this.show = function() {
        $ionicLoading.show({
            template: 'Loading...'
        });
    };
    this.hide = function() {
        $ionicLoading.hide();
    };

    this.toggle = function() {
        var self = this;
        self.show();

        // wait for 3 seconds and hide the overlay
        $timeout(function() {
            self.hide();
        }, 3000);
    };

})

Now, once you inject the Loading service into your controller or directive, you can use Loading.show(), Loading.hide(), or Loading.toggle(). If you would like to show only a spinner icon instead of text, you can call the $ionicLoading.show method without any options:

$scope.showLoadingOverlay = function() {
    $ionicLoading.show();
};

Then, you will see this:

You can configure the show method further. More information is available at http://ionicframework.com/docs/nightly/api/service/$ionicLoading/. You can also use the $ionicBackdrop service to show just a backdrop. Read more about $ionicBackdrop at http://ionicframework.com/docs/nightly/api/service/$ionicBackdrop/. You can also check out the $ionicModal service at http://ionicframework.com/docs/api/service/$ionicModal/; it is quite similar to the loading service.

Popover and Popup services

Popover is a contextual view that generally appears next to the selected item. This component is used to show contextual information or to show more information about a component. To test this service, we will be scaffolding a new blank app:

ionic start -a "Example 23" -i app.example.twentythree example23 blank

Using the cd command, go to the example23 folder and run this:

ionic serve

This will launch the blank template in the browser. We will add a new controller to the blank project named AppCtrl. We will be adding our controller code in www/js/app.js:

.controller('AppCtrl', function($scope, $ionicPopover) {

    // init the popover
    $ionicPopover.fromTemplateUrl('button-options.html', {
        scope: $scope
    }).then(function(popover) {
        $scope.popover = popover;
    });

    $scope.openPopover = function($event, type) {
        $scope.type = type;
        $scope.popover.show($event);
    };

    $scope.closePopover = function() {
        $scope.popover.hide();
        // if you are navigating away from the page once
        // an option is selected, make sure to call
        // $scope.popover.remove();
    };

});

We are using the $ionicPopover service and setting up a popover from a template named button-options.html.
We are assigning the current controller scope as the scope of the popover. We have two methods on the controller scope that will show and hide the popover. The openPopover method receives two arguments: one is the event, and the second is the type of the button we are clicking (more on this in a moment). Next, we update our www/index.html body section as follows:

<body ng-app="starter" ng-controller="AppCtrl">
    <ion-header-bar class="bar-positive">
        <h1 class="title">Popover Service</h1>
    </ion-header-bar>
    <ion-content class="padding">
        <button class="button button-block button-dark" ng-click="openPopover($event, 'dark')">
            Dark Button
        </button>
        <button class="button button-block button-assertive" ng-click="openPopover($event, 'assertive')">
            Assertive Button
        </button>
        <button class="button button-block button-calm" ng-click="openPopover($event, 'calm')">
            Calm Button
        </button>
    </ion-content>
    <script id="button-options.html" type="text/ng-template">
        <ion-popover-view>
            <ion-header-bar>
                <h1 class="title">{{type}} options</h1>
            </ion-header-bar>
            <ion-content>
                <div class="list">
                    <a href="#" class="item item-icon-left">
                        <i class="icon ion-ionic"></i> Option One
                    </a>
                    <a href="#" class="item item-icon-left">
                        <i class="icon ion-help-buoy"></i> Option Two
                    </a>
                    <a href="#" class="item item-icon-left">
                        <i class="icon ion-hammer"></i> Option Three
                    </a>
                    <a href="#" class="item item-icon-left" ng-click="closePopover()">
                        <i class="icon ion-close"></i> Close
                    </a>
                </div>
            </ion-content>
        </ion-popover-view>
    </script>
</body>

Inside ion-content, we have created three buttons, each themed with a different mood (dark, assertive, and calm). When a user clicks on a button, we show a popover that is specific to that button. For this example, all we are doing is passing in the name of the mood and showing the mood name as the heading in the popover, but you can definitely do more. Do notice that we have wrapped our template content inside ion-popover-view. This takes care of positioning the popover appropriately. The template must be wrapped inside ion-popover-view for the popover to work correctly. When we save all the files and head back to the browser, we will see the three buttons. Depending on the button you click, the heading of the popover changes, but the options remain the same for all of them. Then, when we click anywhere on the page or on the close option, the popover closes. If you are navigating away from the page when an option is selected, make sure to call:

$scope.popover.remove();

You can read more about Popover at http://ionicframework.com/docs/api/controller/ionicPopover/.

Our GitHub organization

With the ever-changing frontend world, keeping up with the latest in the business is quite essential. During the course of the book, Cordova, Ionic, and the Ionic CLI have evolved a lot, and we predict that they will keep evolving until they become stable. So, we have created a GitHub organization named Learning Ionic (https://github.com/learning-ionic), which consists of the code for all the chapters.
You can raise issues and submit pull requests, and we will also try to keep it updated with the latest changes. So, you can always refer back to the GitHub organization for the latest changes.

Summary

In this article, we looked at various Ionic directives and services that help us develop applications easily.

Resources for Article:

Further resources on this subject:

Mailing with Spring Mail [article]
Implementing Membership Roles, Permissions, and Features [article]
Time Travelling with Spring [article]
A Simple Pathfinding Algorithm for a Maze

Packt
23 Jul 2015
10 min read
In this article by Mário Kašuba, author of the book Lua Game Development Cookbook, we look at how maze pathfinding can be used effectively in many types of games, such as side-scrolling platform games or top-down, gauntlet-like games. The point is to find the shortest viable path from one point on the map to another. This can be used for moving NPCs and players as well. (For more resources related to this topic, see here.)

Getting ready

This article will use a simple maze environment to find a path starting at the start point and ending at the exit point. You can either prepare one by yourself or let the computer create one for you. A map will be represented by a 2D-map structure where each cell consists of a cell type and cell connections. The cell type values are as follows:

0 means a wall
1 means an empty cell
2 means the start point
3 means the exit point

Cell connections will use a bitmask value to get information about which cells are connected to the current cell. The following diagram contains cell connection bitmask values with their respective positions:

Now, a quite common problem in programming is how to implement an efficient data structure for 2D maps. Usually, this is done either with a relatively large one-dimensional array or with an array of arrays. All these arrays have a specified static size, so map dimensions are fixed. The problem arises when you use a simple 1D array and you need to change the map size during gameplay, or when the map size should be unlimited. This is where map cell indexing comes into play. Often you can use this formula to compute the cell index from 2D map coordinates:

local index = x + y * map_width
map[index] = value

There's nothing wrong with this approach when the map size is definite. However, changing the map size would invalidate the whole data structure, as the map_width variable would change its value. A solution to this is to use indexing that's independent of the map size. This way you can ensure consistent access to all elements even if you resize the 2D map. You can use some kind of hashing algorithm that packs map cell coordinates into one value that can be used as a unique key. Another way to accomplish this is to use the Cantor pairing function, which, for two input coordinates k1 and k2, is defined as:

cantor(k1, k2) = 0.5 * (k1 + k2) * (k1 + k2 + 1) + k2

Index value distribution is shown in the following diagram:

The Cantor pairing function ensures that there are no key collisions no matter what coordinates you use. What's more, it can be trivially extended to support three or more input coordinates. To illustrate the usage of the Cantor pairing function for more dimensions, its primitive form will be defined as a function cantor(k1, k2), where k1 and k2 are input coordinates. The pairing function for three dimensions will look like this:

local function cantor3D(k1, k2, k3)
  return cantor(cantor(k1, k2), k3)
end

Keep in mind that the Cantor pairing function always returns one integer value. With a higher number of dimensions, you'll soon get very large values in the results. This may pose a problem, because the Lua language can offer 52 bits for integer values. For example, for the 2D coordinates (83114015, 11792250) you'll get the value 0x000FFFFFFFFFFFFF, which can still fit into a 52-bit integer value without rounding errors. Larger coordinates will return inaccurate values, and subsequently you'd get key collisions. Value overflow can be avoided by dividing large maps into smaller ones, where each one uses the full address space that Lua numbers can offer.
You can use another coordinate to identify submaps. This article will use a specialized data structure for the 2D map, with the Cantor pairing function for internal cell indexing. You can use the following code to prepare this type of data structure:

function map2D(defaultValue)
  local t = {}
  -- Cantor pair function
  local function cantorPair(k1, k2)
    return 0.5 * (k1 + k2) * ((k1 + k2) + 1) + k2
  end
  setmetatable(t, {
    __index = function(_, k)
      if type(k)=="table" then
        local i = rawget(t, cantorPair(k[1] or 1, k[2] or 1))
        return i or defaultValue
      end
    end,
    __newindex = function(_, k, v)
      if type(k)=="table" then
        rawset(t, cantorPair(k[1] or 1, k[2] or 1), v)
      else
        rawset(t, k, v)
      end
    end,
  })
  return t
end

The maze generator as well as the pathfinding algorithm will need a stack data structure; a minimal sketch of one is included at the end of this article.

How to do it…

This section is divided into two parts, where each one solves very similar problems from the perspective of the maze generator and the maze solver.

Maze generation

You can either load a maze from a file or generate a random one. The following steps will show you how to generate a unique maze. First, you'll need to grab a maze generator library from the GitHub repository with the following command:

git clone https://github.com/soulik/maze_generator

This maze generator uses the depth-first approach with backtracking. You can use this maze generator in the following steps. First, you'll need to set up maze parameters such as the maze size and the entry and exit points:

local mazeGenerator = require 'maze'
local maze = mazeGenerator {
  width = 50,
  height = 25,
  entry = {x = 2, y = 2},
  exit = {x = 30, y = 4},
  finishOnExit = false,
}

The final step is to iteratively generate the maze map until it's finished or a certain step count is reached. The number of steps should always be one order of magnitude greater than the total number of maze cells, mainly due to backtracking. Note that it's not necessary for each maze to connect the entry and exit points in this case.

for i=1,12500 do
  local result = maze.generate()
  if result == 1 then
    break
  end
end

Now you can access each maze cell with the maze.map variable in the following manner:

local cell = maze.map[{x, y}]
local cellType = cell.type
local cellConnections = cell.connections

Maze solving

This article will show you how to use a modified Trémaux's algorithm, which is based on depth-first search and path marking. This method guarantees finding a path to the exit point if one exists. It relies on using two keys in each step: the current position and the neighbours. The algorithm uses three state variables—the current position, a set of visited cells, and the current path from the starting point:

local currentPosition = {maze.entry.x, maze.entry.y}
local visitedCells = map2D(false)
local path = stack()

The whole maze-solving process will be placed into one loop. This algorithm is always finite, so you can use the infinite while loop.

-- A placeholder for the neighbours function that will be defined later
local neighbours

-- testing function for passable cells
local cellTestFn = function(cell, position)
  return (cell.type >= 1) and (not visitedCells[position])
end

-- include starting point into path
visitedCells[currentPosition] = true
path.push(currentPosition)

while true do
  local currentCell = maze.map[currentPosition]
  -- is current cell an exit point?
  if currentCell and
     (currentCell.type == 3 or currentCell.type == 4) then
    break
  else
    -- have a look around and find viable cells
    local possibleCells = neighbours(currentPosition, cellTestFn)
    if #possibleCells > 0 then
      -- let's try the first available cell
      currentPosition = possibleCells[1]
      visitedCells[currentPosition] = true
      path.push(currentPosition)
    elseif not path.empty() then
      -- get back one step
      currentPosition = path.pop()
    else
      -- there's no solution
      break
    end
  end
end

This fairly simple algorithm uses the neighbours function to obtain a list of cells that haven't been visited yet:

-- A shorthand for direction coordinates
local neighbourLocations = {
  [0] = {0, 1},
  [1] = {1, 0},
  [2] = {0, -1},
  [3] = {-1, 0},
}

local function neighbours(position0, fn)
  local neighbours = {}
  local currentCell = maze.map[position0]
  if type(currentCell)=='table' then
    local connections = currentCell.connections
    for i=0,3 do
      -- is this cell connected?
      if bit.band(connections, 2^i) >= 1 then
        local neighbourLocation = neighbourLocations[i]
        local position1 = {position0[1] + neighbourLocation[1],
          position0[2] + neighbourLocation[2]}
        if (position1[1]>=1 and position1[1] <= maze.width and
            position1[2]>=1 and position1[2] <= maze.height) then
          if type(fn)=="function" then
            if fn(maze.map[position1], position1) then
              table.insert(neighbours, position1)
            end
          else
            table.insert(neighbours, position1)
          end
        end
      end
    end
  end
  return neighbours
end

When this algorithm finishes, a valid path between the entry and exit points is stored in the path variable, represented by the stack data structure. The path variable will contain an empty stack if there's no solution for the maze.

How it works…

This pathfinding algorithm uses two main steps. First, it looks around the current maze cell to find cells that are connected to the current cell with a passage. This results in a list of possible cells that haven't been visited yet. In this case, the algorithm always uses the first available cell from this list. Each step is recorded in the stack structure, so in the end, you can reconstruct the whole path from the exit point to the entry point. If there are no maze cells to go to, it heads back to the previous cell on the stack. The most important part is the neighbours function, which determines where to go from the current point. It uses two input parameters: the current position and a cell testing function. It looks around the current cell in four directions in clockwise order: up, right, down, and left. There must be a passage from the current cell to each surrounding cell; otherwise, it just skips to the next cell. Another check determines whether the cell is within the rectangular maze region. Finally, the cell is passed into the user-defined testing function, which determines whether to include the current cell in the list of usable cells. The maze cell testing function consists of a simple Boolean expression. It returns true if the cell has a correct cell type (not a wall) and hasn't been visited yet. A positive result leads to the inclusion of this cell in the list of usable cells. Note that even if this pathfinding algorithm finds a path to the exit point, it doesn't guarantee that this path is the shortest possible.
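The snippets above rely on a stack() constructor for the path variable, which this excerpt never defines. A minimal Lua sketch providing the push, pop, and empty operations used here might look like the following; the implementation details are an assumption for illustration, not the book's code:

-- A minimal stack with the push/pop/empty operations used by the
-- pathfinding loop above (an illustrative sketch, not the book's code).
local function stack()
  local s = {}
  local items = {}

  function s.push(value)
    items[#items + 1] = value
  end

  function s.pop()
    local value = items[#items]
    items[#items] = nil
    return value
  end

  function s.empty()
    return #items == 0
  end

  return s
end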
Summary

We have learned how pathfinding works in games with a simple maze. With a pathfinding algorithm, you can create intelligent game opponents that won't jump into a lava lake at the first opportunity.

Resources for Article:

Further resources on this subject:

Mesh animation [article]
Getting into the Store [article]
Creating a Direct2D game window class [article]
Exploring and Interacting with Materials using Blueprints

Packt
23 Jul 2015
16 min read
In this article by Brenden Sewell, author of the book Blueprints Visual Scripting for Unreal Engine, we will cover the following topics: Exploring materials Creating our first Blueprint When setting out to develop a game, one of the first steps toward exploring your idea is to build a prototype. Fortunately, Unreal Engine 4 and Blueprints make it easier than ever to quickly get the essential gameplay functionality working so that you can start testing your ideas sooner. To develop some familiarity with the Unreal editor and Blueprints, we will begin by prototyping simple gameplay mechanics using some default assets and a couple of Blueprints. (For more resources related to this topic, see here.) Exploring materials Earlier, we set for ourselves the goal of changing the color of the cylinder when it is hit by a projectile. To do so, we will need to change the actor's material. A material is an asset that can be added to an actor's mesh (which defines the physical shape of the actor) to create its look. You can think of a material as paint applied on top of an actor's mesh or shape. Since an actor's material determines its color, one method of changing the color of an actor is to replace its material with a material of a different color. To do this, we will first be creating a material of our own. It will make an actor appear red. Creating materials We can start by creating a new folder inside the FirstPersonBP directory and calling it Materials. Navigate to the newly created folder and right-click inside the empty space in the content browser, which will generate a new asset creation popup. From here, select Material to create a new material asset. You will be prompted to give the new material a name, which I have chosen to call TargetRed. Material Properties and Blueprint Nodes Double-click on TargetRed to open a new editor tab for editing the material, like this: You are now looking at Material Editor, which shares many features and conventions with Blueprints. The center of this screen is called the grid, and this is where we will place all the objects that will define the logic of our Blueprints. The initial object you see in the center of the grid screen, labeled with the name of the material, is called a node. This node, as seen in the previous screenshot, has a series of input pins that other material nodes can attach to in order to define its properties. To give the material a color, we will need to create a node that will give information about the color to the input labeled Base Color on this node. To do so, right-click on empty space near the node. A popup will appear, with a search box and a long list of expandable options. This shows all the available Blueprint node options that we can add to this Material Blueprint. The search box is context sensitive, so if you start typing the first few letters of a valid node name, you will see the list below shrink to include only those nodes that include those letters in the name. The node we are looking for is called VectorParameter, so we start typing this name in the search box and click on the VectorParameter result to add that node to our grid: A vector parameter in the Material Editor allows us to define a color, which we can then attach to the Base Color input on the tall material definition node. We first need to give the node a color selection. Double-click on the black square in the middle of the node to open Color Picker. 
We want to give our target a bright red color when it is hit, so either drag the center point in the color wheel to the red section of the wheel, or fill in the RGB or Hex values manually. When you have selected the shade of red you want to use, click on OK. You will notice that the black box in your vector parameter node has now turned red. To help ourselves remember what parameter or property of the material our vector parameter will be defining, we should name it color. You can do this by ensuring that you have the vector parameter node selected (indicated by a thin yellow highlight around the node), and then navigating to the Details panel in the editor. Enter a value for Parameter Name, and the node label will change automatically:

The final step is to link our color vector parameter node to the base material node. With Blueprints, you can connect two nodes by clicking and dragging one output pin to another node's input pin. Input pins are located on the left-hand side of a node, while output pins are always located to the right. The thin line that connects two nodes that have been connected in this way is called a wire. For our material, we need to click and drag a wire from the top output pin of the color node to the Base Color input pin of the material node, as shown in the following screenshot:

Adding substance to our material

We can optionally add some polish to our material by taking advantage of some of the other input pins on the material definition node. 3D objects look unrealistic with flat, single-color materials applied, but we can add additional reflectiveness and depth by setting a value for the material's Metallic and Roughness inputs. To do so, right-click in empty grid space and type scalar into the search box. The node we are looking for is called ScalarParameter. Once you have a scalar parameter node, select it, and go to the Details panel. A scalar parameter takes a single float value (a number with decimal values). Set Default Value to 0.1, as we want any additive effects to our material to be subtle. We should also change Parameter Name to Metallic. Finally, we click and drag the output pin from our Metallic node to the Metallic input pin of the material definition node. We want to make an additional connection to the Roughness parameter, so right-click on the Metallic node we just created and select Duplicate. This will generate a copy of that node, without the wire connection. Select this duplicate Metallic node and then change the Parameter Name field in the Details panel to Roughness. We will keep the same default value of 0.1 for this node. Now click and drag the output pin from the Roughness node to the Roughness input pin of the material definition node. The final result of our Material Blueprint should look like what is shown in the following screenshot:

We have now made a shiny red material. It will ensure that our targets stand out when they are hit. Click on the Save button in the top-left corner of the editor to save the asset, and click again on the tab labeled FirstPersonExampleMap to return to your level.
There are multiple ways of creating a Blueprint, but to save a couple of steps, we can create the Blueprint and directly attach it to the cylinder we created in a single click. To do so, make sure you have the CylinderTarget object selected in the Scene Outliner panel, and click on the blue Blueprint/Add Script button at the top of the Details panel. You will then see a path select window. For this project, we will be storing all our Blueprints in the Blueprints folder, inside the FirstPersonBP folder. Since this is the Blueprint for our CylinderTarget actor, leaving the name of the Blueprint as the default, CylinderTarget_Blueprint, is appropriate. CylinderTarget_Blueprint should now appear in your content browser, inside the Blueprints folder. Double-click on this Blueprint to open a new editor tab for the Blueprint. We will now be looking at the Viewport view of our cylinder. From here, we can manipulate some of the default properties of our actor, or add more components, each of which can contain its own logic to make the actor more complex. However, none of that is needed for creating a simple Blueprint attached to the actor directly; all we need is the event graph. To open it, click on the tab labeled Event Graph above the Viewport window. Exploring the Event Graph panel The Event Graph panel should look very familiar, as it shares most of the same visual and functional elements as the Material Editor we used earlier. By default, the event graph opens with three unlinked event nodes that are currently unused. An event refers to some action in the game that acts as a trigger for a Blueprint to do something. Most of the Blueprints you will create follow this structure: Event (when) | Conditionals (if) | Actions (do). This can be worded as follows: when something happens, check whether X, Y, and Z are true, and if so, do this sequence of actions. A real-world example of this might be a Blueprint that determines whether or not I have fired a gun. The flow is like this: WHEN the trigger is pulled, IF there is ammo left in the gun, DO fire the gun. The three event nodes that are present in our graph by default are three of the most commonly used event triggers. Event Begin Play triggers actions when the player first begins playing the game. Event Actor Begin Overlap triggers actions when another actor begins touching or overlapping the space containing the existing actor controlled by the Blueprint. Event Tick triggers attached actions every time a new frame of visual content is displayed on the screen during gameplay. The number of frames that are shown on the screen within a second will vary depending on the power of the computer running the game, and this will in turn affect how often Event Tick triggers the actions. We want to trigger a "change material" action on our target every time a projectile hits it. While we could do this by utilizing the Event Actor Begin Overlap node to detect when a projectile object was overlapping with the cylinder mesh of our target, we will simplify things by detecting only when another actor has hit our target actor. Let's start with a clean slate, by clicking and dragging a selection box over all the default events and hitting the Delete key on the keyboard to delete them. Detecting a hit To create our hit detection event, right-click on empty graph space and type hit in the search box. The Event Hit node is what we are looking for, so select it when it appears in the search results. Event Hit triggers actions every time another actor hits the actor controlled by this Blueprint. 
Once you have the Event Hit node on the graph, you will notice that Event Hit has a number of multicolored output pins originating from it. The first thing to notice is the white triangle pin that is in the top-right corner of the node. This is the execution pin, which determines the next action to be taken in a sequence. Linking the execution pins of different nodes together enables the basic functionality of all Blueprints. Now that we have the trigger, we need to find an action that will enable us to change the material of an actor. Click and drag a wire from the execution pin to the empty space to the right of the node. Dropping a wire into empty space like this will generate a search window, allowing you to create a node and attach it to the pin you are dragging from in a single operation. In the search window that appears, make sure that the Context Sensitive box is checked. This will limit the results in the search window to only those nodes that can actually be attached to the pin you dragged to generate the search window. With Context Sensitive checked, type set material in the search box. The node we want to select is called Set Material (StaticMeshComponent). If you cannot find the node you are searching for in the context-sensitive search, try unchecking Context Sensitive to find it from the complete list of node options. Even if the node is not found in the context-sensitive search, there is still a possibility that the node can be used in conjunction with the node you are attempting to attach it to. The actions in the Event Hit node can be set like this: Swapping a material Once you have placed the Set Material node, you will notice that it is already connected via its input execution pin to the Event Hit node's output execution pin. This Blueprint will now fire the Set Material action whenever the Blueprint's actor hits another actor. However, we haven't yet set the material that will be applied when the Set Material action fires. Without setting the material, the action will fire but not produce any observable effect on the cylinder target. To set this material, click on the drop-down field labeled Select Asset underneath Material inside the Set Material node. In the asset finder window that appears, type red in the search box to find the TargetRed material we created earlier. Clicking on this asset will attach it to the Material field inside the Set Material node. We have now done everything we need with this Blueprint to turn the target cylinder red, but before the Blueprint can be saved, it must be compiled. Compiling is the process used to convert the developer-friendly Blueprint language into machine instructions that tell the computer what operations to perform. This is a hands-off process, so we don't need to concern ourselves with it, except to ensure that we always compile our Blueprint scripts after we assemble them. To do so, hit the Compile button in the top-left corner of the editor toolbar, and then click on Save. Now that we have set up a basic gameplay interaction, it is wise to test the game to ensure that everything is happening the way we expect. Click on the Play button, and a game window will appear directly above the Blueprint Editor. Try both shooting and running into the CylinderTarget actor you created. Improving the Blueprint When we run the game, we see that the cylinder target does change color upon being hit by a projectile fired from the player's gun. 
This is the beginning of a framework of gameplay that can be used to get enemies to respond to the player's actions. However, you also might have noticed that the target cylinder changes color even when the player runs into it directly. Remember that we wanted the cylinder target to become red only when hit by a player projectile, and not because of any other object colliding with it. Unforeseen results like this are common whenever scripting is involved, and the best way to avoid them is to check your work by playing the game as you construct it as often as possible. To fix our Blueprint so that the cylinder target changes color only in response to a player projectile, return to the CylinderTarget_Blueprint tab and look at the Event Hit node again. The remaining output pins on the Event Hit node are variables that store data about the event that can be passed to other nodes. The color of a pin represents the kind of data it passes. Blue pins pass objects, such as actors, whereas red pins contain a boolean (true or false) variable. You will learn more about these pin types as we get into more complicated Blueprints; for now, we only need to concern ourselves with the blue output pin labeled Other, which identifies the other actor that performed the hit that fired this event. This will be useful for ensuring that the cylinder target changes color only when hit by a projectile fired from the player, rather than changing color because of any other actors that might bump into it. To ensure that we are only triggering in response to a player projectile, click and drag a wire from the Other output pin to empty space, and in the search window that appears, type projectile. You should see some results that look similar to the following screenshot. The node we are looking for is called Cast To FirstPersonProjectile: FirstPersonProjectile is a Blueprint included in Unreal Engine 4's First Person template that controls the behavior of the projectiles that are fired from your gun. This node uses casting to ensure that the action attached to the execution pin of this node occurs only if the actor hitting the cylinder target matches the object referenced by the casting node. When the node appears, you should already see a blue wire between the Other output pin of the Event Hit node and the Object pin of the casting node. If not, you can generate it manually by clicking and dragging from one pin to the other. You should also remove the connection between the Event Hit and Set Material node execution pins so that the casting node can be linked between them. Removing a wire between two pins can be done by holding down the Alt key and clicking on a pin. Once you have linked the three nodes, the event graph should look like what is shown in the following screenshot: Now compile, save, and click on the play button again to test. This time, you should notice that the cylinder target retains its default color when you walk up and touch it, but does turn red when fired upon by your gun. Summary The skills you have learned in this article will serve as a strong foundation for building more complex interactive behavior. You may wish to spend some additional time modifying your prototype to include a more appealing layout, or feature faster-moving targets. 
One of the greatest benefits of Blueprint's visual scripting is the speed at which you can test new ideas, and each additional skill that you learn will unlock exponentially more possibilities for game experiences that you can explore and prototype. Resources for Article: Further resources on this subject: Creating a Brick Breaking Game [article] Configuration and Handy Tweaks for UDK [article] Unreal Development Toolkit: Level Design HQ [article]
Eloquent… without Laravel!
In this article by Francesco Malatesta, author of the book Learning Laravel's Eloquent, we will learn everything about Eloquent, starting from the very basics and going through models, relationships, and other topics. You have probably started to like it and are thinking about implementing it in your next project. In fact, creating an application without a single SQL query is tempting. Maybe you also showed it to your boss and convinced him/her to use it in your next production project. However, there is a little problem. Yeah, the next project isn't so new. It already exists, and, despite everything, it doesn't use Laravel! You start to shiver. This is so sad because you spent the last week studying this new ORM, a really cool one, and were ready to move forward. There is always a solution! You are a developer! Also, the solution is not so hard to find. If you want, you can use Eloquent without Laravel. Actually, Laravel is not a monolithic framework. It is made up of several separate parts, which are combined to build something greater. However, nothing prevents you from using only selected packages in another application. (For more resources related to this topic, see here.) So, what are we going to see in this article? First of all, we will explore the structure of the database package and see what is inside it. Then, you will learn how to install the illuminate/database package separately for your project and how to configure it for the first use. Then, you will encounter some examples. First of all, we will look at the Eloquent ORM. You will learn how to define models and use them. Having done this, as a little extra, I will show you how to use the Query Builder (remember that the "illuminate/database" package isn't just Eloquent). Maybe you would also enjoy the Schema Builder class. I will cover it, don't worry! We will cover the following: Exploring the directory structure Installing and configuring the database package Using the ORM Using the Query and Schema Builders Summary Exploring the directory structure As I mentioned before, the key step in order to use Eloquent in your application without Laravel is to use the "illuminate/database" package. So, before we install it, let's examine it a little. You can see the package contents here: https://github.com/illuminate/database. So, this is what you will probably see: Folder Description Capsule The capsule manager is a fundamental component. It instantiates the service container and loads some dependencies. Connectors The database package can communicate with various DB systems. For instance, SQLite, MySQL, or PostgreSQL. Every type of database has its own connector. This is the folder in which you will find them. Console The database package isn't just Eloquent with a bunch of connectors. In this specific folder, you will find everything related to console commands, such as artisan db:seed or artisan migrate. Eloquent Every single Eloquent class is placed here. Migrations Don't confuse this with the Console folder. Every class related to migrations is stored here. When you type artisan migrate in your terminal, you are calling a class that is placed here. Query The Query Builder is placed here. Schema Everything related to the Schema Builder is placed here. In the main folder, you will also find some other files. However, don't worry as you don't need to know what they are. 
If you open the composer.json file, take a look at the following "require" section: "require": { "php": ">=5.4.0", "illuminate/container": "5.1.*", "illuminate/contracts": "5.1.*", "illuminate/support": "5.1.*", "nesbot/carbon": "~1.0" }, As you can see, the database package has some prerequisites that you can't avoid. However, the container is quite small, and it is the same for contracts (just some interfaces) and "illuminate/support". Eloquent uses Carbon (https://github.com/briannesbitt/Carbon) to deal with dates in a smarter way. So, if you are seeing this for the first time and you are confused, don't worry! Everything is all right. Now that you know what you can find in this package, let's see how to install it and configure it for the first time. Installing and configuring the database package Let's start with the setup. First of all, we will install the package using composer as usual. After that, we will configure the capsule manager in order to get started. Installing the package Installing the "illuminate/database" package is really easy. All you have to do is add "illuminate/database" to the "require" section of your composer.json file, like this: "require": {     "illuminate/database": "5.0.*",   }, Then type composer update into your terminal, and wait for a few seconds. Another way is to include it with the shortcut in your project folder, obviously from the terminal: composer require illuminate/database Whichever method you chose, you have just installed the package. Configuring the package Time to use the capsule manager! In your project, you will use something like this to get started: use Illuminate\Database\Capsule\Manager as Capsule;   $capsule = new Capsule;   $capsule->addConnection([ 'driver'   => 'mysql', 'host'     => 'localhost', 'database' => 'database', 'username' => 'root', 'password' => 'password', 'charset'   => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix'   => '', ]);     // Set the event dispatcher used by Eloquent models... (optional) use Illuminate\Events\Dispatcher; use Illuminate\Container\Container; $capsule->setEventDispatcher(new Dispatcher(new Container)); The config syntax I used is exactly the same as you can find in the config/database.php configuration file. The only difference is that this time you are explicitly using an instance of the capsule manager in order to do everything. In the second part of the code, I am setting up the event dispatcher. You must do this if events are required by your project. However, events are not included by default in this package, so you will have to manually add the "illuminate/events" dependency to your composer.json file. Now, the final step! Add this code to your setup file: // Make this Capsule instance available globally via static methods... (optional) $capsule->setAsGlobal();   // Setup the Eloquent ORM... (optional; unless you've used setEventDispatcher()) $capsule->bootEloquent(); With setAsGlobal() called on the capsule manager, you can set it as a global component in order to use it with static methods. You may like it or not; the choice is yours. The final line starts up Eloquent, so you will need it. However, this is also an optional instruction. In some situations you may need the Query Builder only. Then there is nothing else to do! Your application is now configured with the database package (and Eloquent)! Using the ORM Using the Eloquent ORM in a non-Laravel application is not a big change. All you have to do is declare your model as you are used to doing. 
Then, you need to call it and use it as you are used to. Here is a perfect example of what I am talking about: use Illuminate\Database\Eloquent\Model;   class Book extends Model {   ...   // some attributes here… protected $table = 'my_books_table';   // some scopes here... public function scopeNewest() {    // query here... }   ...   } The package you are using is exactly the same as with Laravel, so no worries! If you want to use the model you just created, then use the following: $books = Book::newest()->take(5)->get(); This also applies to relationships, observers, and so on. Everything is the same. To use the database package and ORM exactly as you would in Laravel, remember to set up the project structure in a way that follows the PSR-4 autoloading convention. Using the Query and Schema Builder It's not just about the ORM; with the database package, you can also use the Query and the Schema Builders. Let's discover how! The Query Builder The Query Builder is also very easy to use. The only difference, this time, is that you are passing through the capsule manager object, like this: $books = Capsule::table('books')              ->where('title', '=', "Michael Strogoff")              ->first(); However, the result is still the same. Also, if you like the DB facade in Laravel, you can use the capsule manager class in the same way: $book = Capsule::select('select title, pages_count from books where id = ?', array(12)); The Schema Builder Without Laravel, you don't have migrations. However, you can still use the Schema Builder, like this: Capsule::schema()->create('books', function($table){     $table->increments('id');     $table->string('title', 30);     $table->integer('pages_count');    $table->decimal('price', 5, 2);    $table->text('description');     $table->timestamps(); }); Previously, you used to call the create() method of the Schema facade. This time it is a little different: you will use the create() method, chained to the schema() method of the Capsule class. Obviously, you can use any Schema class method in this way. For instance, you could call something like the following: Capsule::schema()->table('books', function($table){    $table->string('title', 50)->change();    $table->decimal('special_price', 5, 2); }); And you are good to go! Remember that if you want to unlock some Schema Builder-specific features you will need to install other dependencies. For example, do you want to rename a column? You will need the doctrine/dbal dependency package. Summary I decided to add this article because many people ask me how to use Eloquent without Laravel, mostly because they like the framework, but they can't migrate an already started project in its entirety. Also, I think that it's cool to know, in a certain sense, what you can find under the hood. It is always just about curiosity. Curiosity opens new paths, and you have a choice to solve a problem in a new and more elegant way. In these few pages, I just scratched the surface. I want to give you some advice: explore the code. The best way to write good code is to read good code. Resources for Article: Further resources on this subject: Your First Application [article] Building a To-do List with Ajax [article] Laravel 4 - Creating a Simple CRUD Application in Hours [article]
Elasticsearch – Spicing Up a Search Using Geo
A geo point refers to the latitude and longitude of a point on Earth. Each location on it has its own unique latitude and longitude. Elasticsearch is aware of geo-based points and allows you to perform various operations on top of them. In many contexts, it's also required to consider a geo location component to obtain various functionalities. For example, say you need to search for all the nearby restaurants that serve Chinese food, or to find the nearest cab that is free. In another situation, you might need to find which state a particular geo point belongs to, in order to understand where you are currently standing. This article by Vineeth Mohan, author of the book Elasticsearch Blueprints, is modeled such that all the examples mentioned are related to real-life restaurant-search scenarios, for better understanding. Here, we take the example of sorting restaurants based on geographical preferences. A number of cases, ranging from the simple, such as finding the nearest restaurant, to the more complex, such as categorization of restaurants based on distance, are covered in this article. What makes Elasticsearch unique and powerful is the fact that you can combine a geo operation with any other normal search query to yield results clubbed with both the location data and the query data. (For more resources related to this topic, see here.) Restaurant search Let's consider creating a search portal for restaurants. The following are its requirements: To find the nearest restaurant with Chinese cuisine, which has the word ChingYang in its name. To decrease the importance of all restaurants outside city limits. To find the distance between the restaurant and current point for each of the preceding restaurant matches. To find whether the person is in a particular city's limit or not. To aggregate all restaurants within a distance of 10 km. That is, for a radius of the first 10 km, we have to compute the number of restaurants. For the next 10 km, we need to compute the number of restaurants and so on. Data modeling for restaurants Firstly, we need to look at the aspects of the data and model it as a JSON document for Elasticsearch to make sense of it. A restaurant has a name, location information, and a rating. To store the location information, Elasticsearch has a provision to understand the latitude and longitude information and has features to conduct searches based on it. Hence, it would be best to use this feature. Let's see how we can do this. First, let's see what our document should look like: { "name" : "Tamarind restaurant", "location" : {      "lat" : 1.10,      "lon" : 1.54 } } Now, let's define the schema for the same: curl -X PUT "http://$hostname:9200/restaurants" -d '{    "index": {        "number_of_shards": 1,        "number_of_replicas": 1  },    "analysis":{            "analyzer":{                    "flat" : {                "type" : "custom",                "tokenizer" : "keyword",                "filter" : "lowercase"            }        }    } }'   echo curl -X PUT "http://$hostname:9200/restaurants/restaurant/_mapping" -d '{    "restaurant" : {    "properties" : {        "name" : { "type" : "string" },        "location" : { "type" : "geo_point", "accuracy" : "1km" }    }}   }' Let's now index some documents in this index. An example of this would be the Tamarind restaurant data shown in the previous section. 
We can index the data as follows: curl -XPOST 'http://localhost:9200/restaurants/restaurant' -d '{    "name": "Tamarind restaurant",    "location": {        "lat": 1.1,        "lon": 1.54    } }' Likewise, we can index any number of documents. For the sake of convenience, we have indexed only a total of five restaurants for this article. The latitude and longitude should be of this format. Elasticsearch also accepts two other formats (geohash and lat_lon), but let's stick to this one. As we have mapped the field location to the type geo_point, Elasticsearch is aware of what this information means and how to act upon it. The nearest hotel problem Let's assume that we are at a particular point where the latitude is 1.234 and the longitude is 2.132. We need to find the nearest restaurants to this location. For this purpose, the function_score query is the best option. We can use the decay (Gauss) functionality of the function score query to achieve this: curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{ "query": {    "function_score": {      "functions": [        {          "gauss": {            "location": {              "scale": "1km",               "origin": [                1.231,                1.012              ]            }          }        }      ]    } } }' Here, we tell Elasticsearch to give a higher score to the restaurants that are near the reference point we gave it. The closer a restaurant is, the higher its importance. Maximum distance covered Now, let's move on to another example of finding restaurants that are within 10 km of my current position. Those that are beyond 10 km are of no interest to me. So, this amounts to a circle with a radius of 10 km around my current position, as shown in the following map: Our best bet here is using a geo distance filter. It can be used as follows: curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{ "query": {    "filtered": {      "filter": {        "geo_distance": {          "distance": "10km",          "location": {            "lat": 1.232,            "lon": 1.112          }        }      }    } } }' Inside city limits Next, I need to consider only those restaurants that are inside a particular city's limits; the rest are of no interest to me. As the city shown in the following map is rectangular in nature, this makes my job easier: Now, to see whether a geo point is inside a rectangle, we can use the bounding box filter. A rectangle is marked when you feed it the top-left point and the bottom-right point. Let's assume that the city is within the following rectangle with the top-left point as X and Y and the bottom-right point as A and B: curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{ "query": {    "filtered": {      "query": {        "match_all": {}      },      "filter": {        "geo_bounding_box": {          "location": {            "top_left": {              "lat": 2,              "lon": 0            },            "bottom_right": {              "lat": 0,              "lon": 2            }          }        }      }    } } }' Distance values between the current point and each restaurant Now, consider the scenario where you need to find the distance between the user location and each restaurant. How can we achieve this requirement? We can use scripts; the current geo coordinates are passed to the script, and then the query to find the distance to each restaurant is run, as in the following code. 
Here, the current location is given as (1, 2): curl -XPOST 'http://localhost:9200/restaurants/_search?pretty' -d '{ "script_fields": {    "distance": {      "script": "doc['"'"'location'"'"'].arcDistanceInKm(1, 2)"    } }, "fields": [    "name" ], "query": {    "match": {      "name": "chinese"    } } }' We have used the function called arcDistanceInKm in the preceding query, which accepts the geo coordinates and then returns the distance between that point and the locations satisfied by the query. Note that the unit of distance calculated is in kilometers (km). You might have noticed a long list of quotes and double quotes before and after location in the script mentioned previously. This is the standard format, and if we don't use it, a format error will be returned while processing. The distances are calculated from the current point to the filtered hotels and are returned in the distance field of the response, as shown in the following code: { "took" : 3, "timed_out" : false, "_shards" : {    "total" : 1,    "successful" : 1,    "failed" : 0 }, "hits" : {    "total" : 2,    "max_score" : 0.7554128,    "hits" : [ {      "_index" : "restaurants",      "_type" : "restaurant",      "_id" : "AU08uZX6QQuJvMORdWRK",      "_score" : 0.7554128,      "fields" : {        "distance" : [ 112.92927483176413 ],        "name" : [ "Great chinese restaurant" ]      }    }, {      "_index" : "restaurants",      "_type" : "restaurant",      "_id" : "AU08uZaZQQuJvMORdWRM",      "_score" : 0.7554128,      "fields" : {        "distance" : [ 137.61635969665923 ],        "name" : [ "Great chinese restaurant" ]      }    } ] } } Note that the distances measured from the current point to the hotels are direct distances and not road distances. Restaurant out of city limits One of my friends called me and asked me to join him on his journey to the next city. As we were leaving the city, he was particular that he wanted to eat at some restaurant outside our city limits, but before the next city. For this, the requirement was translated to any restaurant that is a minimum of 15 km and a maximum of 100 km from the center of the city. Hence, we have something like a donut in which we have to conduct our search, as shown in the following map: The area inside the donut is a match, but the area outside is not. For this donut area calculation, we have the geo_distance_range filter to our rescue. Here, we can apply the minimum distance and maximum distance in the fields from and to to populate the results, as shown in the following code: curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{ "query": {    "filtered": {      "query": {        "match_all": {}      },      "filter": {        "geo_distance_range": {          "from": "15km",          "to": "100km",          "location": {            "lat": 1.232,            "lon": 1.112          }        }      }    } } }' Restaurant categorization based on distance In an e-commerce solution, to search restaurants, it's required that you increase the searchable characteristics of the application. This means that if we are able to give a snapshot of results other than the top-10 results, it would add to the searchable characteristics of the search. For example, if we are able to show how many restaurants serve Indian, Thai, or other cuisines, it would actually help the user to get a better idea of the result set. 
In a similar manner, if we can tell them if the restaurant is near, at a medium distance, or far away, we can really strike a chord in the restaurant search user experience, as shown in the following map: Implementing this is not hard, as we have something called the distance range aggregation. In this aggregation type, we can handcraft the ranges of distance we are interested in and create a bucket for each of them. We can also define the key name we need, as shown in the following code: curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{ "aggs": {    "distanceRanges": {      "geo_distance": {        "field": "location",        "origin": "1.231, 1.012",        "unit": "meters",        "ranges": [          {            "key": "Near by Locations",            "to": 200          },          {            "key": "Medium distance Locations",            "from": 200,            "to": 2000          },          {            "key": "Far Away Locations",            "from": 2000          }        ]      }    } } }' In the preceding code, we categorized the restaurants under three distance ranges, which are the nearby hotels (less than 200 meters), medium distant hotels (between 200 and 2,000 meters), and the far away ones (greater than 2,000 meters). This logic was translated into the Elasticsearch query, through which we received the results as follows: { "took": 44, "timed_out": false, "_shards": {    "total": 1,    "successful": 1,    "failed": 0 }, "hits": {    "total": 5,    "max_score": 0,    "hits": [         ] }, "aggregations": {    "distanceRanges": {      "buckets": [        {          "key": "Near by Locations",          "from": 0,          "to": 200,          "doc_count": 1        },        {          "key": "Medium distance Locations",          "from": 200,          "to": 2000,        "doc_count": 0        },        {          "key": "Far Away Locations",          "from": 2000,          "doc_count": 4        }      ]    } } } In the results, we received the number of restaurants in each distance range, indicated by the doc_count field. Aggregating restaurants based on their nearness In the previous example, we saw the aggregation of restaurants based on their distance from the current point into three different categories. Now, we can consider another scenario in which we classify the restaurants on the basis of the geohash grids that they belong to. This kind of classification can be advantageous if the user would like to get a geographical picture of how the restaurants are distributed. Here is the code for a geohash-based aggregation of restaurants: curl -XPOST 'http://localhost:9200/restaurants/_search?pretty' -d '{ "size": 0, "aggs": {    "DifferentGrids": {      "geohash_grid": {        "field": "location",        "precision": 6      },      "aggs": {        "restaurants": {          "top_hits": {}        }      }    } } }' You can see from the preceding code that we used the geohash aggregation, which is named DifferentGrids, and the precision here is set to 6. The precision field value can be varied within the range of 1 to 12, with 1 being the lowest and 12 the highest precision. Also, we used another aggregation named restaurants inside the DifferentGrids aggregation. The restaurant aggregation uses the top_hits query to fetch the aggregated details from the DifferentGrids aggregation, which would otherwise return only the key and doc_count values. 
So, running the preceding code gives us the following result: {    "took":5,    "timed_out":false,    "_shards":{      "total":1,      "successful":1,      "failed":0    },    "hits":{      "total":5,      "max_score":0,      "hits":[        ]    },    "aggregations":{      "DifferentGrids":{          "buckets":[            {                "key":"s009",               "doc_count":2,                "restaurants":{... }            },            {                "key":"s01n",                "doc_count":1,                "restaurants":{... }            },            {                "key":"s00x",                "doc_count":1,                "restaurants":{... }            },            {                "key":"s00p",                "doc_count":1,                "restaurants":{... }            }          ]      }    } } As we can see from the response, there are four buckets with the key values s009, s01n, s00x, and s00p. These key values represent the different geohash grids that the restaurants belong to. From the preceding result, we can evidently say that the s009 grid contains two restaurants inside it and all the other grids contain one each. A pictorial representation of the previous aggregation would be like the one shown on the following map: Summary We found that Elasticsearch can handle geo points and various geo-specific operations. A few geo-specific and geo-point operations that we covered in this article were searching for nearby restaurants (restaurants inside a circle), searching for restaurants within a range (restaurants inside a concentric circle), searching for restaurants inside a city (restaurants inside a rectangle), searching for restaurants inside a polygon, and categorization of restaurants by proximity. Apart from these, we can use Kibana, a flexible and powerful visualization tool provided by Elasticsearch, for geo-based operations. Resources for Article: Further resources on this subject: Elasticsearch Administration [article] Extending ElasticSearch with Scripting [article] Indexing the Data [article]
An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js
In this article, Muzzamil Hussain, the author of the book Mastering JavaScript Promises, introduces us to promises in JavaScript and their implementation in Angular.js. (For more resources related to this topic, see here.) Many of us who work with JavaScript know that working with it means you must be a master of asynchronous coding, but this skill doesn't come easily. You have to understand callbacks, and once you learn them, the realization starts to bother you that managing callbacks is not a very easy task, and that it's really not an effective way of doing asynchronous programming. For those of you who have already been through this experience, promises are not that new; even if you haven't used them in your recent project, you would really want to go for them. For those of you who use neither callbacks nor promises, understanding promises, or seeing the difference between callbacks and promises, would be a hard task. Some of you have used promises in JavaScript while working with popular and mature JavaScript libraries and platforms such as Node.js, jQuery, or WinRT. You are already aware of the advantages of promises and how they help in making your work efficient and your code look beautiful. For all these three classes of professionals, gathering information on promises and their implementation in different libraries is quite a task, and much of the time you spend is on collecting the right information about how you can attach an error handler to a promise, what a deferred object is, and how it can be passed on to different functions. Possession of the right information at the time you need it is the best virtue one could ask for. Keeping all these elements in mind, we have written a book named Mastering JavaScript Promises. This book is all about JavaScript and how promises are implemented in some of the most renowned libraries of the world. This book will provide a foundation for JavaScript, and gradually, it will take you through the fruitful journey of learning promises in JavaScript. The composition of chapters in this book is engineered in such a way that it provides knowledge from the novice level to an advanced level. The book covers a wide range of topics with both theoretical and practical content in place. You will learn about the evolution of JavaScript, the programming models of different kinds, the asynchronous model, and how JavaScript uses it. The book will take you right into the implementation mode with a whole lot of chapters based on the promises implementations of WinRT, Node.js, Angular.js, and jQuery. With easy-to-follow example code and simple language, you will absorb a huge amount of information on this topic. Needless to say, books on such topics are in themselves an evolutionary process, so your suggestions are more than welcome. Here are a few extracts from the book to give you a glimpse of what we have in store for you, but most of this section will focus on Angular.js and how promises are implemented in it. Let's start our journey through this article with programming models. Models Models are basically templates upon which logic is designed and fabricated within a compiler/interpreter of a programming language, so that software engineers can use this logic in writing their software. Every programming language we use is designed on a particular programming model. Since software engineers are asked to solve a particular problem or to automate a particular service, they adopt programming languages as per the need. 
There is no set rule that assigns a particular language to create products. Engineers adopt any language based on the need. The asynchronous programming model Within the asynchronous programming model, tasks are interleaved with one another in a single thread of control. This single thread may have multiple embedded threads and each thread may contain several tasks linked up one after another. This model is simpler in comparison to the threaded case, as the programmers always know the priority of the task executing at a given slot of time in memory. Consider a task in which an OS (or an application within the OS) uses some sort of criterion to decide how much time is to be allotted to a task, before giving the same chance to others. The behavior of the OS taking control from one task and passing it on to another task is called preempting. Promise The beauty of working with JavaScript's asynchronous events is that the program continues its execution even when it doesn't yet have a value it needs, because the work producing that value is still in progress. Such values from unfinished work are as-yet-unknown values. This can make working with asynchronous events in JavaScript challenging. Promises are a programming construct that represents a value that is still unknown. Promises in JavaScript enable us to write asynchronous code in a parallel manner to synchronous code. How to implement promises So far, we have learned the concept of promise, its basic ingredients, and some of the basic functions it has to offer in nearly all of its implementations, but how are these implementations using it? Well, it's quite simple. Every implementation, either in the language or in the form of a library, maps the basic concept of promises. It then maps it to a compiler/interpreter or to code. This allows the written code or functions to behave in the promise paradigm, which ultimately constitutes its implementation. Promises are now part of the standard package for many languages. The obvious thing is that they have implemented them in their own way as per the need. Implementing promises in Angular.js Promise is all about how async behavior can be applied on a certain part of an application or on the whole. There is a list of many other JavaScript libraries where the concept of promises exists, but in Angular.js, it's present in a much more efficient way than in many other client-side libraries. Promises come in two flavors in Angular.js: one is $q and the other is Q. What is the difference between them? We will explore it in detail in the following sections. For now, we will look at what promise means to Angular.js. There are many possible ways to implement promises in Angular.js. The most common one is to use the $q service, which is inspired by Chris Kowal's Q library. Mainly, Angular.js uses this to provide implementations of asynchronous methods. With Angular.js, the sequence of services is top to bottom, starting with $q, which is considered the top class; within it, many helper methods are embedded, for example, $q.reject() or $q.resolve(). Everything that is related to promises in Angular.js must follow the $q service. Starting with the $q.when() method, it seems like it creates a promise immediately; rather, it only normalizes the value supplied to it, which may or may not already be a promise. If the value provided is a promise, $q.when() will do its job; if it's not a promise value, $q.when() will create one. 
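A quick sketch makes this normalization concrete. The following is a minimal, illustrative example (it assumes $q has been injected into a controller or service, and the values are made up):

// 'Sherlock Holmes' is a plain value, so $q.when() wraps it in a promise:
$q.when('Sherlock Holmes').then(function(name) {
    console.log(name); // 'Sherlock Holmes'
});

// deferred.promise is already a promise, so $q.when() simply hands back
// a promise that follows it rather than re-wrapping the value:
var deferred = $q.defer();
$q.when(deferred.promise).then(function(value) {
    console.log(value); // 'resolved later'
});
deferred.resolve('resolved later');

Either way, the caller can treat the return value of $q.when() uniformly as a promise, which is exactly the point of the normalization.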
The schematics of using promises in Angular.js Since Chris Kowal's Q library is the global provider and inspiration for promise-style callback returns, Angular.js also uses it for its promise implementations. Many Angular.js services are promise-oriented in their return type by default. This includes $interval, $http, and $timeout. However, there is a proper mechanism for using promises in Angular.js. Look at the following code and see how a promise maps itself within Angular.js: var promise = AngularjsBackground(); promise.then( function(response) {    // promise process }, function(error) {    // error reporting }, function(progress) {    // send progress    }); All of the mentioned services in Angular.js return a single promise object. They might differ in the parameters they take, but in return, all of them respond with a single promise object with multiple keys. For example, the success callback of $http.get receives four parameters named data, status, headers, and config: $http.get('/api/tv/serials/sherlockHolmes') .success(function(data, status, headers, config) {    $scope.movieContent = data; }); If we employ the promises concept here, the same code will be rewritten as: var promise = $http.get('/api/tv/serials/sherlockHolmes'); promise.then( function(payload) {    $scope.serialContent = payload.data; }); The preceding code is more concise and easier to maintain than the one before it, which makes the usage of Angular.js more adaptable to the engineers using it. Promise as a handle for callback The implementation of promise in Angular.js defines your use of promise as a callback handle. The implementations not only define how to use promise in Angular.js, but also what steps one should take to make services "promise-returning". This states that you do something asynchronously, and once your said job is completed, you have to trigger the then() service to either conclude your task or to pass it on to another then() method: asynchronous_task.then().then().done(). In simpler form, you can do this to achieve the concept of promise as a handle for callbacks: angular.module('TVSerialApp', []) .controller('GetSerialsCtrl',    function($log, $scope, TeleService) {      $scope.getserialListing = function(serial) {        var promise =          TeleService.getserial('SherlockHolmes');        promise.then(          function(payload) {            $scope.listingData = payload.data;          },          function(errorPayload) {            $log.error('failure loading serial', errorPayload);        });      }; }) .factory('TeleService', function($http) {    return {      getserial: function(id) {        return $http.get('/api/tv/serials/sherlockHolmes' + id);      }    } }); Blindly passing arguments and nested promises Whatever promise service you use, you must be very sure of what you are passing and how this can affect the overall working of your promise function. Blindly passing arguments can cause confusion for the controller, as it has to deal with its own results too while handling other requests. Say we are dealing with the $http.get service and you blindly pass too much load to it. Since it has to deal with its own results too in parallel, it might get confused, which may result in callback hell. However, if you want to post-process the result instead, you have to deal with an additional parameter called $http.error. In this way, the controller doesn't have to deal with its own result, and cases such as 404s and redirects will be handled. 
You can also redo the preceding scenario by building your own promise and bringing back the result of your choice, with the payload that you want, with the following code: factory('TVSerialApp', function($http, $log, $q) { return {    getSerial: function(serial) {      var deferred = $q.defer();      $http.get('/api/tv/serials/sherlockHolmes' + serial)        .success(function(data) {          deferred.resolve({            title: data.title,            cost: data.price});        }).error(function(msg, code) {            deferred.reject(msg);            $log.error(msg, code);        });        return deferred.promise;    } } }); By building a custom promise, you have many advantages. You can control inputs and output calls, log the error messages, transform the inputs into desired outputs, and share the status by using the deferred.notify(msg) method. Deferred objects or composed promises Since a custom promise in Angular.js can be hard to handle sometimes and can malfunction in the worst case, promise provides another way to implement itself. It asks you to transform your response within a then method and return a transformed result to the calling method in an autonomous way. Considering the same code we used in the previous section: this.getSerial = function(serial) {    return $http.get('/api/tv/serials/sherlockHolmes'+ serial)        .then(                function (response) {                    return {                        title: response.data.title,                        cost: response.data.price                      };                  }); }; The output we yield from the preceding method will be chained, promised, and transformed. You can again reuse the output for another output, chain it to another promise, or simply display the result. The controller can then be transformed into the following lines of code: $scope.getSerial = function(serial) { service.getSerial(serial) .then(function(serialData) {    $scope.serialData = serialData; }); }; This has significantly reduced the lines of code. Also, this helps us in maintaining the service level, since the automatic failsafe mechanism in then() will transform errors into a failed promise and keep the rest of the code intact. Dealing with the nested calls While using internal return values in the success function, you may notice that the promise code is missing one most obvious thing: the error handler. The missing error handler can cause your code to stand still or get into a catastrophe from which it might not recover. If you want to overcome this, simply throw the errors. How? See the following code: this.getserial = function(serial) {    return $http.get('/api/tv/serials/sherlockHolmes' + serial)        .then(            function (response) {                return {                    title: response.data.title,                    cost: response.data.price               };            },            function (httpError) {                // translate the error                throw httpError.status + " : " +                    httpError.data;            }); }; Now, whenever the code enters into an error-like situation, it will return a single string, not a bunch of $http statuses or config details. This can also save your entire code from going into a standstill mode and help you in debugging. Also, if you attach log services, you can pinpoint the location that causes the error. 
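On the consuming side, the thrown string travels down the chain as a rejection, so the controller can pick it up in one place. Here is a minimal sketch of what that might look like (assuming the service above has been injected as service):

$scope.getserial = function(serial) {
    service.getserial(serial)
        .then(function(serialData) {
            $scope.serialData = serialData;
        })
        .catch(function(message) {
            // message is the translated string, for example "404 : not found"
            $scope.errorMessage = message;
        });
};

Because catch(fn) is just shorthand for then(null, fn), this keeps the happy path and the error path visually separate without any nesting.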
Concurrency in Angular.js We all want to achieve maximum output in a single slot of time by invoking multiple services and getting results from them. Angular.js provides this functionality via its $q.all service; you can invoke many services at a time, and if you want to join all/any of them, you just need then() to get them together in the sequence you want. Let's get the payload array first: [ { url: 'myUr1.html' }, { url: 'myUr2.html' }, { url: 'myUr3.html' } ] And now this array will be used by the following code: service('asyncService', function($http, $q) {      return {        getDataFrmUrls: function(urls) {          var deferred = $q.defer();          var collectCalls = [];          angular.forEach(urls, function(url) {            collectCalls.push($http.get(url.url));          });            $q.all(collectCalls)          .then(            function(results) {            deferred.resolve(              JSON.stringify(results))          },          function(errors) {          deferred.reject(errors);          },          function(updates) {            deferred.notify(updates);          });          return deferred.promise;        }      }; }); A promise is created by executing $http.get for each URL and is added to an array. The $q.all function takes the input of an array of promises, which it will then process into a single promise containing an array with each answer. This will get converted to JSON and passed on to the caller function. The result might be like this: [ promiseOneResultPayload, promiseTwoResultPayload, promiseThreeResultPayload ] The combination of success and error $http returns a promise; you can define success or error callbacks on this promise. Many think that these functions are a standard part of promise—but in reality, they are not as they seem to be. Using promise means you are calling then(). It takes two parameters—a callback function for success and a callback function for failure. Imagine this code: $http.get("/api/tv/serials/sherlockHolmes") .success(function(name) {    console.log("The tele serial name is : " + name); }) .error(function(response, status) {    console.log("Request failed " + response + " status code: " +     status); }); This can be rewritten as: $http.get("/api/tv/serials/sherlockHolmes") .then(function(response) {    console.log("The tele serial name is :" + response.data); }, function(result) {    console.log("Request failed : " + result); }); One can use either the success or error function depending on the choice of a situation, but there is a benefit in using $http—it's convenient. The error function provides response and status, and the success function provides the response data. This is not considered a standard part of a promise. 
Anyone can add their own versions of these functions to promises, as shown in the following code: //my own created promise of success function   promise.success = function(fn) {    promise.then(function(res) {        fn(res.data, res.status, res.headers, res.config);    });    return promise; };   //my own created promise of error function   promise.error = function(fn) {      promise.then(null, function(res) {        fn(res.data, res.status, res.headers, res.config);    });    return promise; }; The safe approach So the real matter of discussion is: what to use with $http, success or error? Keep in mind that there is no standard way of writing promise; we have to look at many possibilities. If you change your code so that your promise is not returned from $http—say, when you load data from a cache—your code will break if you expect success or error to be there. So, the best way is to use then whenever possible. This will not only generalize the overall approach of writing promise, but also reduce the guesswork in your code. Route your promise Angular.js has a great feature for routing your promises. This feature is helpful when you are dealing with more than one promise at a time. Here is how you can achieve routing through the following code: $routeProvider .when('/api/', {      templateUrl: 'index.php',      controller: 'IndexController' }) .when('/video/', {      templateUrl: 'movies.php',      controller: 'moviesController' }) As you can observe, we have two routes: the api route takes us to the index page, with IndexController, and the video route takes us to the movies page. app.controller('moviesController', function($scope, MovieService) {    $scope.name = null;      MovieService.getName().then(function(name) {        $scope.name = name;    }); }); There is a problem: until the MovieService class gets the name from the backend, the name is null. This means that if our view binds to the name, first it's empty, then it's set. This is where the router comes in. The router's resolve feature solves the problem of the name being null. Here's how we can do it: var getName = function(MovieService) {        return MovieService.getName();    };   $routeProvider .when('/api/', {      templateUrl: 'index.php',      controller: 'IndexController' }) .when('/video/', {      templateUrl: 'movies.php',      controller: 'moviesController',      resolve: {          name: getName      } }) After adding the resolve, we can revisit our code for a controller, which now receives the resolved name directly: app.controller('MovieController', function($scope, name) {      $scope.name = name;   }); You can also define multiple resolves for the route of your promises to get the best possible output: $routeProvider .when('/video', {      templateUrl: '/MovieService.php',      controller: 'MovieServiceController',      // adding multiple resolves here      resolve: {          name: getName,          MovieService: getMovieService,          anythingElse: getSomeThing      } }) An introduction to WinRT Our first lookout for the technology is WinRT. What is WinRT? It is the short form for Windows Runtime. This is a platform provided by Microsoft to build applications for the Windows 8+ operating system. It supports application development in C++/CX, C# (C sharp), VB.NET, TypeScript, and JavaScript. Microsoft adopted JavaScript as one of its prime and first-class tools to develop cross-browser apps and for development on other related devices. 
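To give a flavor of what promise-style JavaScript looks like on that platform, here is a minimal, illustrative sketch using the WinJS library, whose asynchronous helpers, such as WinJS.xhr, return promise objects (the URL here is made up):

// WinJS.xhr wraps an XMLHttpRequest in a promise.
WinJS.xhr({ url: "http://example.com/api/serials" }).then(
    function completed(request) {
        // request is the underlying XMLHttpRequest object
        console.log(request.responseText);
    },
    function error(request) {
        console.log("Request failed");
    }
);

The shape is the same then(success, error) contract we have been using with $q, which is what makes moving between these libraries straightforward.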
We are now fully aware of the pros and cons of using JavaScript, and this is what has brought us here to implement promises.

Summary

This article is meant to give you an understanding of what the book holds for you; focusing on Angular.js here doesn't mean that only one technology is covered in the entire book for implementing promises. It is just to give you an idea of how the flow of information goes from a simple to an advanced level, and how easy it is to keep following the context of the chapters. The book also covers Node.js, jQuery, and WinRT, so that readers from different experience levels can read, understand, learn quickly, and become experts in promises.

Resources for Article:

Further resources on this subject:

Optimizing JavaScript for iOS Hybrid Apps [article]
Installing jQuery [article]
Cordova Plugins [article]

Role Management

Packt
23 Jul 2015
15 min read
In this article by Gavin Henrick and Karen Holland, authors of the book Moodle Administration Essentials, we look at roles, which play a key part in the working of a Moodle site. Roles restrict users' access to only the data they should have access to, and determine whether or not they are able to alter it or add to it. In each course, every user will have been assigned a role when they are enrolled, such as teacher, student, or a customized role. In this article, we deal with the essential areas of role management that every administrator may have to deal with:

Cloning a role
Creating a new role
Creating a course requester role
Overriding a permission in a role in a course
Testing a role
Manually adding a role to a user in a course
Enabling self-enrolment for a course

(For more resources related to this topic, see here.)

Understanding terminology

There are some key terms used to describe users' abilities in Moodle and how they are defined, which are as follows:

Role: A role is a set or collection of permissions on different capabilities. There are default roles, such as teacher and student, which have predefined sets of permissions.
Capability: A capability is a specific behavior in Moodle, such as Start new discussions (mod/forum:startdiscussion), which can have a permission set within a role, such as Allow or Not set/Inherit.
Permission: A permission is associated with a capability. There are four possible values: allow, prevent, prohibit, or not set.
Not set: This means that there is not a specific setting for this user role, and Moodle will determine whether it is allowed, if set in a higher context.
Allow: The permission is explicitly granted for the capability.
Prevent: The permission is removed for the capability, even if allowed in a higher context. However, it can be overridden at a specific context.
Prohibit: The permission is completely denied and cannot be overridden at any lower context.

By default, the only configuration option displayed is Allow. To show the full list of options in the role edit page, click on the Show advanced button, just above the Filter option, as shown in the following image:

Context: A context is an area of Moodle, such as the whole system, a category, a course, an activity, a block, or a user. A role will have permission for a capability in a specific context. An example of this is a student being able to start a discussion in a specific forum. This is set up by setting the permission to Allow for the capability Start new discussions for the Student role on that specific forum.

Standard roles

There are a number of different roles configured in Moodle; by default, these are:

Site administrator: The site administrator can do everything on the site, including creating the site structure, courses, activities, and resources, and managing user accounts.
Manager: The manager can access courses and modify them. They usually do not participate in teaching courses.
Course creator: The course creator can create courses when assigned rights in a category.
Teacher: The teacher can do anything within a course, including adding and removing resources and activities, communicating with students, and grading them.
Non-editing teacher: The non-editing teacher can teach and communicate in courses and grade students, but cannot alter or add activities, nor change the course layout or settings.
Student: The student can access and participate in courses, but cannot create or edit resources or activities within a course.
Guest: The guest can view courses if allowed, but cannot participate.
Guests have minimal privileges, and usually cannot enter text anywhere.
Authenticated user: The role that all logged-in users get.
Authenticated user on the front page role: A logged-in user role for the front page only.

Managing role permissions

Let's learn how to manage permissions for existing roles in Moodle.

Cloning a role

It is possible to duplicate an existing role in Moodle. The main reason for doing this is so that you can have a variation of an existing role, such as a teacher, but with reduced capabilities. For instance, to stop a teacher from being able to add or remove students in a course, you can create a course editing role, which is a clone of the standard editingteacher role with the enrolment aspects removed. This is typically done when students are added to courses centrally with a student management system. To duplicate a role, in this case editing teacher:

Log in with an administrator-level user account.
In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
Click on the Add a new role button.
Select an existing role from the Use role or archetype dropdown.
Click on Continue.
Enter the short role name in the Short name field. This must be unique.
Enter the full role name in the Custom full name field. This is what appears on the user interface in Moodle.
Enter an explanation for the role in the Description field. This should explain why the role was created, and what changes from the default were planned.
Scroll to the bottom of the page.
Click on Create this role.

This will create a duplicate version of the teacher role with all the same permissions and capabilities.

Creating a new role

It is also possible to create a new role. The main reason for doing this would be to have a specific role that does one specific task and nothing else, such as a user that can manage users only. This is the alternative to cloning one of the existing roles and then disabling everything except the one set of capabilities required. To create a new role:

Log in with an administrator-level user account.
In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
Click on the Add a new role button.
Select No role from the Use role or archetype dropdown.
Click on Continue.
Enter the short role name in the Short name field. This must be unique.
Enter the full role name in the Custom full name field. This is what appears on the user interface in Moodle.
Enter an explanation for the role in the Description field. This should explain why the role was created.
Select the appropriate Role archetype, in this case, None. The role archetype determines the permissions when a role is reset to default, and any new permissions for the role when the site is upgraded.
Select the Context types where this role may be assigned.
Set the permissions as required by searching for the appropriate Capability and clicking on Allow.
Scroll to the bottom of the page.
Click on Create this role.

This will create the new role with the settings as defined. If you want the new role to appear in the course listing, you must enable it by navigating to Administration block | Site administration | Appearance | Courses | Course Contacts.

Creating a course requester role

There is a core Moodle feature that enables users to request a course to be created. This is not normally used, especially as students and most teachers just have responsibility within their own course context.
So, it can be useful to create a role just with this ability, so that a faculty or department administrator can request a new course space when needed, without giving the ability to all users. There are a few steps in this process:

Remove the capability from other roles.
Set up the new role.
Assign the role to a user at the correct context.

Firstly, we remove the capability from other roles by altering the authenticated user role, as shown:

In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
Click on edit for the Authenticated user role.
Enter request into the Filter field.
Select the Not set radio button under moodle/course:request to change the Allow permission.
Scroll to the bottom of the page.
Click on Save changes.

Next, we create the new role with the specific capability set to Allow:

In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
Click on the Add a new role button.
Select No role from the Use role or archetype dropdown.
Click on Continue.
Enter courserequester in the Short name field.
Enter Course Requester in the Custom full name field. This is what appears on the user interface in Moodle.
Enter the explanation for the role in the Description field.
Select system under Context types where this role may be assigned.
Change moodle/course:request to Allow.
Scroll to the bottom of the page.
Click on Create this role.

Lastly, you assign the role to a user at the system level. This is different from giving a role to a user in a course.

In the Administration block, navigate to Site administration | Users | Permissions | Assign system roles.
Click on Course Requester.
Search for the specific user in the Potential user list.
Select the user from the list, using the Search filter if required.
Click on the Add button.

Any roles you assign from this page will apply to the assigned users throughout the entire system, including the front page and all the courses.

Applying a role override for a specific context

You can change how a specific role behaves in a certain context by enabling an override, thereby granting or removing the permission in that context. An example of this is that, in general, students cannot rate a forum post in a forum in their course. When ratings are enabled, only the manager, teacher, and non-editing teacher roles have permission to rate the posts. So, to enable the students to rate posts, you need to change the permissions for the student role on that specific forum.

Browse to the forum where you want to allow students to rate forum posts. This process assumes that rating has already been enabled in the forum.
From the Forum page, go to the Administration block, then to Forum administration, and click on the link to Permissions.
Scroll down the page to locate the permission Rate posts. This is the mod/forum:rate capability. By default, you should not see the student role listed to the right of the permission.
Click on the plus sign (+) that appears below the roles already listed for the Rate posts permission.
Select Student from the Select role menu, and click on the Allow button.
Student should now appear in the list next to the Rate posts permission.

Participants will now be able to rate each other's posts in this forum. Making the change in this forum does not impact other forums.

Testing a role

It is possible to use the Switch role to... feature to see how another role behaves in the different contexts.
However, the best way to test a role is to create a new user account, and then assign that user the role in the correct context, as follows:

Create the new user by navigating to Site administration | Users | Accounts | Add a new user.
Assign your new user the role in the correct context, such as in system roles or in a course, as required.
Log in with this user in a different browser to check what they can do and see.

Having two different roles logged in at the same time, each using a different browser, means that you can test the new role in one browser while still logged in as the administrator in your main browser. This saves a lot of time, especially when building courses.

Manually adding a user to a course

Depending on what your role is in a course, you can add other users to the course by manually enrolling them. In this example, we are logged in as the administrator, who can add a number of roles, including:

Manager
Teacher
Non-Editing Teacher
Student

To enrol a user in your course:

Go to the Course administration menu in the Administration block.
Expand the User settings.
Click on the Enrolled users link. This brings up the enrolled users page that lists all enrolled users—this can be filtered by role, and by default shows the enrolled participants only.
Click on the Enrol users button.
From the Assign roles dropdown, select which role you want to assign to the user. This is limited to the roles which you can assign.
Search for the user that you want to add to the course.
Click on the Enrol button to enrol the user with the assigned role.
Click on Finish enrolling users.

The page will now reload with the new enrolments. To see the users that you added with the given role, you may need to change the filter to the specific role type. This is how you manually add someone to a Moodle course.

User upload CSV files allow you to include optional enrolment fields, which will enable you to enrol existing or new users. A sample user upload CSV file could enrol each user as a student in both specified courses, identified by their course shortnames: Teaching with Moodle, and Induction. A minimal example of such a file (using Moodle's standard upload users field names; the user details here are purely illustrative) might look like this:

username,firstname,lastname,email,course1,role1,course2,role2
tjones,Tom,Jones,tjones@example.com,Teaching with Moodle,student,Induction,student

Enabling self-enrolment for a course

In addition to manually adding users to a course, you can configure a course so that students can self-enrol in the course, either with or without an enrolment key or password. There are two dependencies required for this to work:

Firstly, the self-enrolment plugin needs to be enabled at the site level. This is found in the Administration block, by navigating to Site Administration | Plugins | Enrolments | Manage enrol plugins. If it is not enabled, you need to click on the eye icon to enable it. It is enabled by default in Moodle.
Secondly, you need to enable the self-enrolment method in the course itself, and configure it accordingly.

In the course in which you want to enable self-enrolment, the following are the essential steps:

In the Administration block, navigate to Administration | Course administration | Users | Enrolment methods.
Click on the eye icon to turn on the Self enrolment method.
Click on the cogwheel icon to access the configuration for the Self enrolment method.
You can optionally enter a name for the method in the Custom instance name field; however, this is not required. Typically, you would do this if you are enabling multiple self-enrolment options and want to identify them separately.
Enter an enrolment key or password into the Enrolment key field if you want to restrict self-enrolment to those who are issued the password.
Once the user knows the password, they will be able to enrol.
If you are using groups in your course and configure a different enrolment key for each group, it is possible to use the Use group enrolment keys option to automatically place self-enrolling users into the group whose key/password they used when enrolling.
If you want the self-enrolment to enrol users as students, leave the Default assigned role as Student, or change it to whichever role you intend it to operate for. Some organizations will give one password for the students to enrol with and another for the teachers to enrol with, so that the organization does not need to manage the enrolment centrally. So, having two self-enrolment methods set up, one pointing at student and one at teacher, makes this possible.
If you want to control the length of enrolment, you can do this by setting the Enrolment duration option. In this case, you can also issue a warning to the user before it expires by using the Notify before enrolment expires and Notification threshold options.
If you want to restrict enrolment to a specific period, you can set this with Start date and End date.
You can unenrol users who are inactive for a period of time by setting Unenrol inactive after to a specific number of days.
Set Max enrolled users if you want to limit the number of users using this specific password to enrol in the course. This is useful if you are selling a specific number of seats on the course.
This self-enrolment method may be restricted to members of a specified cohort only. You can enable this by selecting a cohort from the dropdown for the Only cohort members setting.
Leave Send course welcome message ticked if you want to send a message to those who self-enrol. This is recommended.
Enter a welcome message in the Custom welcome message field. This will be sent to all users who self-enrol using this method, and can be used to remind them of key information about the course, such as a starting date, or to ask them to do something like complete the icebreaker in the course.
Click on Save changes.

Once enabled and configured, users will be able to self-enrol and will be added with whatever role you selected. This is dependent on the course itself being visible to the users when browsing the site.

Other custom roles

Moodle docs has a list of potential custom roles, with instructions on how to create them, including:

Parent
Demo teacher
Forum moderator
Forum poster role
Calendar editor
Blogger
Quiz user with unlimited time
Question Creator
Question sharer
Course requester role
Feedback template creator
Grading forms publisher
Grading forms manager
Grade view
Gallery owner role

For more information, check https://docs.moodle.org/29/en/Creating_custom_roles.

Summary

In this article, we looked at the core administrator tasks in role management, and the different aspects to consider when deciding which approach to take in either extending or reducing role permissions.

Resources for Article:

Further resources on this subject:

Moodle for Online Communities [article]
Configuring your Moodle Course [article]
Moodle Plugins [article]

Programmable DC Motor Controller with an LCD

Packt
23 Jul 2015
23 min read
In this article by Don Wilcher, author of the book Arduino Electronics Blueprints, we will see how a programmable logic controller (PLC) is used to operate various electronic and electromechanical devices that are wired to I/O wiring modules. The PLC receives signals from sensors, transducers, and electromechanical switches that are wired to its input wiring module and processes the electrical data by using a microcontroller. The embedded software that is stored in the microcontroller's memory can control external devices, such as electromechanical relays, motors (the AC and DC types), solenoids, and visual displays that are wired to its output wiring module. The PLC programmer programs the industrial computer by using a special programming language known as ladder logic. PLC ladder logic is a graphical programming language that uses computer instruction symbols for automation and controls to operate robots, industrial machines, and conveyor systems. The PLC, along with its ladder logic software, is very expensive. However, with off-the-shelf electronic components, an Arduino can be used as an alternative mini industrial controller for Maker-type robotics and machine control projects. In this article, we will see how an Arduino can operate as a mini PLC that is capable of controlling a small electric DC motor with a simple two-step programming procedure. Details regarding how one can interface a transistor DC motor driver and a discrete digital logic circuit to an Arduino, and how to write the control cursor selection code, will be provided as well. This article will also provide the build instructions for a programmable motor controller. The LCD will provide the programming directions that are needed to operate the electric motor. The parts that are required to build the programmable motor controller are shown in the next section.

Parts list

The following list comprises the parts that are required to build the programmable motor controller:

Arduino Uno: one unit
1 kilo ohm resistor (brown, black, red, gold): three units
A 10-ohm resistor (brown, black, black, gold): one unit
A 10-kilo ohm resistor (brown, black, orange, gold): one unit
A 100-ohm resistor (brown, black, brown, gold): one unit
A 0.01 µF capacitor: one unit
An LCD module: one unit
A 74LS08 Quad AND Logic Gate Integrated Circuit: one unit
A 1N4001 general purpose silicon diode: one unit
A DC electric motor (3 V rated): one unit
Single-pole double-throw (SPDT) electric switches: two units
1.5 V batteries: two units
3 V battery holder: one unit
A breadboard
Wires

A programmable motor controller block diagram

The block diagram of the programmable DC motor controller with a Liquid Crystal Display (LCD) can be imagined as a remote control box with two slide switches and an LCD, as shown in the following diagram:

The Remote Control Box provides the control signals to operate a DC motor. This box is not able to provide the right amount of electrical current to directly operate the DC motor. Therefore, a transistor motor driver circuit is needed. This circuit has sufficient current gain hfe to operate a small DC motor. A typical hfe value of 100 is sufficient for the operation of a DC motor. The Enable slide switch is used to set the remote control box to the ready mode. The Program switch allows the DC motor to be set to an ON or OFF operating condition by using a simple selection sequence. The LCD displays the ON or OFF selection prompts that help you operate the DC motor. The remote control box diagram is shown in the next image.
The idea behind the concept diagram is to illustrate how a simple programmable motor controller can be built by using basic electrical and electronic components. The Arduino is placed inside the remote control box and wired to the Enable/Program switches and the LCD. External wires are attached to the transistor motor driver, the DC motor, and the Arduino. The block diagram of the programmable motor controller is an engineering tool that is used to convey a complete product design by using simple graphics. The block diagram also makes it easy to plan the breadboard in order to prototype and test the programmable motor controller in a maker workshop or on a laboratory bench. A final observation regarding the block diagram of the programmable motor controller is that it follows the basic computer convention of inputs on the left, the processor in the middle, and the outputs on the right-hand side of the design layout. As shown, the SPDT switches are on the left-hand side, the Arduino is located in the middle, and the transistor motor driver with the DC Motor is on the right-hand side of the block diagram. The LCD is shown towards the right of the block diagram because it is an output device. The LCD allows visual selection between the ON/OFF operations of the DC motor by using the Program switch. This left-to-right design method makes it easy to build the programmable motor controller, as well as to troubleshoot errors during the testing phase of the project. The block diagram for the programmable motor controller is as follows:

Building the programmable motor controller

The block diagram of the programmable motor controller has more circuits than the block diagram of the sound effects machine. As discussed previously, there are a variety of ways to build (prototype) electronic devices. For instance, they can be built on a Printed Circuit Board (PCB) or an experimenter's/prototype board. The construction base that was used to build this device was a solderless breadboard, which is shown in the next image. The placement of the electronic parts, as shown in the image, is not restricted to the solderless breadboard layout. Rather, it should be used as a guideline. Another method of placing the parts onto the solderless breadboard is to use the block diagram that was shown earlier. Arranging the parts as illustrated in the block diagram makes it easy to test each subcircuit separately. For example, the Program/Enable SPDT switches' subcircuits can be tested by using a DC voltmeter. Placing a DC voltmeter across the Program switch and the 1 kilo ohm resistor and toggling the switch several times will show a voltage swing between 0 V and +5 V. The same testing method can be carried out on the Enable switch as well. The transistor motor driver circuit is tested by placing a +5 V signal on the base of the 2N3904 NPN transistor. When you apply +5 V to the transistor's base, the DC motor turns on. The final test for the programmable DC motor controller is to adjust the contrast control (10 kilo ohm potentiometer) to see whether the individual pixels are visible on the LCD. This electrical testing method, used to check that the programmable DC motor controller is functioning properly, will minimize electronic I/O wiring errors. Also, the electrical testing phase ensures that all the I/O circuits of the electronics used in the circuit are working properly, thereby allowing the maker to focus on coding the software.
Following is the wiring diagram of the programmable DC motor controller with the LCD, using a solderless breadboard:

As shown in the wiring diagram, the electrical components that are used to build the programmable DC motor controller with the LCD circuit are placed on the solderless breadboard for ease of wiring the Arduino, the LCD, and the DC motor. The transistor shown in the preceding image is a 2N3904 NPN device with a pin-out arrangement consisting of an emitter, a base, and a collector respectively. If the transistor pins are wired incorrectly, the DC motor will not turn on. The LCD module is used as a visual display, which allows operating selection of the DC motor. The Program slide switch turns the DC motor ON or OFF. Although most of the 16-pin LCD modules have the same electrical pin-out names, consult the manufacturer's datasheet of the device at hand. There is also a 10 kilo ohm potentiometer to control the LCD's contrast. After wiring the LCD to the Arduino, supply power to the microcontroller board by using the USB cable connected to a desktop PC or a notebook. Adjust the 10 kilo ohm potentiometer until a row of square pixels is visible on the LCD. The Program slide switch is used to switch between the ON and OFF operating modes of the DC motor, which are shown on the LCD. The 74LS08 Quad AND gate is a 14-pin Integrated Circuit (IC) that is used to enable the DC motor, that is, to get the electronic controller ready to operate the DC motor. Therefore, the Program slide switch must be in the ON position for the electronic controller to operate properly. The 1N4001 diode is used to protect the 2N3904 NPN transistor from the peak currents generated by the DC motor's windings when the motor is switched. When the DC motor is turned off, the 1N4001 diode directs the peak current to flow through the DC motor's windings, thereby suppressing the transient electrical noise and preventing damage to the transistor. Therefore, it's important to include this electronic component in the design, as shown in the wiring diagram, to prevent electrical damage to the transistor. Besides the wiring diagram, the circuit's schematic diagram will aid in building the programmable motor controller device.

Let's build it!

In order to build the programmable DC motor controller, follow these steps:

Wire the programmable DC motor controller with the LCD circuit on a solderless breadboard, as shown in the previous image, as well as in the circuit's schematic diagram shown in the next image.
Upload the software of the programmable motor controller to the Arduino by using the sketch shown next.
Close both the Program and Enable switches. The motor will spin.
When you open the Enable switch, the motor stops. The LCD message tells you how to set the Program switch for ON and OFF motor control.

The Program switch allows you to select between the ON and OFF motor control functions. With the Program switch closed, toggling the Enable switch will turn the motor ON and OFF. Opening the Program switch will prevent the motor from turning on. The next few sections will explain additional details of the I/O interfacing of discrete digital logic circuits and a small DC motor that can be connected to the Arduino. A sketch is a unit of code that is uploaded to and run on an Arduino board.

/*
* programmable DC motor controller w/LCD allows the user to select ON and OFF operations using a slide switch. To
* enable the selected operation, another slide switch is used to initiate the selected choice.
* Program Switch wired to pin 6.
* Output select wired to pin 7.
* LCD used to display programming choices (ON or OFF).
* created 24 Dec 2012
* by Don Wilcher
*/

// include the library code:
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

// constants won't change. They're used here to
// set pin numbers:
const int ProgramPin = 6; // pin number for PROGRAM input control signal
const int OUTPin = 7;     // pin number for OUTPUT control signal

// variable will change:
int ProgramStatus = 0;    // variable for reading Program input status

void setup() {
// initialize the following pin as an output:
pinMode(OUTPin, OUTPUT);

// initialize the following pin as an input:
pinMode(ProgramPin, INPUT);

// set up the LCD's number of rows and columns:
lcd.begin(16, 2);

// set cursor for messages and print Program select messages on the LCD.
lcd.setCursor(0,0);
lcd.print( "1. ON");
lcd.setCursor(0, 1);
lcd.print ( "2. OFF");

}

void loop(){
// read the status of the Program Switch value:
ProgramStatus = digitalRead(ProgramPin);

// check if Program select choice is 1.ON.
if(ProgramStatus == HIGH) {
   digitalWrite(OUTPin, HIGH);
}
else{
   digitalWrite(OUTPin,LOW);
}
}

The schematic diagram of the circuit that is used to build the programmable DC motor controller and upload the sketch to the Arduino is shown in the following image:

Interfacing a discrete digital logic circuit with Arduino

The Enable switch, along with the Arduino, is wired to a discrete digital Integrated Circuit (IC) that is used to turn on the transistor motor driver. The discrete digital IC used to turn on the transistor motor driver is a 74LS08 Quad AND gate. The AND gate provides a high output signal when both of its inputs are equal to +5 V. The Arduino provides a high input signal to the 74LS08 AND gate IC based on the following line of code:

   digitalWrite(OUTPin, HIGH);

The OUTPin constant name is declared in the Arduino sketch by using the following declaration statement:

const int OUTPin = 7;     // pin number for OUTPUT control signal

The Enable switch is also used to provide a +5 V input signal to the 74LS08 AND gate IC. The Enable switch circuit schematic diagram is as follows:

Both of the inputs must have the value of logic 1 (+5 V) to make an AND logic gate produce a binary 1 output. The truth table of an AND logic gate shows all the input combinations along with the resultant outputs. A truth table is a graphical analysis tool that is used to test digital logic gates and circuits. By setting the inputs of a digital logic gate to binary 1 (5 V) or binary 0 (0 V), the truth table shows the binary output value, 1 or 0, of the logic gate. The truth table of an AND logic gate is given as follows:

A   B   |   C
0   0   |   0
0   1   |   0
1   0   |   0
1   1   |   1

Another tool that is used to demonstrate the operation of digital logic gates is the Boolean Logic expression. The Boolean Logic expression for an AND logic gate is as follows:

C = A · B

A Boolean Logic expression is an algebraic equation that defines the operation of a logic gate. As shown for the AND gate, the circuit's output, denoted by C, is equal to the product of the A and B inputs.
Another way of observing the operation of the AND gate, based on its Boolean Logic expression, is to set both of the circuit's inputs to 1; the output then has the same binary bit value. The truth table graphically shows the results of the Boolean Logic expression of the AND gate. A common application of the AND logic gate is the Enable circuit. The output of the Enable circuit will only be turned on when both of the inputs are on. When the Enable circuit is wired correctly on the solderless breadboard and is working properly, the transistor driver circuit will turn on the DC motor that is wired to it. The operation of the programmable DC motor controller's Enable circuit is shown in the following truth table:

Program   Enable   |   DC motor
0         0        |   OFF
0         1        |   OFF
1         0        |   OFF
1         1        |   ON

The basic computer circuit that makes the decision to operate the DC motor is the AND logic gate. The previous schematic diagram of the Enable Switch circuit shows the electrical wiring to the specific pins of the 74LS08 IC, but internally, the AND logic gate is the main circuit component for the programmable DC motor controller's Enable function. Following is the diagram of the 74LS08 AND Logic Gate IC (for the gate used here, the inputs are on pins 1 and 2, and the output is on pin 3):

To test the Enable circuit function of the programmable DC motor controller, the Program switch is required. The schematic diagram of the circuit that is required to wire the Program Switch to the Arduino is shown in the following diagram. The Program and Enable switch circuits are identical to each other because two +5 V input signals are required for the AND logic gate to work properly. The Arduino sketch that was used to test the Enable function of the programmable DC motor controller is shown next:

The program for the discrete digital logic circuit with an Arduino is as follows:

// constants won't change. They're used here to
// set pin numbers:
const int ProgramPin = 6; // pin number for PROGRAM input control signal
const int OUTPin = 7;     // pin number for OUTPUT control signal

// variable will change:
int ProgramStatus = 0;    // variable for reading Program input status

void setup() {
// initialize the following pin as an output:
pinMode(OUTPin, OUTPUT);

// initialize the following pin as an input:
pinMode(ProgramPin, INPUT);
}

void loop(){

// read the status of the Program Switch value:
ProgramStatus = digitalRead(ProgramPin);

// check if Program switch is ON.
if(ProgramStatus == HIGH) {
   digitalWrite(OUTPin, HIGH);
}
else{
   digitalWrite(OUTPin,LOW);
}
}

Connect a DC voltmeter's positive test lead to the D7 pin of the Arduino. Upload the preceding sketch to the Arduino and close the Program and Enable switches. The DC voltmeter should read approximately +5 V. Opening the Enable switch will display 0 V on the DC voltmeter. The other input conditions of the Enable circuit can be tested by using the truth table of the AND gate that was shown earlier. Although the DC motor is not wired directly to the Arduino, by using the circuit schematic diagram shown previously, the truth table will ensure that the programmed Enable function is working properly. Next, connect the DC voltmeter to pin 3 of the 74LS08 IC and repeat the truth table test. Pin 3 of the 74LS08 IC will only be ON when both the Program and Enable switches are closed. If the AND logic gate IC pin generates wrong data on the DC voltmeter when compared to the truth table, recheck the circuit wiring carefully and correct any mistakes in the electrical connections.
When the corrections are made, repeat the truth table test to confirm proper operation of the Enable circuit.

Interfacing a small DC motor with a digital logic gate

The 74LS08 AND Logic Gate IC provides an electrical interface between reading the Enable switch trigger and the Arduino's digital output pin, pin D7. With both of the input pins (1 and 2) of the 74LS08 AND logic gate set to binary 1, the small 14-pin IC's output pin 3 will be High. Although the logic gate IC's output pin has a +5 V source present, it will not be able to turn on a small DC motor; the 74LS08 logic gate's sourcing current is not able to directly operate one. To solve this problem, a transistor is used to operate the small DC motor. The transistor has sufficient current gain hfe to operate the DC motor. The DC motor will be turned on when the transistor is biased properly. Biasing is a transistor circuit technique where providing an input voltage that is greater than the base-emitter junction voltage (VBE) turns on the semiconductor device. A typical value for VBE is 700 mV. Once the transistor is biased properly, any electrical device that is wired between the collector and +VCC (the collector supply voltage) will be turned on. An electrical current will flow from +VCC, through the DC motor's windings and the transistor's collector-emitter junction, to ground. The circuit that is used to operate a small DC motor is called a transistor motor driver, and is shown in the following diagram:

The Arduino code that is responsible for the operation of the transistor motor driver circuit is as follows:

void loop(){

// read the status of the Program Switch value:
ProgramStatus = digitalRead(ProgramPin);

// check if Program switch is ON.
if(ProgramStatus == HIGH) {
   digitalWrite(OUTPin, HIGH);
}
else{
   digitalWrite(OUTPin,LOW);
}
}

Although the transistor motor driver circuit is not directly wired to the Arduino, the output pin of the microcontroller prototyping platform indirectly controls the electromechanical part by using the 74LS08 AND logic gate IC. A tip to keep in mind when using transistors is to ensure that the semiconductor device can handle the current requirements of the DC motor that is wired to it. If the DC motor requires more than 500 mA of current, consider using a power Metal Oxide Semiconductor Field Effect Transistor (MOSFET) instead. A power MOSFET device, such as the IRF520 or IRF521 (both N-Channel), can handle up to 1 A of current quite easily, and generates very little heat. The low heat dissipation of the power MOSFET (PMOSFET) makes it better suited than a general-purpose transistor for operating high-current motors. A simple PMOSFET DC motor driver circuit can easily be built with a handful of components and tested on a solderless breadboard, as shown in the following image. The circuit schematic for the solderless breadboard diagram is shown after the breadboard image as well. Sliding the Single Pole-Double Throw (SPDT) switch to one position biases the PMOSFET and turns on the DC motor. Sliding the switch in the opposite direction turns off the PMOSFET and the DC motor.

Once this circuit has been tested on the solderless breadboard, replace the 2N3904 transistor in the programmable DC motor controller project with the power-efficient PMOSFET component mentioned earlier.
As an additional reference, the schematic diagram of the transistor motor driver circuit is as follows:

A sketch of the LCD selection cursor

The LCD provides a simple user interface for the operation of the DC motor that is wired to the Arduino-based programmable DC motor controller. The LCD provides the two basic motor operations of ON and OFF. Although the LCD shows the two DC motor operation options, the display doesn't provide any visual indication of selection when using the Program switch. An enhancement to the LCD interface is to show which DC motor operation has been selected by adding a selection symbol. The LCD selection feature provides a visual indicator of the DC motor operation that was selected with the Program switch. This selection feature can be easily implemented for the programmable DC motor controller's LCD by adding a > symbol to the Arduino sketch. After uploading the original sketch from the Let's build it section of this article, the LCD will display the two DC motor operation options, as shown in the following image:

The enhancement concept sketch of the new LCD selection feature is as follows:

The selection symbol points to the DC motor operation that corresponds to the Program switch position. (For reference, see the schematic diagram of the programmable DC motor controller circuit.)

The partial programmable DC motor controller sketch without the LCD selection feature

Comparing the original LCD DC motor operation selection with the new sketch, the differences with regard to the programming features are as follows:

void loop(){

// read the status of the Program Switch value:
ProgramStatus = digitalRead(ProgramPin);

// check if Program switch is ON.
if(ProgramStatus == HIGH) {
   digitalWrite(OUTPin, HIGH);
}
else{
   digitalWrite(OUTPin,LOW);
}
}

The partial programmable DC motor controller sketch with the LCD selection feature

This code will provide a selection cursor on the LCD to choose the programmable DC motor controller's operating mode:

// set cursor for messages and print Program select messages on the LCD.
lcd.setCursor(0,0);
lcd.print( ">1.Closed(ON)");
lcd.setCursor(0, 1);
lcd.print ( ">2.Open(OFF)");

void loop(){

// read the status of the Program Switch value:
ProgramStatus = digitalRead(ProgramPin);

// check if Program select choice is 1.ON.
if(ProgramStatus == HIGH) {
   digitalWrite(OUTPin, HIGH);
     lcd.setCursor(0,0);
     lcd.print( ">1.Closed(ON)");
     lcd.setCursor(0,1);
     lcd.print ( " 2.Open(OFF) ");
}
else{
     digitalWrite(OUTPin,LOW);
     lcd.setCursor(0,1);
     lcd.print ( ">2.Open(OFF)");
     lcd.setCursor(0,0);
     lcd.print( " 1.Closed(ON) ");
}
}

The most obvious difference between the two partial Arduino sketches is that the LCD selection feature has several more lines of code than the original. As the slide position of the Program switch changes, the LCD's selection symbol instantly moves to the correct operating mode. Although the DC motor can be observed directly, the LCD confirms the operating mode of the electromechanical device. The complete LCD selection sketch is shown in the following section. As a design-related challenge, try displaying an actual arrow for the DC motor operating mode on the LCD. As illustrated in the sketch, an arrow can be built by using the keyboard symbols or the American Standard Code for Information Interchange (ASCII) code.
/*
* programmable DC motor controller w/LCD allows the user to select ON and OFF operations using a slide switch. To
* enable the selected operation another slide switch is used to initiate the selected choice.
* Program Switch wired to pin 6.
* Output select wired to pin 7.
* LCD used to display programming choices (ON or OFF) with selection arrow.
* created 28 Dec 2012
* by Don Wilcher
*/

// include the library code:
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

// constants won't change. They're used here to
// set pin numbers:
const int ProgramPin = 6; // pin number for PROGRAM input control signal
const int OUTPin = 7;     // pin number for OUTPUT control signal

// variable will change:
int ProgramStatus = 0;    // variable for reading Program input status

void setup() {
// initialize the following pin as an output:
pinMode(OUTPin, OUTPUT);

// initialize the following pin as an input:
pinMode(ProgramPin, INPUT);

// set up the LCD's number of rows and columns:
lcd.begin(16, 2);

// set cursor for messages and print Program select messages on the LCD.
lcd.setCursor(0,0);
lcd.print( ">1.Closed(ON)");
lcd.setCursor(0, 1);
lcd.print ( ">2.Open(OFF)");

}

void loop(){

// read the status of the Program Switch value:
ProgramStatus = digitalRead(ProgramPin);

// check if Program select choice is 1.ON.
if(ProgramStatus == HIGH) {
   digitalWrite(OUTPin, HIGH);
     lcd.setCursor(0,0);
     lcd.print( ">1.Closed(ON)");
     lcd.setCursor(0,1);
     lcd.print ( " 2.Open(OFF) ");
}
else{
     digitalWrite(OUTPin,LOW);
     lcd.setCursor(0,1);
     lcd.print ( ">2.Open(OFF)");
     lcd.setCursor(0,0);
     lcd.print( " 1.Closed(ON) ");
}
}

Congratulations on building your programmable motor controller device!

Summary

In this article, a programmable motor controller was built using an Arduino, an AND gate, and a transistor motor driver. The fundamentals of digital electronics, including the concepts of Boolean logic expressions and truth tables, were explained in the article. The AND gate is not able to control a small DC motor directly because of the high amount of current that is needed to operate the motor properly. A PMOSFET (IRF521) is able to operate a small DC motor because of its high current sourcing capability. The circuit that is used to wire a transistor to a small DC motor is called a transistor DC motor driver. The DC motor can be turned on or off by using the LCD cursor selection feature of the programmable DC motor controller.

Resources for Article:

Further resources on this subject:

Arduino Development [article]
Prototyping Arduino Projects using Python [article]
The Arduino Mobile Robot [article]

Persisting Data to Local Storage in Ionic

Troy Miles
22 Jul 2015
5 min read
Users expect certain things to simply work in every mobile app. And if they don't work as expected, users delete your app and, even worse, will probably give it a bad rating. Settings are one of those things users expect to simply work. Whenever a user makes a change on your app's settings page, they expect the rest of the app to pick up that change, and they expect those changes to be correctly persisted, so that whenever they use the app again, the changes they made will be remembered. There is nothing too difficult about getting persistence to work correctly in an Ionic app, but there are a few road bumps that this post can help you avoid. As an example, we will use the Ionic side menu starter template and add a settings page to it (beta 14 of Ionic was used for this post). There is nothing special about the settings page; in fact, settings can be persisted from anywhere in the application. The settings page just gives us a nice place to play with things. The first part of our settings strategy is that we will keep all of our individual settings in the Settings object. This is a personal preference of mine. My app always serializes and deserializes all of the individual properties of the Settings object. Anytime I want something persisted, I add it to the Settings object and the system takes care of the rest. Next, we use an Angular value object to hold the settings. A value is one of Angular's providers, like factories, services, providers, and constants. And unlike its cousin, the constant, values can be changed. So a value gives us a nice place to store our settings object, and the values we put in it serve as the default values.

angular.module('starter')
.value("Settings", {
searchRadius: {value: "5"},
acceptsPushNotification: true,
hasUserSeenMessage: false,
pagesPerLoad: 20
});

At the base of our settings strategy is HTML5 local storage. Local storage gives web apps a nice place to store information in string-based key value pairs. If you're wondering how we get types besides strings into storage, wonder no more. Part of the magic, and the reason why it is nice to keep everything in a single object, is that we are going to convert that single object to and from a string using JSON. Inside the file "localstorage-service.js", there are only two methods in the service's API. The first is serializeSettings and the second is deserializeSettings. Both do exactly what their names imply. There is also an internal-only method in the Local Storage service, checkLocalStorage. It is only used for diagnostic purposes, since all it does is write whether or not the device has local storage to the console log. The final thing that the Local Storage service does is call deserializeSettings once during its startup. This gives the Settings object the last values stored. If there are no saved values, it keeps the defaults from the Settings value object. One other bit of weirdness which should be explained is why we copy properties using angular.extend instead of simply copying the whole object. If we ever replaced the entire Angular value object, everything already holding a reference to it would keep pointing at the old object with the default values, and our changes would be lost. We could write our own function to copy the properties, but Angular includes extend, which copies the properties exactly the way we need them.
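Only deserializeSettings is reproduced below. As a hedged sketch (the storage key name "settings" and the exact wiring of the service are assumptions for illustration, not taken verbatim from the accompanying repo), the serialize side of the service might look like this:

// Minimal sketch of the serialize side. The localStorage key is kept
// in a variable named settings, matching its use in deserializeSettings.
var settings = "settings"; // assumed key name

function serializeSettings() {
    // the whole Settings object becomes a single JSON string, since
    // HTML5 local storage can only hold string key/value pairs
    localStorage[settings] = JSON.stringify(Settings);
}

The deserialize side, which reads that same key back, is shown next.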
function deserializeSettings() {
var newSettings, rawSettings = localStorage[settings]; // "settings" holds the localStorage key name
if(rawSettings) {
newSettings = JSON.parse(rawSettings);
if (newSettings) {
// use extend since it copies one property at a time
angular.extend(Settings, newSettings);
console.log("Settings restored");
}
}
}

In the Settings controller, we bind the values from our Settings object to the widgets on the page. It is not an accident that we give the property the same name on the $scope object as on the Settings object. This makes updating the property easier: since we access the objects using JavaScript bracket notation, we can update both the $scope and Settings objects at the same time. We use this in the onChange method, which is called anytime the value of a widget is changed. All of the widgets call this onChange method.

if (!Settings.hasUserSeenMessage) {
Settings.hasUserSeenMessage = true;
LocalStorageService.serializeSettings();
$ionicPopup.alert({
title: 'Hi There',
template: '<div class="text-center">You are a first time user.</div>'
});
}
// set the initial values for the widgets
// (the property names must match the keys in the Settings value object)
$scope.searchRadius = Settings.searchRadius;
$scope.acceptsPushNotification = Settings.acceptsPushNotification;
// when a widget is changed, come here and update the Settings object too
$scope.onChange = function (type, value) {
$scope[type] = value;
Settings[type] = value;
LocalStorageService.serializeSettings();
};

We also demonstrate how to persist values programmatically. The hasUserSeenMessage property is checked in the code. If the user hasn't seen our one-time message, we set the value to true, persist the value to local storage, and then display the message. Anytime you want to persist Settings, simply call LocalStorageService.serializeSettings.

About the author

Troy Miles, aka the Rockncoder, began writing games in assembly language for early computers like the Apple II, Vic20, C64, and the IBM PC over 35 years ago. Currently he fills his days writing web apps for a Southern California based automotive valuation and information company. Nights and weekends he can usually be found writing cool apps for mobile and web or teaching other developers how to do so. He likes to post interesting code nuggets on his blog: http://therockncoder.com and videos on his YouTube channel: https://www.youtube.com/user/rockncoder. He can be reached at rockncoder@gmail.com

The complete code of this tutorial is in my GitHub repo at https://github.com/Rockncoder/settings. Now that you know one way to persist settings, there is no excuse for not giving users a settings page that persists data properly to the device.

Configuring FreeSWITCH for WebRTC

Packt
21 Jul 2015
12 min read
In the article written by Giovanni Maruzzelli, author of FreeSWITCH 1.6 Cookbook, we learn how WebRTC is all about security and encryption. They are not an afterthought; they're intimately interwoven at the design level and are mandatory. For example, you cannot stream audio or video in the clear (without encryption) via WebRTC. (For more resources related to this topic, see here.)

Getting ready

To start with this recipe, you need certificates. These are the same kind of certificates used by web servers for SSL-HTTPS. Yes, you can be your own Certification Authority and self-sign your own certificate. However, this will add considerable hassle; browsers will not recognize the certificate, and you will have to manually instruct them to make a security exception and accept it, or import your own Certification Authority chain into the browser. Also, in some mobile browsers, it is not possible to import self-signed Certification Authorities at all. The bottom line is that you can buy an SSL certificate for less than $10, and in 5 minutes. (No signatures, papers, faxes, telephone calls… nothing is required. Only a confirmation email and a few bucks are enough.) It will save you much frustration, and you'll be able to cleanly showcase your installation to others. The same reasoning applies to DNS Fully Qualified Domain Names (FQDNs)—certificates belong to FQDNs. You can put your DNS names in /etc/hosts, or set up an internal DNS server, but this will not work for mobile clients and desktops outside your control. You can register a domain, point an FQDN to your machine's public IP (it can be a Linode, an AWS VM, or whatever), and buy a certificate using that FQDN as the Common Name (CN). Don't try to set up the WebRTC server on your internal LAN behind the same NAT that your clients are behind (again, it is possible, but painful).

How to do it...

Once you have obtained your certificate (be sure to download the Certification Authority chain too, and keep your private key; you'll need it), you must concatenate those three elements to create the certificates needed for mod_sofia to serve SIP signaling via WSS and media via SRTP/DTLS. With the certificates in the right place, you can now activate SSL in Sofia. Open /usr/local/freeswitch/conf/vars.xml: in the default configuration, both lines that feature SSL are false. Edit them both to change them to true.

How it works...

By default, Sofia will listen on port 7443 for WSS clients. You may want to change this port if you need your clients to traverse very restrictive firewalls. Edit /usr/local/freeswitch/conf/sip_profiles/internal.xml and change the "wss-binding" value to 443. This number, 443, is the HTTPS (SSL) port, and is almost universally open in all firewalls. Also, WSS traffic is indistinguishable from HTTPS/SSL traffic, so your signaling will pass through the most advanced Deep Packet Inspection. Remember that if you use port 443 for WSS, you cannot use that same port for HTTPS, so you will need to deploy your secure web server on another machine.

There's more...

A few other elements of such a configuration are certificates, DNS, and STUN/TURN. Generally speaking, if you set up with real DNS names, you will not need to run your own STUN server; your clients can rely on Google's STUN servers. But if you need a TURN server because some of your clients need a media relay (because they're behind a demented NAT, or UDP is blocked by zealous firewalls), install rfc5766-turn-server on another machine, and have it listen on TCP ports 443 and 80.
You can also install certificates with it and use TURNS over an encrypted connection, which gives you the same firewall-piercing properties as for signaling.

SIP signaling in JavaScript with SIP.js (WebRTC client)

Let's carry out the most basic interaction with a web browser: audio/video through WebRTC. We'll start by using SIP.js, which uses a protocol very familiar to all those who are old hands at VoIP. A web page will display a click-to-call button, and anyone can click for inquiries. That call will be answered by our company's PBX and routed to our employee's extension (1010). Our employee will wait on a browser with the "answer" web page open, and will automatically be connected to the incoming call (if our employee does not answer, the call will go to their voicemail).

Getting ready

To use this example, download version 0.7.0 of the SIP.js JavaScript library from www.sipjs.com. We need an "anonymous" user that we can allow into our system without risks, that is, a user that can do only what we have preplanned. Create an anonymous user for click-to-call in a file named /usr/local/freeswitch/conf/directory/default/anonymous.xml:

<include>
<user id="anonymous">
   <params>
     <param name="password" value="welcome"/>
   </params>
   <variables>
     <variable name="user_context" value="anonymous"/>
     <variable name="effective_caller_id_name" value="Anonymous"/>
     <variable name="effective_caller_id_number" value="666"/>
     <variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
     <variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
   </variables>
</user>
</include>

Then add the user's own dialplan to /usr/local/freeswitch/conf/dialplan/anonymous.xml:

<include>
<context name="anonymous">
   <extension name="public_extensions">
     <condition field="destination_number" expression="^(10[01][0-9])$">
       <action application="transfer" data="$1 XML default"/>
     </condition>
   </extension>
   <extension name="conferences">
     <condition field="destination_number" expression="^(36\d{2})$">
       <action application="answer"/>
       <action application="conference" data="$1-${domain_name}@video-mcu"/>
     </condition>
   </extension>
   <extension name="echo">
     <condition field="destination_number" expression="^9196$">
       <action application="answer"/>
       <action application="echo"/>
     </condition>
   </extension>
</context>
</include>

How to do it...

In a directory served by your HTTPS server (for example, Apache with an SSL certificate), put all the following files.
How to do it...

In a directory served by your HTTPS server (for example, Apache with an SSL certificate), put all the following files.

Minimal click-to-call caller client HTML (call.html):

<html>
<body>
  <button id="startCall">Start Call</button>
  <button id="endCall">End Call</button>
  <br/>
  <video id="remoteVideo"></video>
  <br/>
  <video id="localVideo" muted="muted" width="128" height="96"></video>
  <script src="js/sip-0.7.0.min.js"></script>
  <script src="call.js"></script>
</body>
</html>

JavaScript (call.js):

var session;

var endButton = document.getElementById('endCall');
endButton.addEventListener("click", function () {
  session.bye();
  alert("Call Ended");
}, false);

var startButton = document.getElementById('startCall');
startButton.addEventListener("click", function () {
  session = userAgent.invite('sip:1010@gmaruzz.org', options);
  alert("Call Started");
}, false);

var userAgent = new SIP.UA({
  uri: 'anonymous@gmaruzz.org',
  wsServers: ['wss://self2.gmaruzz.org:7443'],
  authorizationUser: 'anonymous',
  password: 'welcome'
});

var options = {
  media: {
    constraints: {
      audio: true,
      video: true
    },
    render: {
      remote: document.getElementById('remoteVideo'),
      local: document.getElementById('localVideo')
    }
  }
};

Minimal callee HTML (answer.html):

<html>
<body>
  <button id="endCall">End Call</button>
  <br/>
  <video id="remoteVideo"></video>
  <br/>
  <video id="localVideo" muted="muted" width="128" height="96"></video>
  <script src="js/sip-0.7.0.min.js"></script>
  <script src="answer.js"></script>
</body>
</html>

JavaScript (answer.js):

var session;

var endButton = document.getElementById('endCall');
endButton.addEventListener("click", function () {
  session.bye();
  alert("Call Ended");
}, false);

var userAgent = new SIP.UA({
  uri: '1010@gmaruzz.org',
  wsServers: ['wss://self2.gmaruzz.org:7443'],
  authorizationUser: '1010',
  password: 'ciaociao'
});

userAgent.on('invite', function (ciapalo) {
  session = ciapalo;
  session.accept({
    media: {
      constraints: {
        audio: true,
        video: true
      },
      render: {
        remote: document.getElementById('remoteVideo'),
        local: document.getElementById('localVideo')
      }
    }
  });
});
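While testing, it helps to confirm that the page actually registered with FreeSWITCH before you place a call. As a small optional sketch (the 'registered' and 'registrationFailed' event names are taken from the SIP.js 0.7 documentation; double-check them against the guides), you could append something like this to answer.js:

// Debugging aid: log the registration outcome to the browser console so that
// failures (bad password, wrong WSS URL, untrusted certificate) are visible.
userAgent.on('registered', function () {
  console.log('1010 registered with FreeSWITCH, ready to receive calls');
});
userAgent.on('registrationFailed', function () {
  console.log('Registration failed: check password, WSS URL, and certificate');
});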
How it works...

Our employee (the callee, or the person who will answer the call) will sit tight with the answer.html web page open in their browser. Upon page load, the JavaScript will have created the SIP user agent and registered it with our FreeSWITCH server as SIP user "1010", just as if our employee were on their own regular SIP phone.

Our customer (the caller, or the person who initiates the communication) will visit the call.html web page (while loading, this web page registers with FreeSWITCH as the SIP "anonymous" user), and then click on the Start Call button. This click activates the JavaScript that creates the communication session using the invite method of the user agent, passing as an argument the SIP address of our employee. The invite method initiates a call, and our FreeSWITCH server duly invites SIP user 1010, which happens to be the answer.html web page our employee is sitting in front of.

The INVITE sent from FreeSWITCH to answer.html activates the JavaScript in the local user agent, which creates the session and accepts the call. At this moment, the caller and callee are connected, and voice and video begin to flow back and forth. On each page, the received audio and video stream is rendered in the remoteVideo element, while that party's own outgoing stream (the video sent to the peer) shows up in the little localVideo element, which is muted so that it does not generate Larsen-effect feedback whistles.

See also

The Configuring a SIP phone to register with FreeSWITCH recipe in Chapter 2, Connecting Telephones and Service Providers; the SIP.js guides at http://sipjs.com/guides/; and the mod_verto documentation at https://freeswitch.org/confluence/display/FREESWITCH/mod_verto.

Summary

This article features the new disruptive technology that allows secure real-time audio/video/data communication from hundreds of millions of browsers, with FreeSWITCH ready to serve as both a gateway and an application server.