
How-To Tutorials - Mobile


Debugging with OpenGL ES in iOS 5

Packt
25 Jan 2012
14 min read
(For more resources on Debugging with OpenGL ES in iOS 5, see here.)

The Open Graphics Library (OpenGL) can be simply defined as a software interface to the graphics hardware. It is a 3D graphics and modeling library that is highly portable and extremely fast. Using the OpenGL graphics API, you can create some brilliant graphics that are capable of representing 2D and 3D data.

The OpenGL library is a multi-purpose, open-source graphics library that supports applications for 2D and 3D digital content creation, mechanical and architectural design, virtual prototyping, flight simulation, and video games, and it allows application developers to configure a 3D graphics pipeline and submit data to it. An object is defined by connected vertices. The vertices of the object are then transformed, lit, and assembled into primitives, and rasterized to create a 2D image that can be sent directly to the underlying graphics hardware to render the drawing. This is typically very fast, because the hardware is dedicated to processing graphics commands. We have some fantastic stuff to cover in this article, so let's get started.

Understanding the new workflow feature within Xcode

In this section, we will take a look at the improvements that have been made to the Xcode 4 development environment, and how they enable us to debug OpenGL ES applications much more easily than in previous versions of Xcode. We will look at how we can use the frame capture feature of the debugger to capture all frame objects that are included within an OpenGL ES application. This tool enables you to list all the frame objects that are in use by your application at a given point in time. We will familiarize ourselves with the new OpenGL ES debugger within Xcode, which enables us to track down specific OpenGL ES issues within the code.

Creating a simple project to debug an OpenGL ES application

Before we can proceed, we first need to create our OpenGLESExample project:

1. Launch Xcode from the /Developer/Applications folder.
2. Select the OpenGL Game template from the Project template dialog box, then click on the Next button to proceed to the next step in the wizard. This will allow you to enter the Product Name and your Company Identifier.
3. Enter OpenGLESExample for the Product Name, and ensure that you have selected iPhone from the Device Family dropdown box. Click on the Next button to proceed to the final step in the wizard.
4. Choose the folder location where you would like to save your project, then click on the Create button to save your project at the location specified.

Once your project has been created, you will be presented with the Xcode development interface, along with the project files that the template created for you within the Project Navigator window. Now that we have our project created, we need to configure it to enable us to debug the state of its objects.

Detecting OpenGL ES state information and objects

To enable us to detect and monitor the state of the objects within our application, we need to enable this feature through the Edit Scheme… section of our project, as shown in the following screenshot:

From the Edit Scheme section, as shown in the following screenshot, select the Run OpenGLESExampleDebug action, click on the Options tab, and then select the OpenGL ES Enable frame capture checkbox. For this feature to work, you must run the application on an iOS device, and the device must be running iOS 5.0 or later.
This feature will not work within the iOS simulator. After you have attached your device, you will need to restart Xcode for this option to become available. When you have configured your project correctly, click on the OK button to accept the changes, and close the dialog box.

Next, build and run your OpenGL ES application. When you run your application, you will see two three-dimensional, colored cubes. When you run your application on the iOS device, you will notice that the frame capture button appears within the Xcode 4 debug bar, as shown in the following screenshot:

When using the OpenGL ES features of Xcode 4.2, these debugging features enable you to do the following:

- Inspect OpenGL ES state information.
- Introspect OpenGL ES objects such as view textures and shaders.
- Step through draw calls and watch changes with each call.
- Step through the state calls that precede each draw call to see exactly how the image is constructed.

The following screenshot displays the captured frame of our sample application. The debug navigator contains a list of every draw call and state call associated with that particular frame. The buffers associated with the frame are shown within the editor pane, and the state information is shown in the debug window pane.

When the OpenGL ES frame capture is launched, the default view is the Auto view. This view displays the color portion, which is Renderbuffer #1, as well as its grayscale equivalent, Renderbuffer #2. You can also toggle the visibility of each of the red, green, and blue channels, as well as the alpha channel, and then use the Range scroll to adjust the color range. This can be done easily by selecting each of the cog buttons, shown in the previous screenshot.

You also have the ability to step through each of the draw calls in the debug navigator, or by using the double arrows and slider in the debug bar. When using the draw call arrows or sliders, you can have Xcode select the stepped-to draw call in the debug navigator. This can be achieved by Control + clicking below the captured frame, and choosing Reveal in Debug Navigator from the shortcut menu.

You can also use the shortcut menu to toggle between the standard view of the drawn image and the wireframe view of the object, by selecting the Show Wireframe option from the pop-up menu, as shown in the previous screenshot. The wireframe view highlights the element that is being drawn by the selected draw call. To turn off the wireframe feature and return the image to its normal state, select the Hide Wireframe option from the pop-up menu, as shown in the following screenshot:

Now that you have a reasonable understanding of debugging an OpenGL ES application and its draw calls, let's take a look at how we can view the textures associated with an OpenGL ES application.

View textures

A texture in OpenGL ES 2.0 is basically an image that can be sampled by the graphics pipeline and is used to map a colored image onto a surface. To view objects that have been captured by the frame capture button, follow these simple steps:

1. Open the Assistant Editor to see the objects associated with the captured frame. In this view, you can choose to see all of the objects, only bound objects, or the stack.
This can be accessed from the View | Assistant Editor | Show Assistant Editor menu, as shown in the following screenshot:

2. Open a secondary assistant editor pane, so that you can see both the objects and the stack frame at the same time. This can be accessed from the View | Assistant Editor | Add Assistant Editor menu shown previously, or by clicking on the + symbol, as shown in the following screenshot:

3. To see details about any object contained within the OpenGL ES assistant editor, double-click on the object, or choose the item from the pop-up list, as shown in the next screenshot.

It is worth mentioning that, from within this view, you have the ability to change the orientation of any object that has been captured and rendered to the view. To change the orientation, locate the Orientation options shown at the bottom-right of the screen. Objects can be changed to appear in one or more views as needed, as follows:

- Rotate clockwise
- Rotate counter-clockwise
- Flip orientation vertically
- Flip orientation horizontally

For example, if you want to see information about the vertex array object (VAO), you would double-click on it to see it in more detail, as shown in the following screenshot. This displays all the X, Y, and Z axes required to construct each of our objects. Next, we will take a look at how shaders are constructed.

Shaders

There are two types of shaders that you can write for OpenGL ES: vertex shaders and fragment shaders. These two shaders make up what is known as the programmable portion of the OpenGL ES 2.0 pipeline, and they are written in a C-like language called the OpenGL ES Shading Language (GLSL). The following screenshot outlines the OpenGL ES 2.0 programmable pipeline, which combines a version of the OpenGL Shading Language for programming vertex and fragment shaders, adapted for embedded platforms such as iOS devices:

Shaders are not new; they have been used in a variety of games that use OpenGL, such as Doom 3 and Quake 4, as well as in several flight simulators, such as Microsoft's Flight Simulator X. One thing to note about shaders is that they are not compiled when your application is built. The source code of the shader is stored within your application bundle as a text file, or defined within your code as a string literal, that is:

vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];

Before you can use your shaders, your application has to load and compile each of them. This is done to preserve device independence. For example, if Apple decided to change to a different GPU manufacturer for a future release of its iPhone, the compiled shaders might not work on the new GPU. Having your application defer compilation to runtime avoids this problem, and newer versions of the GPU will be fully supported without you needing to rebuild your application.

The following table explains the differences between the two shaders.

Vertex shaders: These are programs that get called once per vertex in your scene. For example, if you were rendering a simple scene with a single square, with one vertex at each corner, the vertex shader would be called four times. Its job is to perform calculations such as lighting and geometry transforms (moving, scaling, and rotating objects) to simulate realism.

Fragment shaders: These are programs that get called once per pixel in your scene.
So, if you're rendering that same simple scene with a single square, the fragment shader will be called once for each pixel that the square covers. Fragment shaders can also perform lighting calculations, and so on, but their most important job is to set the final color for the pixel.

Next, we will examine the implementation of the vertex shader that the OpenGL template created for us. You will notice that these shaders are code files that have been implemented using C-syntax-like instructions. Let's start by examining each section of the vertex shader file:

Open the Shader.vsh vertex shader file located within the OpenGLESExample folder of the Project Navigator window, and examine the following code snippet.

attribute vec4 position;
attribute vec3 normal;

varying lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;

    gl_Position = modelViewProjectionMatrix * position;
}

Next, we will take a look at what this piece of code is doing and explain what is actually going on. So let's start.

The attribute keyword declares that this shader is going to be passed an input variable called position. This will be used to indicate the position of the vertex. You will notice that the position variable has been declared of type vec4, which means that each vertex contains four floating-point values. The second attribute input variable, declared with the name normal, has been declared of type vec3, which means that each vertex contains three floating-point values that are used for the rotational aspect around the x, y, and z axes. The diffuseColor variable defines the color to be used for the vertex.

We declare another variable called colorVarying. You will notice that it doesn't contain the attribute keyword. This is because it is an output variable that will be passed to the fragment shader. The varying keyword tells us the value for a particular vertex. This basically means that you can specify a different color for each vertex, and all the values in between will be blended into a neat gradient that you will see in the final output. We have declared this as vec4, because colors are comprised of four component values.

Finally, we declare two uniform variables called modelViewProjectionMatrix and normalMatrix. The model, view, and projection matrices are three separate matrices. Model maps from an object's local coordinate space into world space, view from world space to camera space, and projection from camera space to screen space. When all three are used, you can use the one combined result to map all the way from object space to screen space, enabling you to work out what you need to pass on to the next stage of the programmable pipeline from the incoming vertex positions. The normal matrix vectors are used to determine how much light is received at the specified vertex or surface. Uniforms are a second form of data that allow you to pass values from your application code to the shaders. Uniform types are available to both vertex and fragment shaders, unlike attributes, which are only available to the vertex shader.
The value of a uniform cannot be changed by the shaders, and it will have the same value every time a shader runs for a given trip through the pipeline. Uniforms can also contain any kind of data that you want to pass along for use in your shader.

Next, we assign the computed per-vertex diffuse color to the varying variable colorVarying. This value will then be available in the fragment shader in interpolated form. Finally, we set the gl_Position output variable by multiplying the incoming vertex position by modelViewProjectionMatrix, which positions the vertex along the X, Y, and Z axes.

Next, we will take a look at the fragment shader that the OpenGL ES template created for us. Open the Shader.fsh fragment shader file located within the OpenGLESExample folder of the Project Navigator window, and examine the following code snippet.

varying lowp vec4 colorVarying;

void main()
{
    gl_FragColor = colorVarying;
}

We will now take a look at this code snippet, and explain what is actually going on here.

You will notice that, within the fragment shader, the declaration of the varying variable colorVarying, as highlighted in the code, has the same name as it did in the vertex shader. This is very important; if these names were different, OpenGL ES wouldn't realize it's the same variable, and your program would produce unexpected results. The type is also very important; it has to be the same data type as was declared within the vertex shader.

The lowp keyword is a GLSL keyword that is used to specify the precision, that is, the number of bytes used to represent a number. From a programming point of view, the more bytes that are used to represent a number, the fewer problems you are likely to have with the rounding of floating-point calculations. GLSL allows you to apply precision modifiers any time a variable is declared, and a precision must be declared within the fragment shader file; failure to declare it will result in your shader failing to compile. The lowp keyword is going to give you the best performance with the least accuracy during interpolation. This is the better option when dealing with colors, where small rounding errors don't matter. Should you find the need to increase the precision, it is better to use mediump or highp, if the lack of precision causes you problems within your application.

For more information on the OpenGL ES Shading Language (GLSL) or the precision modifiers, refer to the documentation located at http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf.
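Because shader compilation is deferred to runtime, as described earlier, loading a shader boils down to a handful of OpenGL ES calls. The following is a minimal sketch of that step, modeled loosely on the compileShader: helper that the OpenGL Game template generates; it assumes the vertShaderPathname variable loaded from the bundle above:

// read the GLSL source from the bundle and compile it at runtime
GLuint shader = glCreateShader(GL_VERTEX_SHADER);
NSString *sourceString = [NSString stringWithContentsOfFile:vertShaderPathname
                                                   encoding:NSUTF8StringEncoding
                                                      error:nil];
const GLchar *source = (GLchar *)[sourceString UTF8String];

glShaderSource(shader, 1, &source, NULL);  // hand the source text to the driver
glCompileShader(shader);                   // compile for whatever GPU is present

GLint compileStatus = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compileStatus);
if (compileStatus == 0) {
    glDeleteShader(shader);                // compilation failed; check the info log
}

The same sequence, with GL_FRAGMENT_SHADER, applies to the fragment shader; the two compiled shaders are then attached to a program object and linked with glLinkProgram.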


Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop

Packt
22 Dec 2011
10 min read
(For more resources related to this subject, see here.)

Animating a View using the "animate" method

Any Window, View, or Component in Titanium can be animated using the animate method. This allows you to quickly and confidently create animated objects that can give your applications the "wow" factor. Additionally, you can use animations as a way of holding information or elements off screen until they are actually required. A good example of this would be if you had three different TableViews, but only wanted one of those views visible at any one time. Using animations, you could slide those tables in and out of the screen space whenever it suited you, without the complication of creating additional Windows.

In the following recipe, we will create the basic structure of our application by laying out a number of different components, and then get down to animating four different ImageViews. These will each contain a different image to use as our "Funny Face" character. Complete source code for this recipe can be found in the /Chapter 7/Recipe 1 folder.

Getting ready

To prepare for this recipe, open up Titanium Studio and log in if you have not already done so. If you need to register a new account, you can do so for free directly from within the application. Once you are logged in, click on New Project, and the details window for creating a new project will appear. Enter FunnyFaces as the name of the app, and fill in the rest of the details with your own information. Pay attention to the app identifier, which is normally written in reverse domain notation (that is, com.packtpub.funnyfaces). This identifier cannot be easily changed after the project is created, and you will need to match it exactly when creating provisioning profiles for distributing your apps later on.

The first thing to do is copy all of the required images into an images folder under your project's Resources folder. Then, open the app.js file in your IDE and replace its contents with the following code. This code will form the basis of our FunnyFaces application layout.

// this sets the background color of the master UIView
Titanium.UI.setBackgroundColor('#fff');

// create root window
var win1 = Titanium.UI.createWindow({
    title: 'Funny Faces',
    backgroundColor: '#fff'
});

// this will determine whether we load the 4 funny face
// images or whether one is selected already
var imageSelected = false;

// the 4 image face objects, yet to be instantiated
var image1;
var image2;
var image3;
var image4;

var imageViewMe = Titanium.UI.createImageView({
    image: 'images/me.png',
    width: 320,
    height: 480,
    left: 0,
    top: 0,
    zIndex: 0,
    visible: false
});
win1.add(imageViewMe);

var imageViewFace = Titanium.UI.createImageView({
    image: 'images/choose.png',
    width: 320,
    height: 480,
    zIndex: 1
});
imageViewFace.addEventListener('click', function(e){
    if(imageSelected == false){
        // transform our 4 image views onto screen so
        // the user can choose one!
    }
});
win1.add(imageViewFace);

// this footer will hold our save button and zoom slider objects
var footer = Titanium.UI.createView({
    height: 40,
    backgroundColor: '#000',
    bottom: 0,
    left: 0,
    zIndex: 2
});

var btnSave = Titanium.UI.createButton({
    title: 'Save Photo',
    width: 100,
    left: 10,
    height: 34,
    top: 3
});
footer.add(btnSave);

var zoomSlider = Titanium.UI.createSlider({
    left: 125,
    top: 8,
    height: 30,
    width: 180
});
footer.add(zoomSlider);

win1.add(footer);

// open root window
win1.open();

Build and run your application in the emulator for the first time, and you should end up with a screen similar to the following example:

How to do it…

Now, back in the app.js file, we are going to animate the four ImageViews, each of which will provide an option for our funny face image. Inside the declaration of the imageViewFace object's event handler, type in the following code:

imageViewFace.addEventListener('click', function(e){
    if(imageSelected == false){
        // transform our 4 image views onto screen so
        // the user can choose one!
        image1 = Titanium.UI.createImageView({
            backgroundImage: 'images/clown.png',
            left: -160,
            top: -140,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image1.addEventListener('click', setChosenImage);
        win1.add(image1);

        image2 = Titanium.UI.createImageView({
            backgroundImage: 'images/policewoman.png',
            left: 321,
            top: -140,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image2.addEventListener('click', setChosenImage);
        win1.add(image2);

        image3 = Titanium.UI.createImageView({
            backgroundImage: 'images/vampire.png',
            left: -160,
            bottom: -220,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image3.addEventListener('click', setChosenImage);
        win1.add(image3);

        image4 = Titanium.UI.createImageView({
            backgroundImage: 'images/monk.png',
            left: 321,
            bottom: -220,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image4.addEventListener('click', setChosenImage);
        win1.add(image4);

        image1.animate({
            left: 0, top: 0, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN
        });
        image2.animate({
            left: 160, top: 0, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_OUT
        });
        image3.animate({
            left: 0, bottom: 20, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
        });
        image4.animate({
            left: 160, bottom: 20, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_LINEAR
        });
    }
});

Now launch the emulator from Titanium Studio and you should see the initial layout with our "Tap To Choose An Image" view visible. Tapping the choose ImageView should now animate our four funny face options onto the screen, as seen in the following screenshot:

How it works…

The first block of code creates the basic layout for our application, which consists of a couple of ImageViews, a footer view holding our "save" button, and the Slider control, which we'll use later on to increase the zoom scale of our own photograph.

Our second block of code is where it gets interesting. Here, we're doing a simple check that the user hasn't already selected an image, using the imageSelected Boolean, before getting into our animated ImageViews, named image1, image2, image3, and image4. The concept behind the animation of these four ImageViews is pretty simple. All we're essentially doing is changing the properties of our control over a period of time, defined by us in milliseconds. Here, we are changing the top and left properties of all of our images over a period of half a second, so that we get the effect of them sliding into place on our screen.
You can further enhance these animations by adding more properties to animate. For example, if we wanted to change the opacity of image1 from 50 percent to 100 percent as it slides into place, we could change the code to something similar to the following:

image1 = Titanium.UI.createImageView({
    backgroundImage: 'images/clown.png',
    left: -160,
    top: -140,
    width: 160,
    height: 220,
    zIndex: 2,
    opacity: 0.5
});
image1.addEventListener('click', setChosenImage);
win1.add(image1);

image1.animate({
    left: 0,
    top: 0,
    duration: 500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN,
    opacity: 1.0
});

Finally, the curve property of animate() allows you to adjust the easing of your animated component. Here, we used all four animation-curve constants, one on each of our ImageViews. They are:

- Titanium.UI.ANIMATION_CURVE_EASE_IN: Accelerate the animation slowly
- Titanium.UI.ANIMATION_CURVE_EASE_OUT: Decelerate the animation slowly
- Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT: Accelerate and decelerate the animation slowly
- Titanium.UI.ANIMATION_CURVE_LINEAR: Make the animation speed constant throughout the animation cycle

Animating a View using 2D matrix and 3D matrix transforms

You may have noticed that each of our ImageViews in the previous recipe had a click event listener attached to it, calling an event handler named setChosenImage. This event handler is going to handle setting our chosen "funny face" image on the imageViewFace control. It will then animate all four "funny face" ImageView objects off the screen area using a number of different 2D and 3D matrix transforms. Complete source code for this recipe can be found in the /Chapter 7/Recipe 2 folder.

How to do it…

Replace the existing setChosenImage function, which currently stands empty, with the following source code:

// this function sets the chosen image and removes the 4
// funny faces from the screen
function setChosenImage(e){
    imageViewFace.image = e.source.backgroundImage;
    imageViewMe.visible = true;

    // create the first transform
    var transform1 = Titanium.UI.create2DMatrix();
    transform1 = transform1.rotate(-180);

    var animation1 = Titanium.UI.createAnimation({
        transform: transform1,
        duration: 500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image1.animate(animation1);
    animation1.addEventListener('complete', function(e){
        // remove our image selection from win1
        win1.remove(image1);
    });

    // create the second transform
    var transform2 = Titanium.UI.create2DMatrix();
    transform2 = transform2.scale(0);

    var animation2 = Titanium.UI.createAnimation({
        transform: transform2,
        duration: 500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image2.animate(animation2);
    animation2.addEventListener('complete', function(e){
        // remove our image selection from win1
        win1.remove(image2);
    });

    // create the third transform
    var transform3 = Titanium.UI.create2DMatrix();
    transform3 = transform3.rotate(180);
    transform3 = transform3.scale(0);

    var animation3 = Titanium.UI.createAnimation({
        transform: transform3,
        duration: 1000,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image3.animate(animation3);
    animation3.addEventListener('complete', function(e){
        // remove our image selection from win1
        win1.remove(image3);
    });

    // create the fourth and final transform
    var transform4 = Titanium.UI.create3DMatrix();
    transform4 = transform4.rotate(200, 0, 1, 1);
    transform4 = transform4.scale(2);
    transform4 = transform4.translate(20, 50, 170);
    // the m34 property controls the perspective of the 3D view
    transform4.m34 = 1.0 / -3000; // m34 is the position at [3,4] in the matrix

    var animation4 =
        Titanium.UI.createAnimation({
            transform: transform4,
            duration: 1500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
        });
    image4.animate(animation4);
    animation4.addEventListener('complete', function(e){
        // remove our image selection from win1
        win1.remove(image4);
    });

    // change the status of the imageSelected variable
    imageSelected = true;
}

How it works…

Again, we are creating animations for each of the four ImageViews, but this time in a slightly different way. Instead of using the built-in animate method, we are creating a separate animation object for each ImageView, before calling the ImageView's animate method and passing this animation object to it. This method of creating animations allows you to have finer control over them, including the use of transforms.

Transforms have a couple of shortcuts to help you perform some of the most common animation types quickly and easily. The image1 and image2 transforms, as shown in the previous code, use the rotate and scale methods respectively. Scale and rotate in this case are 2D matrix transforms, meaning they only transform the object in two-dimensional space along its X and Y axes. Each of these transformation types takes a single numeric parameter: for scale, it is the scaling factor (with 0 meaning the object is scaled away to nothing), and for rotate, it is the angle in degrees (0-360).

Another advantage of using transforms for your animations is that you can easily chain them together to perform a more complex animation style. In the previous code, you can see that both a scale and a rotate transform are applied to the image3 component. When you run the application in the emulator or on your device, you should notice that both of these transform animations are applied to the image3 control!

Finally, the image4 control also has a transform animation applied to it, but this time we are using a 3D matrix transform instead of the 2D matrix transforms used for the other three ImageViews. These work the same way as regular 2D matrix transforms, except that you can also animate your control in 3D space, along the Z-axis.

It's important to note that animations have two event listeners: start and complete. These event handlers allow you to perform actions based on the beginning or ending of your animation's life cycle. As an example, you could chain animations together by using the complete event to add a new animation or transform to an object after the previous animation has finished. In our previous example, we are using this complete event to remove our ImageView from the Window once its animation has finished.
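To make that chaining idea concrete, here is a short sketch (not part of the recipe's source code) that spins image1 and only then shrinks it away, by starting the second animation from the first one's complete event:

// spin first...
var spin = Titanium.UI.create2DMatrix().rotate(180);
var spinAnim = Titanium.UI.createAnimation({
    transform: spin,
    duration: 500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
});
spinAnim.addEventListener('complete', function(e){
    // ...then scale away once the spin has finished
    var shrink = Titanium.UI.create2DMatrix().scale(0);
    image1.animate(Titanium.UI.createAnimation({
        transform: shrink,
        duration: 500
    }));
});
image1.animate(spinAnim);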


Updating data in the background

Packt
20 May 2014
4 min read
(For more resources related to this topic, see here.)

Getting ready

Create a new Single View Application in Xamarin Studio and name it BackgroundFetchApp. Add a label to the controller.

How to do it...

Perform the following steps:

1. We need access to the label from outside of the scope of the BackgroundFetchAppViewController class, so create a public property for it as follows:

public UILabel LabelStatus {
    get { return this.lblStatus; }
}

2. Open the Info.plist file and, under the Source tab, add the UIBackgroundModes key (Required background modes) with the string value fetch. The following screenshot shows the editor after it has been set:

3. In the FinishedLaunching method of the AppDelegate class, enter the following line:

UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);

4. Enter the following code, again, in the AppDelegate class:

private int updateCount;

public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)
{
    try {
        HttpWebRequest request = WebRequest.Create("http://software.tavlikos.com") as HttpWebRequest;
        using (StreamReader sr = new StreamReader(request.GetResponse().GetResponseStream())) {
            Console.WriteLine("Received response: {0}", sr.ReadToEnd());
        }
        this.viewController.LabelStatus.Text =
            string.Format("Update count: {0}\n{1}", ++updateCount, DateTime.Now);
        completionHandler(UIBackgroundFetchResult.NewData);
    } catch {
        this.viewController.LabelStatus.Text =
            string.Format("Update {0} failed at {1}!", ++updateCount, DateTime.Now);
        completionHandler(UIBackgroundFetchResult.Failed);
    }
}

5. Compile and run the app on the simulator or on the device. Press the home button (or Command + Shift + H) to move the app to the background and wait for an output. This might take a while, though.

How it works...

The UIBackgroundModes key with the fetch value enables the background fetch functionality for our app. Without setting it, the app will not wake up in the background. After setting the key in Info.plist, we override the PerformFetch method in the AppDelegate class, as follows:

public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)

This method is called whenever the system wakes up the app. Inside it, we can connect to a server and retrieve the data we need. An important thing to note here is that we do not have to use iOS-specific APIs to connect to a server. In this example, a simple HttpWebRequest is used to fetch the contents of this blog: http://software.tavlikos.com.

After we have received the data we need, we must call the callback that is passed to the method, as follows:

completionHandler(UIBackgroundFetchResult.NewData);

We also need to pass the result of the fetch. In this example, we pass UIBackgroundFetchResult.NewData if the update is successful and UIBackgroundFetchResult.Failed if an exception occurs. If we do not call the callback in the specified amount of time, the app will be terminated. Furthermore, it might get fewer opportunities to fetch data in the future.

Lastly, to make sure that everything works correctly, we have to set the interval at which the app will be woken up, as follows:

UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);

The default interval is UIApplication.BackgroundFetchIntervalNever, so if we do not set an interval, the background fetch will never be triggered.
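As a minimal sketch (assuming the same FinishedLaunching method), a custom interval can also be passed in seconds instead of one of the predefined constants; the 1800 value here is illustrative:

// ask the system to wake the app no more often than every 30 minutes
UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(1800);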
There's more...

Apart from the functionality we added in this project, the background fetch is completely managed by the system. The interval we set is merely an indication, and the only guarantee we have is that the fetch will not be triggered sooner than the interval. In general, the system monitors the usage of all apps and will make sure to trigger the background fetch according to how often each app is used. Apart from the predefined values, we can pass whatever value we want, in seconds.

UI updates

We can update the UI in the PerformFetch method. iOS allows this so that the app's screenshot is updated while the app is in the background. However, note that we need to keep UI updates to an absolute minimum.

Summary

This article covered the things to keep in mind to make use of iOS 7's background fetch feature.

Resources for Article:

Further resources on this subject:
- Getting Started on UDK with iOS [Article]
- Interface Designing for Games in iOS [Article]
- Linking OpenCV to an iOS project [Article]


Getting Ready to Launch Your PhoneGap App in the Real World

Packt
31 Oct 2014
7 min read
In this article by Yuxian, Eugene Liang, author of PhoneGap and AngularJS for Cross-platform Development, we will run through some of the things that you should do before launching your app to the world, whether through the Apple App Store or the Google Android Play Store.

(For more resources related to this topic, see here.)

Using phonegap.com

The services on https://build.phonegap.com/ are a straightforward way for you to get your app compiled for various devices. While this is a paid service, there is a free plan if you only have one app that you want to work on. This would be fine in our case. You will need an Adobe ID in order to use PhoneGap services; if you don't have one, feel free to create one. Since the process for generating compiled apps from PhoneGap may change, it's best that you visit https://build.phonegap.com/, sign up for their services, and follow their instructions.

Preparing your PhoneGap app for an Android release

This section focuses on things that are specific to the Android platform. This is by no means a comprehensive checklist, but it covers some of the common tasks that you should go through before releasing your app to the Android world.

Testing your app on real devices

It is always good to run your app on an actual handset to see how the app is working. To run your PhoneGap app on a real device, issue the following command after you plug your handset into your computer:

cordova run android

You will see that your app now runs on your handset.

Exporting your app to install on other devices

In the previous section, we talked about installing your app on your device. What if you want to export the APK so that you can test the app on other devices? Here's what you can do:

1. As usual, build your app using cordova build android. Alternatively, if you can, run cordova build android --release.
2. The previous step will create an unsigned release APK at /path_to_your_project/platforms/android/ant-build, called YourAppName-release-unsigned.apk.
3. Now, you can simply copy YourAppName-release-unsigned.apk and install it on any Android-based device you want.

Preparing promotional artwork for release

In general, you will need to include screenshots of your app for upload to Google Play. In case your device does not allow you to take screenshots, here's what you can do:

- Run your app in the emulator and take screenshots off it. The screenshots may be substantially larger, so you can crop them using GIMP or some other online image resizer.
- Alternatively, use the web app version and open it in your Google Chrome browser. Resize your browser window so that it is narrow enough to resemble the width of mobile devices.

Building your app for release

To build your app for release, you will need the Eclipse IDE. Start your Eclipse IDE and navigate to File | New | Project. Next, navigate to Existing Code | Android | Android Project. Click on Browse and select the root directory of your app. The Project to Import window should show platforms/android. Now, select Copy projects into workspace if you want, and then click on Finish.

Signing the app

We previously exported the app (unsigned) so that we could test it on devices other than those plugged into our computer. However, to release your app to the Play Store, you need to sign it with keys. The steps here are the general steps that you need to follow in order to generate a signed APK for upload to the Play Store.
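If you prefer the command line to the Eclipse wizard described next, a roughly equivalent flow uses the standard JDK and Android SDK signing tools. This is a sketch under assumed names; the keystore file, alias, and APK names are illustrative, not from the book (note that keytool's -validity value is in days, so 18250 roughly matches the 50 years used in the wizard steps that follow):

# generate a release keystore (one-time step)
keytool -genkey -v -keystore funnyfaces.keystore -alias app_name -keyalg RSA -keysize 2048 -validity 18250

# sign the unsigned release APK with that key
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore funnyfaces.keystore YourAppName-release-unsigned.apk app_name

# align the signed APK before uploading to the Play Store
zipalign -v 4 YourAppName-release-unsigned.apk YourAppName-release.apk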
1. Right-click on the project that you imported in the previous section, and then navigate to Android Tools | Export Signed Application Package. You will see the Project Checks dialog, which shows whether your project has any errors.
2. Next, you should see the Keystore selection dialog. You will now create the key, using the app name (without spaces) and the extension .keystore. Since this app is the first version, there is no prior original name to use. Browse to the location where you want to save the keystore and, in the same box, give the name of the keystore.
3. In the Keystore selection dialog, add your desired password twice and click on Next. You will now see the Key Creation dialog.
4. In the Key Creation dialog, use app_name as your alias (without any spaces) and give the password of your keystore. Feel free to enter 50 for validity (which means the key is valid for 50 years). The remaining fields, such as names, organization, and so on, are pretty straightforward, so you can just go ahead and fill them in.
5. Finally, select the Destination APK file, which is the location to which you will export your .apk file.

Bear in mind that the preceding steps are not a comprehensive list of instructions. For the official documentation, feel free to visit http://developer.android.com/tools/publishing/app-signing.html. Now that we are done with Android, it's time to prepare our app for iOS.

iOS

As you might already know, preparing your PhoneGap app for the Apple App Store requires similar levels of effort, if not more, compared to a typical Android deployment. In this section, I will not be covering things like making sure your app follows the Apple user interface guidelines, but rather how to improve your app before it reaches the App Store. Before we get started, there are some basic requirements:

- Apple Developer membership (if you ultimately want to deploy to the App Store)
- Xcode

Running your app on an iOS device

If you already have an iOS device, all you need to do is plug your iOS device into your computer and issue the following command:

cordova run ios

You should see your PhoneGap app build and launch on your device. Note that before running the preceding command, you will need to install the ios-deploy package. You can install it using the following command:

sudo npm install -g ios-deploy

Other techniques

There are other ways to test and deploy your apps. These methods can be useful if you want to deploy your app to your own devices, or even for external device testing.

Using Xcode

Now let's get started with Xcode:

1. After starting your project using the command-line tool and adding iOS platform support, you can start developing using Xcode. Start Xcode and click on Open Other, as shown in the following screenshot:
In general, you should have a good idea of how to develop AngularJS apps and apply mobile skins on them so that it can be used on PhoneGap. You should also notice that developing for PhoneGap apps typically takes the pattern of creating a web app first, before converting it to a PhoneGap version. Of course, you may structure your project so that you can build a PhoneGap version from day one, but it may make testing more difficult. Anyway, I hope that you enjoyed this article and feel free to follow me at http://www.liangeugene.com and http://growthsnippets.com. Resources for Article: Further resources on this subject: Using Location Data with PhoneGap [Article] Working with the sharing plugin [Article] Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]


So, what is XenMobile?

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.)

XenMobile is the next generation of mobile device management (MDM) from Citrix. XenMobile gives organizations the ability to automate most of the administrative tasks on mobile devices, for both corporate and highly secured environments. In addition, it can also help organizations manage bring your own device (BYOD) environments.

XenMobile MDM allows administrators to configure role-based management for device provisioning and security, for both corporate and employee-owned BYOD devices. When a user enrolls their mobile device, an organization can automatically provision policies and apps to the device, blacklist or whitelist apps, detect and protect against jailbroken devices, and wipe or selectively wipe a device that is lost, stolen, or out of compliance. A selective wipe means that only corporate or sensitive data is deleted, while personal data stays intact on the user's device. XenMobile supports every major mobile OS in use today, giving users the freedom to choose and use a device of their choice.

Citrix has done an excellent job of recognizing that organizations need MDM as a key component of a secure mobile ecosystem. Citrix XenMobile adds other features, such as secure @WorkMail, @WorkWeb, and ShareFile integration, so that organizations can access e-mail and the Internet, and exchange documents, securely and safely. There are other popular solutions on the market that make similar claims. Unlike XenMobile, however, they rely on container-based solutions, which limit native applications. Container-based solutions are applications that embed corporate data, e-mail, contact, and calendar data. Unfortunately, in many cases these solutions break the user experience by limiting how users can work with native applications. XenMobile avoids compromising the user experience by allowing the secure applications to exist alongside, and share, the same calendar, contacts, and other key integration points on the mobile device. At the time of writing this article, Citrix was the only vendor with a single management platform that provided MDM features with secure storage, integrated VDI, multitenancy, and application load balancing, which we believe are some of the differentiators between XenMobile and its competitors.

Citrix XenMobile MDM Architecture

Mobile application stores (MAS) and mobile application management (MAM) are concepts by which you manage and secure access to individual applications on a mobile device, while leaving the rest of the device unmanaged. In some scenarios, this is considered a great way of managing BYOD environments, because organizations only need to worry about the applications and the data they manage. XenMobile supports mobile application management and individual application policies, in addition to the holistic device policies found in other competing products.

In this article, you will gain a deep understanding of XenMobile and its key features. You will learn how to install, configure, and use XenMobile in your environment to manage corporate and BYOD devices. We will explore how to get started with XenMobile, how to configure policies and security, and how to deploy XenMobile in an organization. Next, we will look at some of the advanced features in XenMobile, how and when to use them, how to manage compliance breaches, and other top features. Finally, we will explore what to do next once you have XenMobile configured.
Welcome to the world of XenMobile MDM. Let's get started.

Mobile device management (MDM) is a software solution that helps organizations manage, provision, and secure the lifecycle of a mobile device. MDM systems allow enterprises to mass deploy policies, settings, and applications to mobile devices. These capabilities can include provisioning mobile devices for Wi-Fi access and corporate e-mail, deploying in-house applications, tracking locations, and performing remote wipes. Enterprise mobile device management solutions provide these capabilities over the air and for multiple mobile operating systems.

Blackberry, with its product Blackberry Enterprise Server (BES), can be considered the world's first real mobile enterprise solution, and BES is still a very capable and well-respected MDM solution. Blackberry devices were among the first that gave organizations fine-grained control over their users' mobile devices. The Blackberry device was essentially a dumb device until it was connected to a BES server. Once connected, the device would download policies, which governed what features the device could use. This included everything from voice roaming and Internet usage to camera and storage policies. Because of its detailed configurability, Blackberry became the standard for most corporations that wanted to use mobile devices and secure them.

Apple and Google have since made the smartphone a mainstream device and the tablet the computing platform of choice. People ended up waiting days in line to buy the latest gadget, and once they had it, you had better believe they wanted to use it all the time. All of a sudden, organizations had hundreds of people wanting to connect their personal devices to the corporate network in order to work more efficiently with a device they enjoyed. The revolution of the consumerization of IT had begun. In addition to Apple and Google devices, XenMobile supports Blackberry, Windows Phone, and other well-known mobile operating systems.

Many vendors rushed to bring solutions to market to help organizations manage Apple and Google mobile devices in enterprise architectures, trying to give organizations the same management and security that Blackberry had provided with BES. Over the years, Apple and Google both recognized the need for mobile management and started building mobile device management features into their operating systems, so that MDM solutions could provide more granular management and security control for enterprise organizations.

Today, organizations are replacing older mobile devices in favor of Apple and Google devices. They feel comfortable having these devices connected to corporate networks because they believe that they can manage and secure them with MDM solutions. MDM solutions are the platform with which organizations ensure that mobile devices meet the technical, legal, and business compliance requirements that allow their users to use devices of their choice, devices that are modern and, in many cases, more productive than their legacy counterparts.

MDM vendors have chosen either container-based solutions or device-based management. Container-based solutions provide segmentation of device data and allow organizations to completely ignore the rest of the device, since all corporate data is self-contained. A good analogy for container-based solutions is Outlook Web Access, which allows any computer to access Exchange e-mail through a web browser.
The computer's software and applications are completely agnostic to corporate e-mail. Container-based solutions are similar, since they are indifferent to the mobile device's data and other configuration components when being used to access an organization's resources, for example, e-mail on a mobile phone. Device-based management solutions allow organizations to manage device and application settings, but they can only enforce security policies based on the features made available to them by device manufacturers. XenMobile is a device-based management solution; however, it has many of the features found in container-based solutions, giving organizations the best of both worlds.

Summary

This article briefly covered the functionality and features of XenMobile, to give the reader an idea of what XenMobile offers.

Resources for Article:

Further resources on this subject:
- Creating mobile friendly themes [Article]
- Creating and configuring a basic mobile application [Article]
- Mobiles First - How and Why [Article]


Tracking Objects in Videos

Packt
12 Aug 2015
13 min read
In this article by Salil Kapur and Nisarg Thakkar, authors of the book Mastering OpenCV Android Application Programming, we will look at the broader aspects of object tracking in videos. Object tracking is one of the most important applications of computer vision. It can be used for many applications, some of which are as follows:

- Human-computer interaction: We might want to track the position of a person's finger and use its motion to control the cursor on our machines
- Surveillance: Street cameras can capture pedestrians' motions that can be tracked to detect suspicious activities
- Video stabilization and compression
- Statistics in sports: By tracking a player's movement in a game of football, we can provide statistics such as distance travelled, heat maps, and so on

In this article, you will learn the following topics:

- Optical flow
- Image pyramids

(For more resources related to this topic, see here.)

Optical flow

Optical flow is an algorithm that detects the pattern of the motion of objects, or edges, between consecutive frames in a video. This motion may be caused by the motion of the object or the motion of the camera. Optical flow is a vector that depicts the motion of a point from the first frame to the second. The optical flow algorithm works under two basic assumptions:

- The pixel intensities are almost constant between consecutive frames
- The neighboring pixels have the same motion as the anchor pixel

We can represent the intensity of a pixel in any frame by $f(x, y, t)$. Here, the parameter $t$ represents the frame in a video. Let's assume that, in the next $dt$ time, the pixel moves by $(dx, dy)$. Since we have assumed that the intensity doesn't change in consecutive frames, we can say:

$f(x, y, t) = f(x + dx, y + dy, t + dt)$

Now we take the Taylor series expansion of the RHS in the preceding equation:

$f(x + dx, y + dy, t + dt) \approx f(x, y, t) + \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial t}\,dt$

Cancelling the common term, we get:

$f_x\,dx + f_y\,dy + f_t\,dt = 0$

where $f_x = \partial f / \partial x$, $f_y = \partial f / \partial y$, and $f_t = \partial f / \partial t$. Dividing both sides of the equation by $dt$, we get:

$f_x u + f_y v + f_t = 0$

where $u = dx/dt$ and $v = dy/dt$. This equation is called the optical flow equation. Rearranging the equation, we get:

$f_x u + f_y v = -f_t$

We can see that this represents the equation of a line in the $(u, v)$ plane. However, with only one equation available and two unknowns, this problem is under-constrained at the moment.

The Horn and Schunck method

By taking into account our assumptions, we minimize an energy of the following form:

$E = \iint \left[ (f_x u + f_y v + f_t)^2 + \alpha^2 \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) \right] dx\,dy$

We can say that the first term will be small due to our assumption that the brightness is constant between consecutive frames, so the square of this term will be even smaller. The second term corresponds to the assumption that the neighboring pixels have motion similar to the anchor pixel. We need to minimize the preceding equation. For this, we differentiate it with respect to $u$ and $v$, and get the following equations:

$f_x (f_x u + f_y v + f_t) = \alpha^2 \Delta u$
$f_y (f_x u + f_y v + f_t) = \alpha^2 \Delta v$

Here, $\Delta u$ and $\Delta v$ are the Laplacians of $u$ and $v$ respectively.

The Lucas and Kanade method

We start off with the optical flow equation that we derived earlier and notice that it is under-constrained, as it has one equation and two variables:

$f_x u + f_y v = -f_t$

To overcome this problem, we make use of the assumption that pixels in a 3x3 neighborhood have the same optical flow:

$f_x(p_i)\,u + f_y(p_i)\,v = -f_t(p_i), \quad i = 1, \ldots, 9$

We can rewrite these equations in the form of matrices, as shown here:

$A U = b$

where:

$A = \begin{bmatrix} f_x(p_1) & f_y(p_1) \\ \vdots & \vdots \\ f_x(p_9) & f_y(p_9) \end{bmatrix}, \quad U = \begin{bmatrix} u \\ v \end{bmatrix}, \quad b = -\begin{bmatrix} f_t(p_1) \\ \vdots \\ f_t(p_9) \end{bmatrix}$

As we can see, $A$ is a 9x2 matrix, $U$ is a 2x1 matrix, and $b$ is a 9x1 matrix. Ideally, to solve for $U$, we would just need to multiply both sides of the equation by $A^{-1}$. However, this is not possible, as we can only take the inverse of square matrices.
Thus, we try to transform $A$ into a square matrix by first multiplying the equation by $A^T$ on both sides:

$A^T A U = A^T b$

Now $A^T A$ is a square matrix of dimension 2x2. Hence, we can take its inverse:

$U = (A^T A)^{-1} A^T b$

On solving this equation, we get:

$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum f_x^2 & \sum f_x f_y \\ \sum f_x f_y & \sum f_y^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum f_x f_t \\ -\sum f_y f_t \end{bmatrix}$

This method of multiplying by the transpose and then taking an inverse is called the pseudo-inverse. The same equation can also be obtained by finding the minimum of the following:

$E = \sum (f_x u + f_y v + f_t)^2$

According to the optical flow equation and our assumptions, this value should be equal to zero. Since the neighborhood pixels do not have exactly the same values as the anchor pixel, this value is very small. This method is called Least Square Error. To solve for the minimum, we differentiate this equation with respect to $u$ and $v$, and equate it to zero. We get the following equations:

$\sum f_x (f_x u + f_y v + f_t) = 0$
$\sum f_y (f_x u + f_y v + f_t) = 0$

Now we have two equations and two variables, so this system of equations can be solved. We rewrite the preceding equations as follows:

$u \sum f_x^2 + v \sum f_x f_y = -\sum f_x f_t$
$u \sum f_x f_y + v \sum f_y^2 = -\sum f_y f_t$

So, by arranging these equations in the form of a matrix, we get the same equation as obtained earlier. Since the matrix is now a 2x2 matrix, it is possible to take its inverse. Solving for $u$ and $v$, we get:

$u = \dfrac{-\sum f_y^2 \sum f_x f_t + \sum f_x f_y \sum f_y f_t}{\sum f_x^2 \sum f_y^2 - \left(\sum f_x f_y\right)^2}$

$v = \dfrac{\sum f_x f_t \sum f_x f_y - \sum f_x^2 \sum f_y f_t}{\sum f_x^2 \sum f_y^2 - \left(\sum f_x f_y\right)^2}$

Now we have the values of all the $f_x$, $f_y$, and $f_t$ terms, so we can find the values of $u$ and $v$ for each pixel. When we implement this algorithm, it is observed that the optical flow is not very smooth near the edges of objects. This is due to the brightness constraint not being satisfied. To overcome this situation, we use image pyramids.

Checking out the optical flow on Android

To see the optical flow in action on Android, we will create a grid of points over a video feed from the camera, and lines will then be drawn for each point, depicting the motion of that point on the video, superimposed on the grid point.

Before we begin, we will set up our project to use OpenCV and obtain the feed from the camera. We will process the frames to calculate the optical flow:

1. First, create a new project in Android Studio. We will set the activity name to MainActivity.java and the XML resource file as activity_main.xml.

2. Second, we will give the app permission to access the camera. In the AndroidManifest.xml file, add the following line to the manifest tag:

<uses-permission android:name="android.permission.CAMERA" />

3. Make sure that your activity tag for MainActivity contains the following line as an attribute:

android:screenOrientation="landscape"

Our activity_main.xml file will contain a simple JavaCameraView. This is a custom OpenCV-defined layout that enables us to access the camera frames and process them as normal Mat objects. The XML code is shown here:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <org.opencv.android.JavaCameraView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:id="@+id/main_activity_surface_view" />

</LinearLayout>

Now, let's work on some Java code.
First, we'll define some global variables that we will use later in the code:

private static final String TAG = "com.packtpub.masteringopencvandroid.chapter5.MainActivity";

private static final int VIEW_MODE_KLT_TRACKER = 0;
private static final int VIEW_MODE_OPTICAL_FLOW = 1;

private int mViewMode;
private Mat mRgba;
private Mat mIntermediateMat;
private Mat mGray;
private Mat mPrevGray;

MatOfPoint2f prevFeatures, nextFeatures;
MatOfPoint features;

MatOfByte status;
MatOfFloat err;

private MenuItem mItemPreviewOpticalFlow, mItemPreviewKLT;

private CameraBridgeViewBase mOpenCvCameraView;

We will need to create a callback function for OpenCV, like we did earlier. In addition to the code we used earlier, we will also enable CameraView to capture frames for processing:

private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        switch (status) {
            case LoaderCallbackInterface.SUCCESS:
            {
                Log.i(TAG, "OpenCV loaded successfully");
                mOpenCvCameraView.enableView();
            } break;
            default:
            {
                super.onManagerConnected(status);
            } break;
        }
    }
};

We will now check whether the OpenCV Manager is installed on the phone, which contains the required libraries. In the onResume function, add the following line of code:

OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mLoaderCallback);

In the onCreate() function, add the following line before calling setContentView to prevent the screen from turning off while using the app:

getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

We will now initialize our JavaCameraView object. Add the following lines after setContentView has been called:

mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.main_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);

Notice that we called setCvCameraViewListener with the this parameter. For this, we need to make our activity implement the CvCameraViewListener2 interface. So, your class definition for the MainActivity class should look like the following code:

public class MainActivity extends Activity implements CvCameraViewListener2

We will add a menu to this activity to toggle between the different examples in this article. Add the following lines to the onCreateOptionsMenu function:

mItemPreviewKLT = menu.add("KLT Tracker");
mItemPreviewOpticalFlow = menu.add("Optical Flow");

We will now add some actions to the menu items. In the onOptionsItemSelected function, add the following lines:

if (item == mItemPreviewOpticalFlow) {
    mViewMode = VIEW_MODE_OPTICAL_FLOW;
    resetVars();
} else if (item == mItemPreviewKLT){
    mViewMode = VIEW_MODE_KLT_TRACKER;
    resetVars();
}

return true;

We used a resetVars function to reset all the Mat objects.
It has been defined as follows:

private void resetVars(){
    mPrevGray = new Mat(mGray.rows(), mGray.cols(), CvType.CV_8UC1);
    features = new MatOfPoint();
    prevFeatures = new MatOfPoint2f();
    nextFeatures = new MatOfPoint2f();
    status = new MatOfByte();
    err = new MatOfFloat();
}

We will also add the code to make sure that the camera is released for use by other applications whenever our application is suspended or killed. So, add the following snippet of code to the onPause and onDestroy functions:

if (mOpenCvCameraView != null)
    mOpenCvCameraView.disableView();

After the OpenCV camera has been started, the onCameraViewStarted function is called, which is where we will add all our object initializations:

public void onCameraViewStarted(int width, int height) {
    mRgba = new Mat(height, width, CvType.CV_8UC4);
    mIntermediateMat = new Mat(height, width, CvType.CV_8UC4);
    mGray = new Mat(height, width, CvType.CV_8UC1);
    resetVars();
}

Similarly, the onCameraViewStopped function is called when we stop capturing frames. Here we will release all the objects we created when the view was started:

public void onCameraViewStopped() {
    mRgba.release();
    mGray.release();
    mIntermediateMat.release();
}

Now we will add the implementation to process each frame of the feed that we captured from the camera. OpenCV calls the onCameraFrame method for each frame, with the frame as a parameter. We will use this to process each frame. We will use the viewMode variable to distinguish between the optical flow and the KLT tracker, and have different case constructs for the two:

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    final int viewMode = mViewMode;
    switch (viewMode) {
        case VIEW_MODE_OPTICAL_FLOW:

We will use the gray() function to obtain the Mat object that contains the captured frame in a grayscale format. OpenCV also provides a similar function called rgba() to obtain a colored frame. Then we will check whether this is the first run. If this is the first run, we will create and fill up a features array that stores the position of all the points in a grid, where we will compute the optical flow:

            mGray = inputFrame.gray();
            if(features.toArray().length==0){
                int rowStep = 50, colStep = 100;
                int nRows = mGray.rows()/rowStep, nCols = mGray.cols()/colStep;

                Point points[] = new Point[nRows*nCols];
                for(int i=0; i<nRows; i++){
                    for(int j=0; j<nCols; j++){
                        points[i*nCols+j]=new Point(j*colStep, i*rowStep);
                    }
                }

                features.fromArray(points);

                prevFeatures.fromList(features.toList());
                mPrevGray = mGray.clone();
                break;
            }

The mPrevGray object refers to the previous frame in a grayscale format. We copied the points to a prevFeatures object that we will use to calculate the optical flow and store the corresponding points of the next frame in nextFeatures. All of the computation is carried out in the calcOpticalFlowPyrLK OpenCV-defined function.
This function takes in the grayscale version of the previous frame, the current grayscale frame, an object that contains the feature points whose optical flow needs to be calculated, and an object that will store the position of the corresponding points in the current frame:

            nextFeatures.fromArray(prevFeatures.toArray());
            Video.calcOpticalFlowPyrLK(mPrevGray, mGray, prevFeatures, nextFeatures, status, err);

Now, we have the position of the grid of points and their position in the next frame as well. So, we will now draw a line that depicts the motion of each point on the grid:

            List<Point> prevList=features.toList(), nextList=nextFeatures.toList();
            Scalar color = new Scalar(255);

            for(int i = 0; i<prevList.size(); i++){
                Core.line(mGray, prevList.get(i), nextList.get(i), color);
            }

Before the loop ends, we have to copy the current frame to mPrevGray so that we can calculate the optical flow in the subsequent frames:

            mPrevGray = mGray.clone();
            break;
        default: mViewMode = VIEW_MODE_OPTICAL_FLOW;

After we end the switch case construct, we will return a Mat object. This is the image that will be displayed as an output to the user of the application. Here, since all our operations and processing were performed on the grayscale image, we will return this image:

    return mGray;

So, this is all about optical flow. The result can be seen in the following image:

Optical flow at various points in the camera feed

Image pyramids

Pyramids are multiple copies of the same images that differ in their sizes. They are represented as layers, as shown in the following figure. Each level in the pyramid is obtained by reducing the rows and columns by half. Thus, effectively, we make the image's size one quarter of its original size:

Relative sizes of pyramids

Pyramids intrinsically define reduce and expand as their two operations. Reduce refers to a reduction in the image's size, whereas expand refers to an increase in its size. We will use a convention that lower levels in a pyramid mean downsized images and higher levels mean upsized images.

Gaussian pyramids

In the reduce operation, the equation that we use to successively find levels in pyramids, while using a 5x5 sliding window, has been written as follows. Notice that the size of the image reduces to a quarter of its original size:

g(l-1)(i, j) = Σ Σ w(m, n) g(l)(2i + m, 2j + n),   with m, n = -2, ..., 2

The elements of the weight kernel, w, should add up to 1. We use a 5x5 Gaussian kernel for this task. This operation is similar to convolution, with the exception that the resulting image doesn't have the same size as the original image. The following image shows you the reduce operation:

The reduce operation

The expand operation is the reverse process of reduce. We try to generate images of a higher size from images that belong to lower layers. Thus, the resulting image is blurred and is of a lower resolution. The equation we use to perform expansion is as follows:

g(l+1)(i, j) = 4 Σ Σ w(m, n) g(l)((i - m)/2, (j - n)/2),   with m, n = -2, ..., 2

where only the terms for which (i - m)/2 and (j - n)/2 are integers contribute to the sum. The weight kernel in this case, w, is the same as the one used to perform the reduce operation. The following image shows you the expand operation:

The expand operation

The weights are calculated using the Gaussian function to perform Gaussian blur.

Summary

In this article, we have seen how to detect local and global motion in a video, and how we can track objects. We have also learned about Gaussian pyramids, and how they can be used to improve the performance of some computer vision tasks.
Resources for Article:

Further resources on this subject:
New functionality in OpenCV 3.0 [article]
Seeing a Heartbeat with a Motion Amplifying Camera [article]
Camera Calibration [article]

Getting Started With Cocos2d

Packt
28 Feb 2011
8 min read
Cocos2d for iPhone 0.99 Beginner's Guide: Make mind-blowing 2D games for iPhone with this fast, flexible, and easy-to-use framework!

A cool guide to learning cocos2d with iPhone to get you into the iPhone game industry quickly
Learn all the aspects of cocos2d while building three different games
Add a lot of trendy features such as particles and tilemaps to your games to captivate your players
Full of illustrations, diagrams, and tips for building iPhone games, with clear step-by-step instructions and practical examples

(For more resources on this subject, see here.)

Downloading Cocos2d for iPhone

Visit http://www.cocos2d-iphone.org to download the latest files. Search the downloads page, and there you will find the different versions available. As of this writing, version 0.99.5 is the latest stable version. Once you have downloaded the file to your computer, uncompress the folder to your desktop and rename it to Cocos2d. Open the uncompressed folder and take a look at its contents. Inside that folder, you will find everything you will need to make a game with Cocos2d. The following is a list of the important folders and files:

Cocos2d: The bread and butter of Cocos2d.
CocosDenshion: The sound engine that comes along.
Cocoslive: Cocos2d comes packed with its own highscore server. You can easily set it up to create a scoreboard and ranking for your game.
External: Here are the sources for the Box2d and Chipmunk physics engines, among other things.
Extras: Useful classes created by the community.
Resources: Images, sounds, tilemaps, and so on, used in the tests.
Templates: The contents of the templates we'll soon install.
Test: The samples used to demonstrate all the things Cocos2d can do.
Tools: Some tools you may find useful, such as a ParticleView project that lets you preview how particles will look.
License files: They are here in case you need to look at them.

We'll start by taking a look at the samples.

Time for action – opening the samples project

Inside the cocos2d-iphone folder, you will find a file named cocos2d-iphone.xcodeproj. This project contains all the samples and tests that come with the source code. Let's run one of them.

Open the .xcodeproj file. In the groups and files panel you will find the source code of all the samples. There are more than 35 samples that cover all of the capabilities of Cocos2d, as shown in the following screenshot:

Compile and run the SpriteTest. This project comes with a target for each of the available tests, so in order to try any of them, you have to select it from the overview dropdown. Go ahead and select the SpriteTest target and click on Build and Run. If everything goes well, the test should start running. Play a little with it, feel how it behaves, and be amazed by the endless possibilities.

What just happened?

You have just run your first Cocos2d application. If you want to run another sample, just select it by changing the "Active target". When you do so, the "Active executable" should change to match; if it doesn't, select it manually from the overview dropdown box. You can see which active target and active executable are selected from that same overview dropdown box.

Installing the templates

Cocos2d comes with three templates. These templates are the starting point for any Cocos2d game.
They let you:

Create a simple Cocos2d project
Create a Cocos2d project with Chipmunk as the physics engine
Create a Cocos2d project with Box2d as the physics engine

Which one you decide to use for your project depends on your needs. Right now we'll create a simple project from the first template.

Time for action – installing the templates

Carry out the following steps for installing the templates:

Open the terminal. It is located in Applications | Utilities | Terminal. Assuming that you have uncompressed the folder on your desktop and renamed it as Cocos2d, type the following:

cd desktop/Cocos2d
./install_template.sh

You will see a lot of output in the terminal (as shown in the following screenshot); read it to check whether the templates were installed successfully. If you are getting errors, check whether you have downloaded the files correctly and uncompressed them into the desktop. If the folder is in another place, you may get errors.

We just installed the Xcode templates for Cocos2d. Now, each time you create a new project in Xcode, you will be given the choice of creating a Cocos2d application. We'll see how to do that in a moment. Each time a new version of Cocos2d is released, there is a template file for that version, so you should remember to install the templates each time. If you have a lot of older templates and you want to remove them, you can do so by going to Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application and deleting the respective folder.

What just happened?

Templates are very useful and they are a great starting point for any new project. Having these templates available makes starting a new project an easy task. You will have all the Cocos2d sources arranged in groups, your AppDelegate already configured to make use of the framework, and even a simple starting Layer.

Creating a new project from the templates

Now that you have the templates ready to use, it is time to make your first project.

Time for action – creating a HelloCocos2d project

We are going to create a new project named HelloCocos2d from the templates you have just installed. This won't be anything like a game, but just an introduction on how to get started. The steps are as follows:

Open Xcode and select File | New Project (Shift + cmd + N). Cocos2d templates will appear right there along with the other Xcode project templates, as shown in the following screenshot:

Select Cocos2d-0.99.1 Application. Name the project HelloCocos2d and save it to your Documents folder.

Once you perform the preceding steps, the project you just created should open up. Let's take a look at the important folders that were generated:

Cocos2d Sources: This is where the Cocos2d source code resides. Generally, you won't touch anything from here unless you want to make modifications to the framework or want to know how it works.
Classes: This is where the classes you create will reside. As you may see, two classes were automatically created, thanks to the template. We'll explore them in a moment.
Resources: This is where you will include all the assets needed for your game.

Go ahead and run the application. Click on Build and Go, and congratulate yourself: you have created your first Cocos2d project. Let's stop for a moment and take a look at what was created here. When you run the application, you'll notice a couple of things, as follows:

The Cocos2d for iPhone logo shows up as soon as the application starts. Then a CCLabel is created with the text Hello World.
You can see some numbers in the lower-left corner of the screen. That is the current FPS the game is running at. In a moment, we'll see how this is achieved by taking a look at the generated classes.

What just happened?

We have just created our first project from one of the templates that we installed before. As you can see, using those templates makes starting a Cocos2d project quite easy and lets you get your hands on the actual game's code sooner.

Managing the game with the CCDirector

The CCDirector is the class whose main purpose is scene management. It is responsible for switching scenes, setting the desired FPS, the device orientation, and a lot of other things. The CCDirector is also the class responsible for initializing OpenGL ES.

If you grab an older Cocos2d project, you might notice that all Cocos2d classes have the "CC" prefix missing. Those prefixes were added recently to avoid naming problems. Objective-C doesn't have the concept of namespaces, so if Apple at some point decided to create a Director class, the two would collide.

Types of CCDirectors

There are currently four types of directors; for most applications, you will want to use the default one:

NSTimer Director: It triggers the main loop from an NSTimer object. It is the slowest of the Directors, but it integrates well with UIKit objects. You can also customize the update interval from 1 to 60.
Mainloop Director: It triggers the main loop from a custom main loop. It is faster than the NSTimer Director, but it does not integrate well with UIKit objects, and its update interval can't be customized.
ThreadMainLoop Director: It has the same advantages and limitations as the Mainloop Director. When using this type of Director, the main loop is triggered from a thread, but it will be executed on the main thread.
DisplayLink Director: This type of Director is only available in OS 3.1+. This is the one used by default. It is faster than the NSTimer Director and integrates well with UIKit objects. The update interval can be set to 1/60, 1/30, or 1/15.

Those are the available Directors; most of the time, you will not need to make any changes here.

Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

(For more resources related to this topic, see here.)

Testing activities

There are two possible modes of testing activities:

Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity.
Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation.

In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application.

The test case classes

The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The following figure presents the main classes that are involved when testing activities:

Let's learn more about these classes:

TestCase: This JUnit class belongs to the junit.framework package and represents a general test case. This class is extended by the Android API.
InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation.
ActivityTestCase: This class is used to test activities, but for more useful classes, you should use one of its subclasses instead of the main class.
ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity>

ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows:

public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity>

There is a new term that has emerged from the previous classes: instrumentation.

Instrumentation

The execution of an application is ruled by its life cycle, which is determined by the Android system. For example, the life cycle of an activity is controlled by the invocation of some methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system and your code cannot invoke them, except while testing. The mechanism that allows your test code to invoke callback methods is known as Android instrumentation.

Android instrumentation is a set of methods to control a component independently of its normal lifecycle. To invoke the callback methods from your test code, you have to use the classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method that returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities, considering that the lifecycle of an activity is based on the callback methods.
These callback methods include the UI events as well. From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system's interaction with the application. The complete documentation about this class can be found at http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows:

The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created indicating an IntentFilter or giving the name of the activity to the monitor. Optionally, the monitor can block the activity start to return its canned result. You can use the following call definitions to add a monitor:

ActivityMonitor addMonitor(IntentFilter filter, ActivityResult result, boolean block)
ActivityMonitor addMonitor(String cls, ActivityResult result, boolean block)

The following line is an example line of code to add a monitor:

Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false);

The activity lifecycle methods: The methods to call the activity lifecycle methods are: callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line of code:

getInstrumentation().callActivityOnPause(mActivity);

The getTargetContext method: This method returns the context for the application.
The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method.
The waitForIdleSync method: This method waits for the application to be idle synchronously.

The test case methods

JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses:

setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code should call the superclass. A standard setUp method should follow the given code definition:

@Override
protected void setUp() throws Exception {
    super.setUp();
    // Initialize the fixture state
}

tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after running every test method. If you override this method, the last line of the code should call the superclass, shown as follows:

@Override
protected void tearDown() throws Exception {
    // Tear down the fixture state
    super.tearDown();
}

The fixture state is usually implemented as a group of member variables, but it can also consist of database or network connections. If you open or initialize connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method.

The Assert class and method

JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, an AssertionFailedException is thrown. The test runner will handle the multiple assertion exceptions to present the testing results.
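To see how these pieces fit together, the following is a minimal sketch of a functional activity test; the activity, widget ID, and text value (MainActivity, tvTitle, "Hello World") are hypothetical, chosen only for illustration:

import android.test.ActivityInstrumentationTestCase2;
import android.widget.TextView;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private MainActivity mActivity;
    private TextView mTitle;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // getActivity() launches the activity under test on its first call
        mActivity = getActivity();
        mTitle = (TextView) mActivity.findViewById(R.id.tvTitle);
    }

    public void testPreconditions() {
        assertNotNull(mActivity);
        assertNotNull(mTitle);
    }

    public void testTitleText() {
        assertEquals("Hello World", mTitle.getText().toString());
    }

    @Override
    protected void tearDown() throws Exception {
        // Release any fixture resources here
        super.tearDown();
    }
}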
Optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference for the Assert class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows:

assertEquals: This method checks whether the two values provided are equal. It receives the actual and expected values that are to be compared with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion method throws an exception since both values are not equal:

assertEquals(true, false);

assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false.
assertNull or assertNotNull: These methods check whether an object is null or not.
assertSame or assertNotSame: These methods check whether two references refer to the same object or not.
fail: This method fails a test. It can be used to make sure that a part of the code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet:

try {
    dontAcceptNullValuesMethod(null);
    fail("No exception was thrown");
} catch (NullPointerException e) {
    // OK
}

The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts.

The ViewAsserts class

The assertion methods offered by JUnit's Assert class are not enough if you want to test some special Android objects, such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods related to the Android views, that is, for the View objects. The whole list of assertion methods can be explored in the Android reference for this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows:

assertBottomAligned or assertLeftAligned or assertRightAligned or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively
assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View
assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen
assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference, View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view
assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen
assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible
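As a quick illustration, a test method added to the test case sketched earlier could use these assertions as follows; the two view IDs and the alignment expectation are hypothetical:

import android.test.ViewAsserts;
import android.view.View;

public void testLayout() {
    View origin = mActivity.getWindow().getDecorView();
    View header = mActivity.findViewById(R.id.header); // hypothetical ID
    View footer = mActivity.findViewById(R.id.footer); // hypothetical ID

    // Both views must be laid out somewhere on the screen
    ViewAsserts.assertOnScreen(origin, header);
    ViewAsserts.assertOnScreen(origin, footer);

    // And, in this hypothetical layout, they share the same left edge
    ViewAsserts.assertLeftAligned(header, footer);
}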
The MoreAsserts class

The Android API extends some of the basic assertion methods from the Assert class to present some additional methods. Some of the methods included in the MoreAsserts class are:

assertContainsRegex(String expectedRegex, String actual): This method checks that the actual given string contains a match for the expected regular expression (regex)
assertContentsInAnyOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects in any order
assertContentsInOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects in the same order
assertEmpty: This method checks whether a collection is empty
assertEquals: This method extends the assertEquals method from JUnit to cover collections: Set objects, int arrays, String arrays, Object arrays, and so on
assertMatchesRegex(String expectedRegex, String actual): This method checks whether the actual string exactly matches the expected regex

Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference for the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html.

UI testing and TouchUtils

The test code is executed in a different thread than the application under test, although both threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events from the test thread. There are two strategies to invoke methods that should run in the UI thread:

Activity.runOnUiThread(): This method creates a Runnable object in the UI thread in which you can add the code in the run() method. For example, if you want to request the focus of a UI component:

public void testComponent() {
    mActivity.runOnUiThread(
        new Runnable() {
            public void run() {
                mComponent.requestFocus();
            }
        }
    );
    …
}

@UiThreadTest: This annotation affects the whole method because the entire method is executed on the UI thread. Considering that the annotation refers to an entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows:

@UiThreadTest
public void testComponent() {
    mComponent.requestFocus();
    …
}

There is also a helper class that provides methods to perform touch interactions on the views of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows:

The clickView method: This method simulates a click on the center of a view
The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, and dragViewToTop methods: These methods simulate a click on a UI element and then drag it accordingly
The longClickView method: This method simulates a long press click on the center of a view
The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom
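For instance, a test could combine TouchUtils with an ordinary assertion, as in this minimal sketch; the button ID and the expected label are hypothetical:

import android.test.TouchUtils;
import android.widget.Button;

public void testButtonClick() {
    Button button = (Button) mActivity.findViewById(R.id.bSubmit); // hypothetical ID

    // clickView must be called from the test thread; it synchronizes
    // the generated touch events with the UI thread for us
    TouchUtils.clickView(this, button);

    assertEquals("Submitted", button.getText().toString());
}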
The mock object classes

The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow the isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to test.

In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use.

Creating an activity test

In this section, we will create an example application so that we can learn how to implement the test cases to evaluate it. Some of the methods presented in the previous section will be put into practice. You can download the example code files from your account at http://www.packtpub.com.

Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative one that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The layout looks like the following screenshot:

The SecondActivity receives the hour from the MainActivity and shows its value in a text view, simulating that the alarm was saved. The objective of creating this second activity is to be able to test the launch of another activity in our test case.

Summary

In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

Resources for Article:

Further resources on this subject:
Creating Dynamic UI with Android Fragments [article]
Saying Hello to Unity and Android [article]
Augmented Reality [article]

Prerequisites for a Map Application

Packt
16 Sep 2015
10 min read
In this article by Raj Amal, author of the book Learning Android Google Maps, we will cover the following topics:

Generating an SHA1 fingerprint in Windows, Linux, and Mac OS X
Registering our application in the Google Developer Console
Configuring Google Play services with our application
Adding permissions and defining an API key

Generating the SHA1 fingerprint

Let's learn about generating the SHA1 fingerprint on the different platforms one by one.

Windows

The keytool usually comes with the JDK package. We use the keytool to generate the SHA1 fingerprint. Navigate to the bin directory in your default JDK installation location, which is what you configured in the JAVA_HOME variable, for example, C:\Program Files\Java\jdk1.7.0_71. Then, navigate to File | Open command prompt. Now, the command prompt window will open. Enter the following command, and then hit the Enter key:

keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android

You will see output similar to what is shown here:

Valid from: Sun Nov 02 16:49:26 IST 2014 until: Tue Oct 25 16:49:26 IST 2044
Certificate fingerprints:
MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17
SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33

In the preceding output, note down the SHA1 value, which is required to register our application with the Google Developer Console. The preceding screenshot is representative of the typical output screen that is shown when the preceding command is executed.

Linux

We are going to obtain the SHA1 fingerprint from the debug.keystore file, which is present in the .android folder in your home directory. If you installed Java directly from a PPA, open the terminal and enter the following command:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

This will return an output similar to the one we obtained in Windows. Note down the SHA1 fingerprint, which we will use later. If you've installed Java manually, you'll need to run the keytool from the keytool location. You can export the Java JDK path as follows:

export JAVA_HOME={PATH to JDK}

After exporting the path, run the keytool as follows:

$JAVA_HOME/bin/keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

The output of the preceding command is shown as follows:

Mac OS X

Generating the SHA1 fingerprint in Mac OS X is similar to what you performed in Linux. Open the terminal and enter the following command. It will show output similar to what we obtained in Linux. Note down the SHA1 fingerprint, which we will use later:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Registering your application in the Google Developer Console

This is one of the most important steps in our process. Our application will not function without obtaining an API key from the Google Developer Console. Follow these steps one by one to obtain the API key:

Open the Google Developer Console by visiting https://console.developers.google.com and click on the CREATE PROJECT button. A new dialog box appears. Give your project a name and a unique project ID. Then, click on Create.

As soon as your project is created, you will be redirected to the project dashboard. On the left-hand side, under the APIs & auth section, select APIs.

Then, scroll down and enable Google Maps Android API v2.

Next, under the same APIs & auth section, select Credentials.
Select Create new Key under Public API access, and then select Android key in the following dialog. In the next window, enter the SHA1 fingerprint we noted in the previous section, followed by a semicolon and the package name of the Android application we wish to register. For example, my SHA1 fingerprint value is C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33, and the package name of the app I wish to create is com.raj.map; so, I need to enter the following:

C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33;com.raj.map

You need to enter the value shown in the preceding form. Finally, click on Create. Now our Android application will be registered with the Google Developer Console, and it will display a screen similar to the following one. Note down the API key from the screen, which will be similar to this:

AIzaSyAdJdnEG5vfo925VV2T9sNrPQ_rGgIGnEU

Configuring Google Play services

Google Play services includes the classes required for our map application, so it needs to be set up properly. The setup differs between Eclipse with the ADT plugin and Gradle-based Android Studio. Let's see how to configure Google Play services for each separately; it is relatively simple.

Android Studio

Configuring Google Play services with Android Studio is very simple. You need to add a line of code to your build.gradle file, which contains the Gradle build script required to build our project. There are two build.gradle files. You must add the code to the inner app's build.gradle file. The following screenshot shows the structure of the project:

The code should be added to the second Gradle build file, which contains our app module's configuration. Add the following code to the dependencies section in the Gradle build file:

compile 'com.google.android.gms:play-services:7.5.0'

The structure should be similar to the following code:

dependencies {
    compile 'com.google.android.gms:play-services:7.5.0'
    compile 'com.android.support:appcompat-v7:21.0.3'
}

The 7.5.0 in the code is the version number of Google Play services. Change the version number according to your current version. The current version can be found in the values.xml file present in the res/values directory of the Google Play services library project. The newest version of Google Play services can be found at https://developers.google.com/android/guides/setup.

That's it. Now resync your project. You can sync by navigating to Tools | Android | Sync Project with Gradle Files. Now, Google Play services will be integrated with your project.

Eclipse

Let's take a look at how to configure Google Play services in Eclipse with the ADT plugin. First, we need to import Google Play services into the workspace. Navigate to File | Import and the following window will appear:

In the preceding screenshot, navigate to Android | Existing Android Code Into Workspace. Then click on Next. In the next window, browse to the sdk/extras/google/google_play_services/libproject/google-play-services_lib directory, as shown in the following screenshot:

Finally, click on Finish. Now, google-play-services_lib will be added to your workspace.

Next, let's take a look at how to configure Google Play services with our application project. Select your project, right-click on it, and select Properties. In the Library section, click on Add and choose google-play-services_lib. Then, click on OK.
Now, google-play-services_lib will be added as a library to our application project, as shown in the following screenshot:

In the next section, we will see how to configure the API key and add the permissions that will help us to deploy our application.

Adding permissions and defining the API key

The permissions and API key must be defined in the AndroidManifest.xml file, which provides essential information about the application to the operating system. The OpenGL ES version, which is required to render the map, must be specified in the manifest file, along with the Google Play services version.

Adding permissions

Four permissions are required for our map application to work properly. The permissions should be added inside the <manifest> element. The four permissions are as follows:

INTERNET
ACCESS_NETWORK_STATE
WRITE_EXTERNAL_STORAGE
READ_GSERVICES

Let's take a look at what these permissions are for.

INTERNET

This permission is required for our application to gain access to the Internet. Since Google Maps mainly works on real-time Internet access, the Internet is essential.

ACCESS_NETWORK_STATE

This permission gives information about a network and whether we are connected to a particular network or not.

WRITE_EXTERNAL_STORAGE

This permission is required to write data to an external storage. In our application, it is required to cache map data to the external storage.

READ_GSERVICES

This permission allows you to read Google services. The permissions are added to AndroidManifest.xml as follows:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

There are some more permissions that are currently not required.

Specifying the Google Play services version

The Google Play services version must be specified in the manifest file for the functioning of maps. It must be within the <application> element. Add the following code to AndroidManifest.xml:

<meta-data
    android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version" />

Specifying OpenGL ES version 2

Android Google Maps uses OpenGL to render a map. Google Maps will not work on devices that do not support version 2 of OpenGL. Hence, it is necessary to specify the version in the manifest file. It must be added within the <manifest> element, similar to the permissions. Add the following code to AndroidManifest.xml:

<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true"/>

The preceding code specifies that version 2 of OpenGL is required for the functioning of our application.

Defining the API key

The Google Maps API key is required to provide authorization to the Google Maps service. It must be specified within the <application> element. Add the following code to AndroidManifest.xml:

<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="API_KEY"/>

The API_KEY value must be replaced with the API key we noted earlier from the Google Developer Console.
The complete AndroidManifest structure after adding the permissions, specifying OpenGL and the Google Play services version, and defining the API key is as follows:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.raj.sampleapplication"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-feature
        android:glEsVersion="0x00020000"
        android:required="true"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

    <application>
        <meta-data
            android:name="com.google.android.gms.version"
            android:value="@integer/google_play_services_version" />
        <meta-data
            android:name="com.google.android.maps.v2.API_KEY"
            android:value="AIzaSyBVMWTLk4uKcXSHBJTzrxsrPNSjfL18lk0"/>
    </application>
</manifest>

Summary

In this article, we learned how to generate the SHA1 fingerprint on different platforms, how to register our application in the Google Developer Console, and how to generate an API key. We also configured Google Play services in Android Studio and Eclipse, and added the permissions and other data in the manifest file that are essential to create a map application.

Resources for Article:

Further resources on this subject:
Testing with the Android SDK [article]
Signing an application in Android using Maven [article]
Code Sharing Between iOS and Android [article]

And now for something extra

Packt
25 Aug 2015
9 min read
In this article by Paul F. Johnson, author of the book Cross-platform UI Development with Xamarin.Forms, we'll look at how to add a custom renderer for Windows Phone in particular.

(For more resources related to this topic, see here.)

This article doesn't depend on anything, because there is no requirement to have a Xamarin subscription; the Xamarin.Forms library is available for free via NuGet. All you require is Visual Studio 2013 (or higher) running on Windows 8 (or higher, which is needed for the Windows Phone 8 emulator).

Let's make a start

Before we can create a custom renderer, we have to create something to render. In this case, we need to create a Xamarin.Forms application. For this, create a new project in Visual Studio, as shown in the following screenshot:

Selecting the OK button creates the project. Once the project is created, you will see the following screenshot on the right-hand side:

In the preceding screenshot, there are four projects created:

Portable (also known as the PCL, the portable class library)
Droid (Android 4.0.3 or higher)
iOS (iOS 7 or higher)
Windows Phone (8 or higher). By default, it is 8.0, but it can be set to 8.1

If we expand the WinPhone project and examine References, we will see the following screenshot:

Here, you can see that Xamarin.Forms is already installed. You can also see the link to the PCL at the bottom.

Creating a button

Buttons are available natively in Xamarin.Forms. You can perform some very basic operations on a button (such as assigning text, a Clicked event, and so on). When built, each platform will render its own version of Button. This is how the code looks:

var button = new Button {
    Text = "Hello"
};
button.Clicked += delegate {…};

For our purposes, we don't want a dull standard button; we want a button that looks similar to the following image:

We may also want to do something really different by having a button with both text and an image, where the image and text positions can look similar to the following image, on either side:

Creating the custom button

The first part of creating the button is to create an empty class that inherits Button, as shown in the following code:

using Xamarin.Forms;

namespace CustomRenderer
{
    public class NewButton : Button
    {
        public NewButton()
        {
        }
    }
}

As NewButton inherits Button, it will have all the properties and events that a standard Button has. Therefore, we can use the following code:

var btnLogin = new NewButton() {
    Text = "Login",
};
btnLogin.Clicked += delegate {
    if (!string.IsNullOrEmpty(txtUsername.Text) && !string.IsNullOrEmpty(txtPassword.Text))
        LoginUser(txtUsername.Text, txtPassword.Text);
};

However, the difference here is that, as we are using something that inherits a class, we can use the default renderer or define our own renderer.

The custom renderer

To start with, we need to tell the platform that we will use a custom renderer, as follows:

[assembly: ExportRenderer(typeof(NewButton), typeof(NewButtonRenderer))]
namespace WinPhone
{
    class NewButtonRenderer : ButtonRenderer

We start by saying that we will render the NewButton object from the PCL with the NewButtonRenderer class. The class itself has to inherit ButtonRenderer, which contains the code we need to create the renderer. The next part is to override OnElementChanged. This method is triggered when an element from within the object being worked on changes.
Considerations for Windows Phone

A prime consideration on Windows Phone is that the ViewRenderer base is actually a Canvas that has the control (in this case, a button) on it as a child. This is an advantage for us. If we clear the child from the canvas, the canvas can be manipulated, and the button can be added back. It is important to remember that we are dealing with two distinct entities, and each has its own properties. For example, the white rectangle that surrounds a Windows Phone button is part of the control, whereas the color and styling are part of the canvas, as shown in the following code:

protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Button> e)
{
    base.OnElementChanged(e);
    if (Control != null)
    {
        // clear the children of the canvas. We are not deleting the button.
        Children.Clear();

        // create the new background
        var border = new Border
        {
            CornerRadius = new System.Windows.CornerRadius(10),
            Background = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 130, 186, 132)),
            BorderBrush = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 45, 176, 51)),
            BorderThickness = new System.Windows.Thickness(0.8),
            Child = Control // this adds the control back to the border
        };

        Control.Foreground = new SolidColorBrush(Colors.White); // make the text white
        Control.BorderThickness = new System.Windows.Thickness(0); // remove the button border that is always there

        Children.Add(border); // add the border to the canvas. Remember, this also contains the Control
    }
}

When compiled, the UI will give you a button, as shown in the following image. I'm sure you'll agree, it's much nicer than the standard Windows Phone button.

The sound of music

An image button is also fairly simple to create. Again, create a new Xamarin.Forms project in Visual Studio. Once created, as we did before, create a new empty class that inherits Button. Why is it empty? Unfortunately, it's not that simple to pass additional properties with a custom renderer, so to ensure an easier life, the class just inherits the base class, and anything else that needs to go to the renderer is accessed through a pointer to the app.

Setting up the PCL code

In the PCL, we will have the following code:

App.text = "This is a cow";
App.filename = "cow.png";
App.onTheLeft = true;

var btnComposite = new NewCompositeButton(){
};

Text, filename, and onTheLeft are defined in the App class and are accessed from the PCL using CompositeUI.App.filename (CompositeUI is the namespace I've used). The PCL is now set up, so the renderer is needed.

The Windows Phone renderer

As before, we need to tell the platform that we will use our own renderer and override the default OnElementChanged event, as shown in the following code:

[assembly: ExportRenderer(typeof(NewCompositeButton), typeof(NewCompositeButtonRenderer))]
namespace WinPhone
{
    class NewCompositeButtonRenderer : ButtonRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Button> e)
        {
            base.OnElementChanged(e);

As with the first example, we will deal with a base class that is a Canvas with a single child. This child needs to be removed from the canvas before it can be manipulated, as follows:

Children.Clear();

Our next problem is that we have an image and text.

Accessing the image

It is recommended that images are kept either in the Assets directory of the project or in the dedicated Images directory. For my example, my image is in Assets.
To create the image, we need to create a bitmap image, set the source, and finally assign it to an image (for good measure, a small amount of padding is also added), as follows:

var bitmap = new BitmapImage();
bitmap.SetSource(App.GetResourceStream(new Uri(@"Assets/" + CompositeUI.App.filename, UriKind.Relative)).Stream);
var image = new System.Windows.Controls.Image
{
    Source = bitmap,
    Margin = new System.Windows.Thickness(8, 0, 8, 0)
};

Adding the image to the button

We now have a problem. If we add the image directly to the canvas, we can't specify whether it is on the left-hand side or on the right-hand side of the text. Moreover, how do you add the image to the canvas? Yes, you can use the Child property, but this still leaves the issue of position.

Thankfully, Windows Phone provides a StackPanel class. If you think of a stack panel as a set of ladders, you will quickly understand how it works. A ladder can be vertical or horizontal. If it's vertical, each object is directly before or after the other. If it is horizontal, each object is either at the left-hand side or the right-hand side of the other. With the Orientation property of the StackPanel class, we can create a horizontal or vertical ladder for whatever we need. In the case of the button, we want the panel to be horizontal, as shown in the following code:

var panel = new StackPanel
{
    Orientation = Orientation.Horizontal,
};

Then, we can set the text for the button and any other attributes:

Control.Foreground = new SolidColorBrush(Colors.White);
Control.BorderThickness = new System.Windows.Thickness(0);
Control.Content = CompositeUI.App.text;

Note that there isn't a Text property for the button on Windows Phone. Its equivalent is Content. Our next step is to decide which side the image goes on and add it to the panel, as shown in the following code:

if (CompositeUI.App.onTheLeft)
{
    panel.Children.Add(image);
    panel.Children.Add(Control);
}
else
{
    panel.Children.Add(Control);
    panel.Children.Add(image);
}

We can now create the border and add the panel as the child:

var border = new Border
{
    CornerRadius = new System.Windows.CornerRadius(10),
    Background = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 130, 186, 132)),
    BorderBrush = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 45, 176, 51)),
    BorderThickness = new System.Windows.Thickness(0.8),
    Child = panel
};

Lastly, add the border to the canvas:

Children.Add(border);

We now have a button with an image and text on it, as shown in the following image:

This rendering technique can also be applied to Lists and anywhere else required. It's not difficult; it's just not as obvious as it really should be.

Summary

Creating styled buttons is certainly work for the platform renderer, but the basics are there in the PCL. The code is not difficult to understand, and once you've used it a few times, you'll find that styling buttons to create attractive user interfaces is not such a big effort. Xamarin.Forms will always help you create your UI, but at the end of the day, it's only you who can make it stand out.

Resources for Article:

Further resources on this subject:
Configuring Your Operating System [article]
Heads up to MvvmCross [article]
Code Sharing Between iOS and Android [article]
Styling the User Interface

Packt
22 Nov 2013
8 min read
(For more resources related to this topic, see here.)

Styling components versus themes

Before we get into this article, it's important to have a good understanding of the difference between styling an individual component and creating a theme. Almost every display component in Sencha Touch has the option to set its own style. For example, a panel component can use a style in this way:

{
    xtype: 'panel',
    style: 'border: none; font: 12px Arial black',
    html: 'Hello World'
}

The style can also be set as an object using:

{
    xtype: 'panel',
    style: {
        'border': 'none',
        'font': '12px Arial black',
        'border-left': '1px solid black'
    },
    html: 'Hello World'
}

You will notice that inside the style block, we have quoted both sides of the configuration setting. This is still correct syntax for JavaScript and a very good habit to get into when using style blocks. This is because a number of standard CSS styles use a dash as part of their name. If we do not add quotes to border-left, JavaScript will read this as border minus left and promptly collapse in a pile of errors.

We can also set a style class for a component and use an external CSS file to define the class as follows:

{
    xtype: 'panel',
    cls: 'myStyle',
    html: 'Hello World'
}

Your external CSS file could then control the style of the component in the following manner:

.myStyle {
    border: none;
    font: 12px Arial black;
}

This class-based control of display is considered a best practice, as it separates the style logic from the display logic. This means that when you need to change a border color, it can be done in one file instead of hunting through multiple files for individual style settings. These styling options are very useful for controlling the display of individual components. There are also certain style elements, such as border, padding, and margin, that can be set directly in the component's configuration:

{
    xtype: 'panel',
    bodyMargin: '10 5 5 5',
    bodyBorder: '1px solid black',
    bodyPadding: 5,
    html: 'Hello World'
}

These configurations can accept either a number to be applied to all sides or a CSS string value, such as 1px solid black or 10 5 5 5. The number should be entered without quotes, but the CSS string values need to be within quotes. These kinds of small changes can be helpful in styling your application, but what if you need to do something a bit bigger? What if you want to change the color or appearance of the entire application? What if you want to create your own default style for your buttons? This is where themes and UI styles come into play.

UI styling for toolbars and buttons

Let's do a quick review of the basic MVC application we created in Creating a Simple Application, and use it to start our exploration of styles with toolbars and buttons. To begin, we are going to add a few things to the first panel, which has our titlebar, toolbar, and Hello World text.

Adding the toolbar

In app/views, you'll find Main.js. Go ahead and open that in your editor and take a look at the first panel in our items list:

items: [
    {
        title: 'Hello',
        iconCls: 'home',
        xtype: 'panel',
        html: 'Hello World',
        items: [
            {
                xtype: 'titlebar',
                docked: 'top',
                title: 'About TouchStart'
            }
        ]
    }...

We're going to add a second toolbar on top of the existing one. Locate the items section, and after the curly braces for our first toolbar, add the second toolbar in the following manner:

{
    xtype: 'titlebar',
    docked: 'top',
    title: 'About TouchStart'
},
{
    docked: 'top',
    xtype: 'toolbar',
    items: [
        { text: 'My Button' }
    ]
}

Don't forget to add a comma between the two toolbars.
Extra or missing commas While working in Sencha Touch, one of the most common causes of parse errors is an extra or missing comma. When you are moving the code around, always make sure you have accounted for any stray or missing commas. Fortunately for us, the Safari Error Console will usually give us a pretty good idea about the line number to look at for these types of parse errors. A more detailed list of common errors can be found at: http://javascript.about.com/od/reference/a/error.htm Now when you take a look at the first tab, you should see our new toolbar with our button to the left. Since the toolbars both have the same background, they are a bit difficult to differentiate. So, we are going to change the appearance of the bottom bar using the ui configuration option: { docked: 'top', xtype: 'toolbar', ui: 'light', items: [ {text: 'My Button'} ] } The ui configuration is the shorthand for a particular set of styles in Sencha Touch. There are several ui styles included with Sencha Touch, and later on, we will show you how to make your own. Styling buttons Buttons can also use the ui configuration setting, for which they offer several different options: normal: This is the default button back: This is a button with the left side narrowed to a point round: This is a more drastically rounded button small: This is a smaller button action: This is a brighter version of the default button (the color varies according to the active color of the theme, which we will see later) forward: This is a button with the right side narrowed to a point Buttons also have some color options built into the ui option. These color options are confirm and decline. These options are combined with the previous shape options using a hyphen; for example, confirm-small or decline-round. Let's add some new buttons and see how this looks on our screen. Locate the items list with our button in the second toolbar: items: [ {text: 'My Button'} ] Replace that old items list with the following new items list: items: [ { text: 'Back', ui: 'back' }, { text: 'Round', ui: 'round' }, { text: 'Small', ui: 'small' }, { text: 'Normal', ui: 'normal' }, { text: 'Action', ui: 'action' }, { text: 'Forward', ui: 'forward' } ] This will produce a series of buttons across the top of our toolbar. As you may notice, all of our buttons are aligned to the left. You can move buttons to the right by adding a spacer xtype in front of the buttons you want pushed to the right. Try this by adding the following between our Forward and Action buttons: { xtype: 'spacer'}, This will make the Forward button move over to the right-hand side of the toolbar: Since buttons can actually be used anywhere, we can add some to our title bar and use the align property to control where they appear. Modify the titlebar for our first panel and add an items section, as shown in the following code: { xtype: 'titlebar', docked: 'top', title: 'About TouchStart', items: [ { xtype: 'button', text: 'Left', align: 'left' }, { xtype: 'button', text: 'Right', align: 'right' } ] } Now we should have two buttons in our title bar, one on either side of the title: Let's also add some buttons to the panel container so we can see what the ui options confirm and decline look like. 
Locate the end of the items section of our HelloPanel container and add the following after the second toolbar: { xtype: 'button', text: 'Confirm', ui: 'confirm', width: 100 }, { xtype: 'button', text: 'Decline', ui: 'decline', width: 100 } There are two things you may notice that differentiate our panel buttons from our toolbar buttons. The first is that we declare xtype:'button' in our panel but we don't in our toolbar. This is because the toolbar assumes it will contain buttons and xtype only has to be declared if you use something other than a button. The panel does not set a default xtype attribute, so every item in the panel must declare one. The second difference is that we declare width for the buttons. If we don't declare width when we use a button in a panel, it will expand to the full width of the panel. On the toolbar, the button auto-sizes itself to fit the text. You will also see that our two buttons in the panel are mashed together. You can separate them out by adding margin: 5 to each of the button configuration sections. These simple styling options can help make your application easier to navigate and provide the user with visual clues for important or potentially destructive actions. The tab bar The tab bar at the bottom also understands the ui configuration option. In this case, the available options are light and dark. The tab bar also changes the icon appearance based on the ui option; a light toolbar will have dark icons and a dark toolbar will have light icons. These icons are actually part of a special font called Pictos. Sencha Touch started using the Pictos font in Version 2.2 instead of images icons because of compatibility issues on some mobile devices. The icon mask from previous versions of Sencha Touch is available but has been discontinued as of Version 2.2. You can see some of the icons available in the documentation for the Ext.Button component: http://docs.sencha.com/touch/2.2.0/#!/api/Ext.Button If you're curious about the Pictos font, you can learn more about it at http://pictos.cc/
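To pull these ui options together, here is a minimal, self-contained sketch, assuming Sencha Touch 2.x; the component layout and icon classes are illustrative rather than taken from the TouchStart app:

// A hedged sketch: a dark tab bar plus ui-styled buttons in a light toolbar.
Ext.application({
    name: 'UIStyleDemo',
    launch: function () {
        Ext.create('Ext.TabPanel', {
            fullscreen: true,
            tabBarPosition: 'bottom',
            tabBar: { ui: 'dark' }, // dark tab bar, so the Pictos icons render light
            items: [{
                title: 'Hello',
                iconCls: 'home',
                items: [{
                    xtype: 'toolbar',
                    docked: 'top',
                    ui: 'light',
                    items: [
                        { text: 'Save', ui: 'confirm-round' },  // combined color-shape ui
                        { xtype: 'spacer' },                    // pushes the next button right
                        { text: 'Delete', ui: 'decline-small' }
                    ]
                }]
            }]
        });
    }
});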
Applications of Physics

Packt
18 Feb 2013
16 min read
(For more resources related to this topic, see here.) Introduction to the Box2D physics extension Physics-based games are one of the most popular types of games available for mobile devices. AndEngine allows the creation of physics-based games with the Box2D extension. With this extension, we can construct any type of physically realistic 2D environment from small, simple simulations to complex games. In this recipe, we will create an activity that demonstrates a simple setup for utilizing the Box2D physics engine extension. Furthermore, we will use this activity for the remaining recipes in this article. Getting ready... First, create a new activity class named PhysicsApplication that extends BaseGameActivity and implements IAccelerationListener and IOnSceneTouchListener. How to do it... Follow these steps to build our PhysicsApplication activity class: Create the following variables in the class: public static int cameraWidth = 800; public static int cameraHeight = 480; public Scene mScene; public FixedStepPhysicsWorld mPhysicsWorld; public Body groundWallBody; public Body roofWallBody; public Body leftWallBody; public Body rightWallBody; We need to set up the foundation of our activity. To start doing so, place these four, common overridden methods in the class to set up the engine, resources, and the main scene: @Override public Engine onCreateEngine(final EngineOptions pEngineOptions) { return new FixedStepEngine(pEngineOptions, 60); } @Override public EngineOptions onCreateEngineOptions() { EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR, new FillResolutionPolicy(), new Camera(0,0, cameraWidth, cameraHeight)); engineOptions.getRenderOptions().setDithering(true); engineOptions.getRenderOptions(). getConfigChooserOptions() .setRequestedMultiSampling(true); engineOptions.setWakeLockOptions( WakeLockOptions.SCREEN_ON); return engineOptions; } @Override public void onCreateResources(OnCreateResourcesCallback pOnCreateResourcesCallback) { pOnCreateResourcesCallback. 
onCreateResourcesFinished();
}

@Override
public void onCreateScene(OnCreateSceneCallback pOnCreateSceneCallback) {
    mScene = new Scene();
    mScene.setBackground(new Background(0.9f, 0.9f, 0.9f));
    pOnCreateSceneCallback.onCreateSceneFinished(mScene);
}

Continue setting up the activity by adding the following overridden method, which will be used to populate our scene:

@Override
public void onPopulateScene(Scene pScene, OnPopulateSceneCallback pOnPopulateSceneCallback) {
}

Next, we will fill the previous method with the following code to create our PhysicsWorld object and Scene object:

mPhysicsWorld = new FixedStepPhysicsWorld(60, new Vector2(0f, -SensorManager.GRAVITY_EARTH * 2), false, 8, 3);
mScene.registerUpdateHandler(mPhysicsWorld);
final FixtureDef WALL_FIXTURE_DEF = PhysicsFactory.createFixtureDef(0, 0.1f, 0.5f);
final Rectangle ground = new Rectangle(cameraWidth / 2f, 6f, cameraWidth - 4f, 8f, this.getVertexBufferObjectManager());
final Rectangle roof = new Rectangle(cameraWidth / 2f, cameraHeight - 6f, cameraWidth - 4f, 8f, this.getVertexBufferObjectManager());
final Rectangle left = new Rectangle(6f, cameraHeight / 2f, 8f, cameraHeight - 4f, this.getVertexBufferObjectManager());
final Rectangle right = new Rectangle(cameraWidth - 6f, cameraHeight / 2f, 8f, cameraHeight - 4f, this.getVertexBufferObjectManager());
ground.setColor(0f, 0f, 0f);
roof.setColor(0f, 0f, 0f);
left.setColor(0f, 0f, 0f);
right.setColor(0f, 0f, 0f);
groundWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, ground, BodyType.StaticBody, WALL_FIXTURE_DEF);
roofWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, roof, BodyType.StaticBody, WALL_FIXTURE_DEF);
leftWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, left, BodyType.StaticBody, WALL_FIXTURE_DEF);
rightWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, right, BodyType.StaticBody, WALL_FIXTURE_DEF);
this.mScene.attachChild(ground);
this.mScene.attachChild(roof);
this.mScene.attachChild(left);
this.mScene.attachChild(right);
// Further recipes in this chapter will require us to place code here.
mScene.setOnSceneTouchListener(this);
pOnPopulateSceneCallback.onPopulateSceneFinished();

The following overridden methods handle the scene touch events, the accelerometer input, and the two engine life cycle events, onResumeGame and onPauseGame. Place them at the end of the class to finish this recipe:

@Override
public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
    // Further recipes in this chapter will require us to place code here.
    return true;
}

@Override
public void onAccelerationAccuracyChanged(AccelerationData pAccelerationData) {}

@Override
public void onAccelerationChanged(AccelerationData pAccelerationData) {
    final Vector2 gravity = Vector2Pool.obtain(pAccelerationData.getX(), pAccelerationData.getY());
    this.mPhysicsWorld.setGravity(gravity);
    Vector2Pool.recycle(gravity);
}

@Override
public void onResumeGame() {
    super.onResumeGame();
    this.enableAccelerationSensor(this);
}

@Override
public void onPauseGame() {
    super.onPauseGame();
    this.disableAccelerationSensor();
}

How it works...

The first thing that we do is define a camera width and height. Then, we define a Scene object and a FixedStepPhysicsWorld object in which the physics simulations will take place. The last set of variables defines what will act as the borders for our physics-based scenes. In the second step, we override the onCreateEngine() method to return a FixedStepEngine object that will process 60 updates per second.
The reason that we do this, while also using a FixedStepPhysicsWorld object, is to create a simulation that will be consistent across all devices, regardless of how efficiently a device can process the physics simulation. We then create the EngineOptions object with standard preferences, create the onCreateResources() method with only a simple callback, and set the main scene with a light-gray background. In the onPopulateScene() method, we create our FixedStepPhysicsWorld object that has double the gravity of the Earth, passed as an (x,y) coordinate Vector2 object, and will update 60 times per second. The gravity can be set to other values to make our simulations more realistic or 0 to create a zero gravity simulation. A gravity setting of 0 is useful for space simulations or for games that use a top-down camera view instead of a profile. The false Boolean parameter sets the AllowSleep property of the PhysicsWorld object, which tells PhysicsWorld to not let any bodies deactivate themselves after coming to a stop. The last two parameters of the FixedStepPhysicsWorld object tell the physics engine how many times to calculate velocity and position movements. Higher iterations will create simulations that are more accurate, but can cause lag or jitteriness because of the extra load on the processor. After creating the FixedStepPhysicsWorld object, we register it with the main scene as an update handler. The physics world will not run a simulation without being registered. The variable WALL_FIXTURE_DEF is a fixture definition. Fixture definitions hold the shape and material properties of entities that will be created within the physics world as fixtures. The shape of a fixture can be either circular or polygonal. The material of a fixture is defined by its density, elasticity, and friction, all of which are required when creating a fixture definition. Following the creation of the WALL_FIXTURE_DEF variable, we create four rectangles that will represent the locations of the wall bodies. A body in the Box2D physics world is made of fixtures. While only one fixture is necessary to create a body, multiple fixtures can create complex bodies with varying properties. Further along in the onPopulateScene() method, we create the box bodies that will act as our walls in the physics world. The rectangles that were previously created are passed to the bodies to define their position and shape. We then define the bodies as static, which means that they will not react to any forces in the physics simulation. Lastly, we pass the wall fixture definition to the bodies to complete their creation. After creating the bodies, we attach the rectangles to the main scene and set the scene's touch listener to our activity, which will be accessed by the onSceneTouchEvent() method. The final line of the onPopulateScene() method tells the engine that the scene is ready to be shown. The overridden onSceneTouchEvent() method will handle all touch interactions for our scene. The onAccelerationAccuracyChanged() and onAccelerationChanged() methods are inherited from the IAccelerationListener interface and allow us to change the gravity of our physics world when the device is tilted, rotated, or panned. We override onResumeGame() and onPauseGame() to keep the accelerometer from using unnecessary battery power when our game activity is not in the foreground. There's more... In the overridden onAccelerationChanged() method, we make two calls to the Vector2Pool class. 
The Vector2Pool class simply gives us a way of re-using our Vector2 objects that might otherwise require garbage collection by the system. On newer devices, the Android Garbage Collector has been streamlined to reduce noticeable hiccups, but older devices might still experience lag, depending on how much memory the variables being garbage collected occupy.

Visit http://www.box2d.org/manual.html to see the Box2D User Manual. The AndEngine Box2D extension is based on a Java port of the official Box2D C++ physics engine, so some variations in procedure exist, but the general concepts still apply.

See also

Understanding different body types in this article.

Understanding different body types

The Box2D physics world gives us the means to create different body types that allow us to control the physics simulation. We can generate dynamic bodies that react to forces and other bodies, static bodies that do not move, and kinematic bodies that move but are not affected by forces or other bodies. Choosing which type each body will be is vital to producing an accurate physics simulation. In this recipe, we will see how three bodies react to each other during collision, depending on their body types.

Getting ready...

Follow the recipe in the Introduction to the Box2D physics extension section given at the beginning of this article to create a new activity that will facilitate the creation of our bodies with varying body types.

How to do it...

Complete the following steps to see how specifying a body type for bodies affects them:

First, insert the following fixture definition into the onPopulateScene() method:

FixtureDef BoxBodyFixtureDef = PhysicsFactory.createFixtureDef(20f, 0f, 0.5f);

Next, place the following code, which creates three rectangles and their corresponding bodies, after the fixture definition from the previous step:

Rectangle staticRectangle = new Rectangle(cameraWidth / 2f, 75f, 400f, 40f, this.getVertexBufferObjectManager());
staticRectangle.setColor(0.8f, 0f, 0f);
mScene.attachChild(staticRectangle);
PhysicsFactory.createBoxBody(mPhysicsWorld, staticRectangle, BodyType.StaticBody, BoxBodyFixtureDef);

Rectangle dynamicRectangle = new Rectangle(400f, 120f, 40f, 40f, this.getVertexBufferObjectManager());
dynamicRectangle.setColor(0f, 0.8f, 0f);
mScene.attachChild(dynamicRectangle);
Body dynamicBody = PhysicsFactory.createBoxBody(mPhysicsWorld, dynamicRectangle, BodyType.DynamicBody, BoxBodyFixtureDef);
mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(dynamicRectangle, dynamicBody));

Rectangle kinematicRectangle = new Rectangle(600f, 100f, 40f, 40f, this.getVertexBufferObjectManager());
kinematicRectangle.setColor(0.8f, 0.8f, 0f);
mScene.attachChild(kinematicRectangle);
Body kinematicBody = PhysicsFactory.createBoxBody(mPhysicsWorld, kinematicRectangle, BodyType.KinematicBody, BoxBodyFixtureDef);
mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(kinematicRectangle, kinematicBody));

Lastly, add the following code after the definitions from the previous step to set the linear and angular velocities for our kinematic body:

kinematicBody.setLinearVelocity(-2f, 0f);
kinematicBody.setAngularVelocity((float) (-Math.PI));

How it works...

In the first step, we create the BoxBodyFixtureDef fixture definition that we will use when creating our bodies in the second step. For more information on fixture definitions, see the Introduction to the Box2D physics extension recipe in this article. In step two, we first define the staticRectangle rectangle by calling the Rectangle constructor.
We place staticRectangle at the position of cameraWidth / 2f, 75f, which is near the lower-center of the scene, and we set the rectangle to have a width of 400f and a height of 40f, which makes the rectangle into a long, flat bar. Then, we set the staticRectangle rectangle's color to red by calling staticRectangle.setColor(0.8f, 0f, 0f). Lastly, for the staticRectangle rectangle, we attach it to the scene by calling the mScene.attachChild() method with staticRectangle as the parameter. Next, we create a body in the physics world that matches our staticRectangle. To do this, we call the PhysicsFactory.createBoxBody() method with the parameters of mPhysicsWorld, which is our physics world, staticRectangle to tell the box to be created with the same position and size as the staticRectangle rectangle, BodyType.StaticBody to define the body as static, and our BoxBodyFixtureDef fixture definition.

Our next rectangle, dynamicRectangle, is created at the location of 400f and 120f, which is the middle of the scene, slightly above the staticRectangle rectangle. Our dynamicRectangle rectangle's width and height are set to 40f to make it a small square. Then, we set its color to green by calling dynamicRectangle.setColor(0f, 0.8f, 0f) and attach it to our scene using mScene.attachChild(dynamicRectangle). Next, we create the dynamicBody variable using the PhysicsFactory.createBoxBody() method in the same way that we did for our staticRectangle rectangle. Notice that we set the dynamicBody variable to have a BodyType of DynamicBody. This sets the body to be dynamic. Now, we register a PhysicsConnector with the physics world to link dynamicRectangle and dynamicBody. A PhysicsConnector class links an entity within our scene to a body in the physics world, representing the body's real-time position and rotation in our scene.

Our last rectangle, kinematicRectangle, is created at the location of 600f and 100f, which places it on top of our staticRectangle rectangle toward the right-hand side of the scene. It is set to have a height and width of 40f, which makes it a small square like our dynamicRectangle rectangle. We then set the kinematicRectangle rectangle's color to yellow and attach it to our scene. Similar to the previous two bodies that we created, we call the PhysicsFactory.createBoxBody() method to create our kinematicBody variable. Take note that we create our kinematicBody variable with a BodyType of KinematicBody. This sets it to be kinematic and thus moved only by the setting of its velocities. Lastly, we register a PhysicsConnector class between our kinematicRectangle rectangle and our kinematicBody body.

In the last step, we set our kinematicBody body's linear velocity by calling the setLinearVelocity() method with a vector of -2f on the x axis, which makes it move to the left. Finally, we set our kinematicBody body's angular velocity to negative pi by calling kinematicBody.setAngularVelocity((float) (-Math.PI)). For more information on setting a body's velocities, see the Using forces, velocities, and torque recipe in this article.

There's more...

Static bodies cannot be moved by applied or set forces, but they can be relocated using the setTransform() method. However, we should avoid using the setTransform() method while a simulation is running, because it makes the simulation unstable and can cause some strange behaviors.
Instead, if we want to change the position of a static body, we can do so when creating the simulation or, if we need to change the position at runtime, simply check that the new position will not cause the static body to overlap existing dynamic bodies or kinematic bodies. Kinematic bodies cannot have forces applied, but we can set their velocities via the setLinearVelocity() and setAngularVelocity() methods.

See also

Introduction to the Box2D physics extension in this article.
Using forces, velocities, and torque in this article.

Creating category-filtered bodies

Depending on the type of physics simulation that we want to achieve, controlling which bodies are capable of colliding can be very beneficial. In Box2D, we can assign a category and a category filter to fixtures to control which fixtures can interact. This recipe will cover the definition of two category-filtered fixtures that will be applied to bodies created by touching the scene, to demonstrate category filtering.

Getting ready...

Create an activity by following the steps in the Introduction to the Box2D physics extension section given at the beginning of the article. This activity will facilitate the creation of the category-filtered bodies used in this section.

How to do it...

Follow these steps to build our category-filtering demonstration activity:

Define the following class-level variables within the activity:

private int mBodyCount = 0;
public static final short CATEGORYBIT_DEFAULT = 1;
public static final short CATEGORYBIT_RED_BOX = 2;
public static final short CATEGORYBIT_GREEN_BOX = 4;
public static final short MASKBITS_RED_BOX = CATEGORYBIT_DEFAULT + CATEGORYBIT_RED_BOX;
public static final short MASKBITS_GREEN_BOX = CATEGORYBIT_DEFAULT + CATEGORYBIT_GREEN_BOX;
public static final FixtureDef RED_BOX_FIXTURE_DEF = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f, false, CATEGORYBIT_RED_BOX, MASKBITS_RED_BOX, (short)0);
public static final FixtureDef GREEN_BOX_FIXTURE_DEF = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f, false, CATEGORYBIT_GREEN_BOX, MASKBITS_GREEN_BOX, (short)0);

Next, create this method within the class that generates new category-filtered bodies at a given location:

private void addBody(final float pX, final float pY) {
    this.mBodyCount++;
    final Rectangle rectangle = new Rectangle(pX, pY, 50f, 50f, this.getVertexBufferObjectManager());
    rectangle.setAlpha(0.5f);
    final Body body;
    if (this.mBodyCount % 2 == 0) {
        rectangle.setColor(1f, 0f, 0f);
        body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, rectangle, BodyType.DynamicBody, RED_BOX_FIXTURE_DEF);
    } else {
        rectangle.setColor(0f, 1f, 0f);
        body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, rectangle, BodyType.DynamicBody, GREEN_BOX_FIXTURE_DEF);
    }
    this.mScene.attachChild(rectangle);
    this.mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(rectangle, body, true, true));
}

Lastly, fill the body of the onSceneTouchEvent() method with the following code, which calls the addBody() method, passing the touched location:

if (this.mPhysicsWorld != null)
    if (pSceneTouchEvent.isActionDown())
        this.addBody(pSceneTouchEvent.getX(), pSceneTouchEvent.getY());

How it works...

In the first step, we create an integer, mBodyCount, which counts how many bodies we have added to the physics world. The mBodyCount integer is used in the second step to determine which color, and thus which category, should be assigned to the new body.
We also create the CATEGORYBIT_DEFAULT, CATEGORYBIT_RED_BOX, and CATEGORYBIT_GREEN_BOX category bits by defining them with unique power-of-two short integers, and the MASKBITS_RED_BOX and MASKBITS_GREEN_BOX mask bits by adding their associated category bits together. The category bits are used to assign a category to a fixture, while the mask bits combine the different category bits to determine which categories a fixture can collide with. We then pass the category bits and mask bits to the fixture definitions to create fixtures that have category collision rules.

The second step is a simple method that creates a rectangle and its corresponding body. The method takes the X and Y location parameters that we want to use to create a new body and passes them to a Rectangle object's constructor, to which we also pass a height and width of 50f and the activity's VertexBufferObjectManager. Then, we set the rectangle to be 50 percent transparent using the rectangle.setAlpha() method. After that, we define a body and modulate the mBodyCount variable by 2 to determine the color and fixture of every other created body. After determining the color and fixture, we assign them by setting the rectangle's color and creating a body by passing our mPhysicsWorld physics world, the rectangle, a dynamic body type, and the previously determined fixture definition. Finally, we attach the rectangle to our scene and register a PhysicsConnector class to connect the rectangle to our body.

The third step calls the addBody() method from step two only if the physics world has been created and only if the scene's TouchEvent is ActionDown. The parameters that are passed, pSceneTouchEvent.getX() and pSceneTouchEvent.getY(), represent the location on the scene that received a touch input, which is also the location where we want to create a new category-filtered body.

There's more...

The default category of all fixtures has a value of one. When creating mask bits for specific fixtures, remember that any combination that includes the default category will cause the fixture to collide with all other fixtures that are not masked to avoid collision with the fixture.

See also

Introduction to the Box2D physics extension in this article.
Understanding different body types in this article.

Cocos2d for iPhone: Handling Accelerometer Input and Detecting Collisions

Packt
30 Dec 2010
4 min read
Cocos2d for iPhone 0.99 Beginner's Guide

Make mind-blowing 2D games for iPhone with this fast, flexible, and easy-to-use framework!

A cool guide to learning cocos2d with iPhone to get you into the iPhone game industry quickly
Learn all the aspects of cocos2d while building three different games
Add a lot of trendy features such as particles and tilemaps to your games to captivate your players
Full of illustrations, diagrams, and tips for building iPhone games, with clear step-by-step instructions and practical examples

Read more about this book

(For more resources on Cocos2d, see here.)

Handling accelerometer input

Now that we have our three basic elements roughly defined, let's focus on the player's interaction with the hero. We will start by allowing the player to move the hero using the device's built-in accelerometer. The accelerometer opens a new world of interaction between the user and the device, allowing for a lot of interesting uses. There are already lots of applications and games that use this feature in innovative ways. For example, in the game Rolando, players have to roll the main character around using the accelerometer, and here we will be doing something similar to that. The accelerometer gives you data about the current tilt of the device along three axes: x, y, and z. Depending on the game you are making, you may need all of that data or just the values for one or two of the axes. Fortunately, Apple has made it very easy for developers to access the data provided by the accelerometer hardware.

Time for action – moving your hero with the accelerometer

We have to add a few lines of code in order to have our hero move. The end result will be the hero moving horizontally when the user tilts the device to the left or to the right. We will also perform some checks in order to avoid the hero moving out of the screen. The first step involved in using the accelerometer is to enable it for the CCLayers that want to receive its input. Add the following line of code to the GameLayer's init method:

self.isAccelerometerEnabled = YES;

Then we have to set the update interval for the accelerometer. This will set the interval at which the hardware delivers the data to the GameLayer. Add the following line afterwards:

[[UIAccelerometer sharedAccelerometer] setUpdateInterval:(1.0 / 60)];

Now that our GameLayer is prepared to receive accelerometer input, we just have to implement the method that receives and handles it. The following is the method that receives the input and makes our hero move.
Add it to the GameLayer class:

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    static float prevX = 0, prevY = 0;
#define kFilterFactor 0.05f
    float accelX = (float)acceleration.x * kFilterFactor + (1 - kFilterFactor) * prevX;
    prevX = accelX;
    // If the hero object exists, we use the calculated accelerometer values to move him
    if (hero) {
        // We calculate the speed he will have and constrain it
        // so that he doesn't move faster than he is allowed to
        float speed = -20 * -accelX;
        if (speed > hero.movementSpeed)
            speed = hero.movementSpeed;
        else if (speed < -hero.movementSpeed)
            speed = -hero.movementSpeed;
        // We also check that the hero won't go past the borders of the screen;
        // if he would exit the screen, we don't move him
        if ((accelX > 0 || hero.mySprite.position.x > hero.mySprite.textureRect.size.width / 2) &&
            (accelX < 0 || hero.mySprite.position.x < 320 - hero.mySprite.textureRect.size.width / 2)) {
            [hero.mySprite setPosition:ccp(hero.mySprite.position.x + speed, hero.mySprite.position.y)];
        }
    }
}

Run the game now. You should be able to move the hero by tilting your device, as shown in the following screenshot:

The iPhone simulator does not provide a way to simulate accelerometer input, so you won't be able to test it in there; the hero won't move at all.

What just happened?

Accelerometer input can be handled quite easily and quickly. In our example, we just used the x component of the accelerometer input to control the hero's position. The accelX variable holds the current value of the x component. This value ranges from -1 to 1, so we multiply it by the speed of the hero, then we just add that result to the current position of the hero, giving the sense of motion. We are also checking to make sure that the hero is not going off the screen before applying that movement.
Social Networks

Packt
29 Oct 2013
15 min read
(For more resources related to this topic, see here.) One window to rule them all Since we want to give our users the ability to post their messages on multiple social networks at once, it makes perfect sense to keep our whole application in a single window. It will be comprised of the following sections: The top section of the window will contain labels and a text area for message input. The text area will be limited to 140 characters in order to comply with Twitter's message limitation. As the user types his or her message, a label showing the number of characters will be updated. The bottom section of the window will use an encapsulating view that will contain multiple image views. Each image view will represent a social network to which the application will post the messages (in our case, Twitter and Facebook). But we can easily add more networks that will have their representation in this section. Each image view acts as a toggle button in order to select if the message will be sent to a particular social network or not. Finally, in the middle of those two sections, we will add a single button that will be used to publish the message from the text area. Code while it is hot! With our design in hand, we can now move on to create our user interface. Our entire application will be contained in a single window, so this is the first thing we will create. We will give it a title as well as a linear background gradient that changes from purple at the top to black at the bottom of the screen. Since we will add other components to it, we will keep its reference in the win variable: var win = Ti.UI.createWindow({ title: 'Unified Status', backgroundGradient: { type: 'linear', startPoint: { x: '0%', y: '0%' }, endPoint: { x: '0%', y: '100%' }, colors: [ { color: '#813eba'}, { color: '#000' } ] } }); The top section We will then add a new white label at the very top of the window. It will span 90% of the window's width, have a bigger font than other labels, and will have its text aligned to the left. Since this label will never be accessed later on, we will invoke our createLabel function right into the add function of the window object: win.add(Ti.UI.createLabel({ text: 'Post a message', color: '#fff', top: 4, width: '90%', textAlign: Ti.UI.TEXT_ALIGNMENT_LEFT, font: { fontSize: '22sp' } })); We will now move on to create our text area where our users will enter their messages. It will be placed right under the label previously created, occupy 90% of the screen's width, have a slightly bigger font, and have a thick, rounded, dark border. It will also be limited to 140 characters. It is also important that we don't forget to add this newly created object to our main window: var txtStatus = Ti.UI.createTextArea({ top: 37, width: '90%', height: 100, color: '#000', maxLength: 140, borderWidth: 3, borderRadius: 4, borderColor: '#401b60', font: { fontSize: '16sp' } }); win.add(txtStatus); The second label that will complement our text area will be used to indicate how many characters are currently present in the user's message. So, we will now create a label just underneath the text area and assign it a default text value. It will span 90% of the screen's width and have its text aligned to the right. Since this label will be updated dynamically when the text area's value changes, we will keep its reference in a variable named lblCount. 
As we did with our previous UI components, we will add our label to our main window using the following code: var lblCount = Ti.UI.createLabel({ text: '0/140', top: 134, width: '90%', color: '#fff', textAlign: Ti.UI.TEXT_ALIGNMENT_RIGHT }); win.add(lblCount); The last control from the top section will be our Post button. It will be placed right under the text area and centered horizontally using the following code: var btnPost = Ti.UI.createButton({ title: 'Post', top: 140, width: 150 }); win.add(btnPost); By not specifying any left or right property, the component is automatically centered. This is pretty useful to keep in mind while designing our user interfaces, as it frees us from having to do calculations in order to center something on the screen. Staying within the limits Even though the text area's maxLength property will ensure that the message length will not exceed the limitation we have set, we need to give our users feedback as they are typing their message. To achieve this, we will add an event listener on the change event of our text area. Every time; the text area's content changes, we will update the lblCount label with the number of characters our message contains: txtStatus.addEventListener('change', function(e) { lblCount.text = e.value.length + '/140'; We will also add a condition to check if our user's message is close to reaching its limit. If that is the case, we will change the label's color to red, if not, it will return to its original color: if (e.value.length > 120) { lblCount.color = 'red'; } else { lblCount.color = 'white' } Our last conditional check will be enabling the Post button only if there is actually a message to be posted. This will prevent our users from posting empty messages: btnPost.enabled = !(e.value.length === 0); }); Setting up our Post button Probably the most essential component in our application would be the Post button. We won't be able to post anything online (yet) by clicking on it, as there are things we will still need to add. We will add an event listener for the click event on the Post button. If the on-screen keyboard is displayed, we will call the blur function of the text area in order to hide it: btnPost.addEventListener('click', function() { txtStatus.blur(); Also, we will reset the value to the text area and the character count on the label, so that the interface is ready to enter a new message: txtStatus.value = ''; lblCount.text = '0/140'; }); The bottom section With all our message input mechanisms in place, we will now move on to our bottom section. All components from this section will be contained in a view that will be placed at the bottom of the screen. It will span 90% of the screen's width, and its height will adapt to its content. We will store its reference in a variable named bottomView for later use: var bottomView = Ti.UI.createView({ bottom: 4, width: '90%', height: Ti.UI.SIZE }); We will then create our toggle switches for each social network that our application interacts with. Since we want something sexier than regular switch components, we will create our own switch using a regular image view. Our first image view will be used to toggle the use of Facebook. It will have a dark blue background (similar to Facebook's logo) and will have a background image representing Facebook's logo with a gray background, so that it appears disabled. It will be positioned to the left of its parent view, and will have a borderRadius value of 4 in order to give it a rounded aspect. 
We will keep its reference in the fbView variable and then add this same image view to our bottom view using the following code: var fbView = Ti.UI.createImageView({ backgroundColor: '#3B5998', image: 'images/fb-logo-disabled.png', borderRadius: 4, width: 100, left: 10, height: 100 }); bottomView.add(fbView); We will create a similar image view (in almost every way), but this time for Twitter. So the background color and the image will be different. Also, it will be positioned to the right of our container view. We will store its reference in the twitView variable and then add it to our bottom view using the following code: var twitView = Ti.UI.createImageView({ backgroundColor: '#9AE4E8', image: 'images/twitter-logo-disabled.png', borderRadius: 4, width: 100, right: 10, height: 100 }); bottomView.add(twitView); Last but not the least, it is imperative that we do not forget to add our bottomView container object into our window. Also, we will open the window so that our users can interact with it using the following code: win.add(bottomView); win.open(); What if the user rotates the device? At this stage, if our user were to rotate the device (to landscape), nothing would happen on the screen. The reason behind this is because we have not taken any action to make our application compatible with the landscape mode. In many cases, this would require some changes to how the user interface is created depending on the orientation. But since our application is fairly simple, and most of our layout relies on percentages, we can activate the landscape mode without any modification to our code. To activate the landscape mode, we will update the orientations section from our tiapp.xml configuration file. It is mandatory to have at least one orientation present in this section (it doesn't matter which one it is). We want our users to be able to use the application, no matter how they hold their device: <iphone> <orientations device="iphone"> <orientation>Ti.UI.PORTRAIT</orientation> <orientation>Ti.UI.UPSIDE_PORTRAIT</orientation> <orientation>Ti.UI.LANDSCAPE_LEFT</orientation> <orientation>Ti.UI.LANDSCAPE_RIGHT</orientation> </orientations> </iphone> By default, no changes are required for Android applications since the default behavior supports orientation changes. There are, of course, ways to limit orientation in the manifest section, but this subject falls out of this article's scope. See it in action We have now implemented all the basic user interface. We can now test it and see if it behaves as we anticipated. We will click on the Run button from the App Explorer tab, as we did many times before. We now have our text area at the top with our two big images views at the bottom, each with the social network's logo. We can already test the message entry (with the character counter incrementing) by clicking on the Post button. Now if we rotate the iOS simulator (or the Android emulator), we can see that our layout adapts well to the landscape mode. To rotate the iOS simulator, you need to use the Hardware menu or you can use Cmd-Left and Cmd-Right on the keyboard. If you are using the Android emulator, there is no menu, but you can change the orientation using Ctrl + F12. The reason this all fits so well is because most of our dimensions are done using percentages. That means that our components will adapt and use the available space on the screen. Also, we positioned the bottom view using the bottom property; which meant that it will stick to the bottom of the screen, no matter how tall it is. 
There is a module for that

Since we don't want to interact with the social network through manual asynchronous HTTP requests, we will leverage the Facebook native module provided with Titanium. Since it comes with the Titanium SDK, there is no need to download or copy any file. All we need to do is add a reference (one for each target platform) in the modules section of our tiapp.xml file, as follows:

<modules>
    <module platform="android">facebook</module>
    <module platform="iphone">facebook</module>
</modules>

Also, we need to add the following property at the end of our configuration file, and replace the FACEBOOK_APPID parameter with the application ID that was provided when we created our app online:

<property name="ti.facebook.appid">[FACEBOOK_APPID]</property>

Why do we have to reference a module even though it comes bundled with the Titanium framework? Mostly, it is to avoid framework bloat; it is safe to assume that most applications developed using Titanium won't require interaction with Facebook. This is the reason for having it in a separate module that can be loaded on demand.

Linking our mobile app to our Facebook app

With our Facebook module declared, we load it into a variable named fb and then populate the necessary properties before making any call to the network:

var fb = require('facebook');

We will need to set the appid property in our code; since we have already defined it as a property in our tiapp.xml file, we can access it through the Properties API. This is a neat way to externalize application parameters, preventing us from hardcoding them in our JavaScript code:

fb.appid = Ti.App.Properties.getString('ti.facebook.appid');

We will also set the proper permissions that we will need while interacting with the server. (In this case, we only want to publish messages on the user's wall):

fb.permissions = ['publish_actions'];

It is important to set the appid and permissions properties before calling the authorize function. This makes sense, since we want Facebook to authorize our application with a defined set of permissions from the get-go.

Allowing our user to log in and log out at the click of a button

We want our users to be able to connect (and disconnect) at will from one social network, just by pressing the same view on the screen. To achieve this, we will create a function called toggleFacebook that has a single parameter:

function toggleFacebook(isActive) {

If we want the function to make the service active, then we will verify whether the user is already logged in to Facebook. If not, we will ask Facebook to authorize the application using the function of the same name. If the parameter indicates that we want to make the service inactive, we will log out from Facebook altogether:

    if (isActive) {
        if (!fb.loggedIn) {
            fb.authorize();
        }
    } else {
        fb.logout();
    }
}

Now, all that we need to do is create an event listener on the click event for our Facebook image view and simply toggle between the two states depending on whether the user is logged in or not:

fbView.addEventListener('click', function() {
    toggleFacebook(!fb.loggedIn);
});

The authorize function prompts the user to log in (if he or she is not already logged in) and authorize the application. It kind of makes sense that Facebook requires user validation before delegating the right to post something on the user's behalf.

Handling responses from Facebook

We have now completely implemented the Facebook login/logout mechanism in our application, but we still need to provide some feedback to our users.
The Facebook module provides two event listeners that allow us to track when our user will have logged in or out. In the login event listener, we will check if the user is logged in successfully. If he or she did, we will update the image view's image property with the colored logo. If there was any error during authentication, or if the operation was simply cancelled, we will show an alert, as given in the following code: fb.addEventListener('login', function(e) { if (e.success) { fbView.image = 'images/fb-logo.png'; } else if (e.error) { alert(e.error); } else if (e.cancelled) { alert("Canceled"); } }); In the logout event listener, we will update the image view's image property with the grayed out Facebook logo using the following code: fb.addEventListener('logout', function(e) { fbView.image = 'images/fb-logo-disabled.png'; }); Posting our message on Facebook Since we are now connected to Facebook, and our application is authorized to post, we can now post our messages on this particular social network. To do this, we will create a new function named postFacebookMessage, with a single parameter, and that will be the message string to be posted: function postFacebookMessage(msg) { Inside this function, we will call the requestWithGraphPath function from the Facebook native module. This function can look fairly complex at first glance, but we will go over each parameter in detail. The parameters are as follows: The Graph API path requested (My feed). A dictionary object containing all of the properties required by the call (just the message). The HTTP method used for this call (POST). The callback function invoked when the request completes (this function simply checks the result from the call. In case of error, an alert is displayed). fb.requestWithGraphPath('me/feed', { message: msg }, "POST", function(e) { if (e.success) { Ti.API.info("Success! " + e.result); } else { if (e.error) { alert(e.error); } else { alert("Unknown result"); } } } ); } We will then update the click event handler for the Post button and call the postFacebookMessage function if the user is logged in to Facebook: btnPost.addEventListener('click', function() { if (fb.loggedIn) { postFacebookMessage(txtStatus.value); } ... }); With this, our application can post messages on our user's Facebook wall. Summary In this article, we learned how to create server-side applications on two popular social networking websites. We learned how to interact with those networks in terms of API and authentication. We also learned how to handle device rotation as well as use the native platform setting Windows. Finally, we covered Titanium menus and activities. Resources for Article: Further resources on this subject: Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop [Article] augmentedTi: The application architecture [Article] Basic Security Approaches [Article]
Mobile application development with IBM Worklight

Packt
18 Feb 2014
13 min read
(For more resources related to this topic, see here.)

The mobile industry is evolving rapidly, with an increasing number of mobile devices such as smartphones and tablets. People are accessing more services from mobile devices than ever before. Mobile solutions are directly impacting businesses, organizations, and their growing number of customers and partners. Even employees now expect to access services on their mobile devices. Several approaches currently exist for mobile application development, which include:

Web Development: Uses open web (HTML5, JavaScript) client programming models.
Hybrid Development: The app source code consists of web code executed within a native container that is provided by Worklight and consists of native libraries.
Hybrid Mixed: The developer augments the web code with native code to create unique features and access native APIs that are not yet available via JavaScript, such as AR, NFC, and others.
Native Development: In this approach, the application is developed using native languages, or transcoded into a native language via a MAP tool, to obtain native appearance, device capabilities, and performance. Achieving a similar application on different platforms requires a different level of expertise for each, which increases cost, time, and complexity.

The preceding list outlines the major aspects of the development approaches. Reviewing this list can help you choose which development approach is correct for your particular mobile application.

The IBM Worklight solution

In 2012, IBM acquired its very first set of mobile development and integration tools, called IBM Worklight, which allows organizations to transform their business and deliver mobile solutions to their customers. IBM Worklight provides a truly open approach for developers to build an application and run it across multiple mobile platforms without having to port it for each environment, that is, Apple iOS, Google Android, BlackBerry, and Microsoft Windows Phone. IBM Worklight also makes the developer's life easier by using standard technologies such as HTML5 and JavaScript, with extensions for popular libraries such as jQuery Mobile, Dojo Toolkit, and Sencha Touch. IBM Worklight is a mobile application platform containing all of the tools needed to develop a mobile application; taken together, its components cover the lifecycle of hybrid mobile application development, and each provides its own bundle of functionality:

Worklight Studio: IBM Worklight provides a robust, Eclipse-based development environment called Worklight Studio that allows developers to quickly construct mobile applications for multiple operating platforms.
Worklight Server: This component is a runtime server that enables secure data transmission through centralized back-end connectivity with adapters, offline encrypted storage, unified push notification, and more.
Worklight Device Runtime: The device runtime provides a rich set of APIs that are cross-platform in nature and offer easy access to the services provided by the IBM Worklight Server.
Worklight Console: This is a web-based interface for real-time analytics, push notification management, and mobile version management, dedicated to the ongoing administration of the Worklight Server and its deployed apps, adapters, and push notification services.
Worklight Application Center: This is a cross-platform mobile application store that addresses the specific needs of a mobile application development team.

There is a big advantage to using Worklight for creating the user interface, and it shows in development on both the client side and the server side. With other products, developers building hybrid apps typically struggle to define use cases, debug, and preview-test enterprise applications; with Worklight, developers can keep the architecture simple and still produce well-structured mobile applications.

Creating a Simple IBM Worklight Application

Let's start by creating a simple HelloWorld Worklight project. The steps described for creating an app are similar for IBM Worklight Studio and the Eclipse IDE. The following is what you'll need to do:

Start IBM Worklight Studio.
Navigate to File | New and select Worklight Project, as shown in the following screenshot:

Create New Worklight Project

In the dialog that is displayed in the following screenshot, select Hybrid Application as the type of application defined in the project templates, enter HelloWorld as the name of the first mobile project, and click on Next.

Asked for Worklight Project Name and Project Type.

You will see another dialog for Hybrid Application. In Application name, provide HelloWorld as the name of the application. Leave the checkboxes unchecked for now; these are used to extend supported JavaScript libraries into the app. Click on Finish.

Define Worklight App Name in the window

After clicking on Finish, you will see that your project has been created, shown from the design perspective in Project Explorer, as in the following screenshot:

Project Explorer after completing the wizard

Adding an environment

We have covered IBM Worklight Studio's features and what they offer developers. It's time to see how this tool and its plugins make your life even easier. Cross-platform development support is one of its strongest features: it lets you target multiple environments without any hurdles, with just a few clicks within its interface. To add an environment for Android, iPhone, or any other platform, right-click on the Apps folder next to the adapters and navigate to New | Worklight Environment. You will see a dialog box appear with checkboxes for the currently supported environments for which you need to create an application. The following screenshot illustrates this feature; we're adding an Android environment for this application:

Worklight Environment selection window

IBM Worklight Client Side API

In this article, you will learn how the IBM Worklight client-side API can improve mobile application development. You will also see how the IBM Worklight server-side API improves client/server integration and communication between mobile applications and back-end systems. The IBM Worklight client-side API allows mobile applications to access most of the features that are available in IBM Worklight during runtime, through a set of libraries that are bundled into the mobile application. These libraries integrate with the Worklight Server through predefined communication interfaces, and they also offer unified access to native device features, which streamlines application development. The IBM Worklight client-side API includes hybrid, native, mixed hybrid, and web-based APIs; a typical use, invoking a server-side adapter procedure from the client, is sketched below.
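As a hedged sketch of that unified communication interface, calling a server-side adapter procedure from the client looks roughly like the following; the adapter and procedure names are illustrative and not part of the HelloWorld project:

// Hedged sketch: calling a hypothetical 'NewsAdapter' procedure through the
// unified client-side API. The framework handles the round trip to the
// Worklight Server and returns through the success/failure callbacks.
var invocationData = {
    adapter: 'NewsAdapter',     // adapter name (illustrative)
    procedure: 'getStories',    // procedure exposed by that adapter
    parameters: ['technology']  // arguments passed to the procedure
};

WL.Client.invokeProcedure(invocationData, {
    onSuccess: function (response) {
        // response.invocationResult holds the JSON returned by the adapter.
        WL.Logger.debug('Adapter returned: ' + JSON.stringify(response.invocationResult));
    },
    onFailure: function (response) {
        WL.Logger.error('Invocation failed: ' + response.errorMsg);
    }
});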
Exploring Dojo Mobile

Now let's look at Dojo Mobile in detail. Dojo Mobile, an extension of the Dojo Toolkit, provides a series of widgets, or components, optimized for use on a mobile device such as a smartphone or tablet. Dojo is a JavaScript framework with a built-in library of custom components, such as text fields, validation menus, and image galleries. The components are modeled on their native counterparts and will look and feel native to anyone familiar with smartphone applications. They are completely customizable using themes, which let you, for example, deliver different sets of styles to iOS and Android users.

Authentication and Security Modules

Worklight has a built-in authentication framework that developers can configure and use with very little effort. A Worklight project contains an authentication configuration file, which is used to declare and enforce security on mobile applications, adapters, data, and web resources, and which consists of the security entities described below. We will talk about the various predefined authentication realms and security tests that Worklight provides out of the box.

To see why mobile security matters, consider that today we keep both personal and business data on our mobile devices, and both the data and the applications are important to us. They should be protected against unauthorized access, particularly if they contain sensitive information or transmit it over the network, because there are numerous ways in which a device can be compromised and leak data to malicious users.

Worklight security principles, concepts, and terminology

IBM Worklight provides various security roles to protect applications, adapter procedures, and static resources from unauthorized access. Each role is defined by a security test that comprises one or more authentication realms. An authentication realm defines the process that will be used to authenticate users, and it has the following parts:

Challenge handler: This is a component on the device side
Authenticator and login module: These are components on the server side

One authentication realm can be used to protect multiple resources. We will look at each component in detail.

Device request flow: The following diagram shows a device making a request to access a protected resource, for example an adapter procedure, on the server. In response, the server sends back an authentication challenge that the device must answer to prove its authenticity:
Request/response flow between a Worklight application and an enterprise server
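To make the device-side part of this flow concrete, here is a minimal sketch of a challenge handler for an adapter-based realm. The realm, adapter, and procedure names are illustrative assumptions, and the realm itself would have to be declared in the project's authenticationConfig.xml:

```javascript
// Create a challenge handler for a (hypothetical) realm named SampleRealm
var challengeHandler = WL.Client.createChallengeHandler('SampleRealm');

// Decide whether a server response is a login challenge for this realm
challengeHandler.isCustomResponse = function (response) {
    return !!(response && response.responseJSON &&
              response.responseJSON.authRequired === true);
};

// React to the challenge, typically by showing a login form;
// the credentials are hardcoded here only to keep the sketch short
challengeHandler.handleChallenge = function (response) {
    var invocationData = {
        adapter: 'AuthAdapter',            // hypothetical authentication adapter
        procedure: 'submitAuthentication', // hypothetical login procedure
        parameters: ['username', 'password']
    };
    // Sends the credentials to the server-side authenticator; on a successful
    // reply the handler should call challengeHandler.submitSuccess()
    challengeHandler.submitAdapterAuthentication(invocationData, {});
};
```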
Push notification

Mobile OS vendors such as Apple, Google, and Microsoft provide a free-of-charge feature through which a message can be delivered to any device running the respective OS. The vendor sends a message, commonly known as a push message, to a device for a particular app; the app does not need to be running in order to receive it.

A push message can contain the following:

Alerts: These appear in the form of text messages
Badges: These are small, circular marks on the app icon
Sounds: These are audio alerts

Alert messages appear in the notification center (on iOS) or the notification bar (on Android).

IBM Worklight provides a unified push notification architecture that simplifies sending push messages across multiple devices running on different platforms, along with a central management console that deals with the vendors' services, for example APNS and GCM, in the background. Worklight's push notifications bring the following benefits:

Easy to use: Users can easily subscribe and unsubscribe to a push service
Quick message delivery: A push message is delivered to the user's device even if the app is not currently running on it
Message feedback: It is possible to send feedback whenever a user receives and reads a push message

Cordova Plugins

Apache Cordova is an open source, cross-platform mobile development framework that allows the creation of multiplatform-deployable mobile apps. Through its plugins, these apps can access native device features from an API based on web technologies such as HTML5, JavaScript, and CSS3. Apache Cordova plugins are integrated into IBM Worklight Android and iOS projects. In this article, we will describe how Apache Cordova merges the JavaScript interface, a wrapper on the web side hosted in a native container, with the device's native interface on each mobile platform.

The most important job of Cordova plugins is to expose native functionality, such as the camera, barcode scanning, the contacts list, and many other native features, across multiple platforms. JavaScript on its own cannot reach these native device capabilities; to make a native feature accessible, a library corresponding to that feature is provided so that JavaScript can communicate through it. When a web page needs to execute native functionality, the following points of access are available:

The scenario can be implemented in a platform-specific manner, for example separately for Android, iOS, and every other device
Alternatively, requests and responses can be passed between the web page and the native layer through a common bridge that handles the communication in both directions

Choosing the first option means implementing and developing platform-dependent mobile applications. Since we need to build for multiple platforms, and this approach leads to cost-ineffective solutions, it is not a wise choice for enterprise mobile development; it also extends poorly to future enhancements and needs.

Encrypted Offline Cache

Encrypted Offline Cache (EOC) is a mechanism for storing frequently used and sensitive data in the client application. EOC permits flexible on-device data storage for Android, iOS, BlackBerry, and Windows Phone. It gives the user a way to store manipulated data, or responses fetched through adapters, while offline, and to synchronize with the server the modifications that were made while offline or without Internet connectivity.
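The following is a minimal sketch of how EOC is used from JavaScript, based on the WL.EncryptedCache API as documented for earlier Worklight releases; the key names and handler bodies are illustrative, and exact callback signatures may vary between versions:

```javascript
// Open (or create, since the second argument is true) the encrypted cache,
// deriving the encryption key from a user-supplied secret
WL.EncryptedCache.open('user-secret', true, onOpen, onCacheError);

function onOpen(status) {
    // Values are encrypted on the device before being persisted
    WL.EncryptedCache.write('lastOrder', JSON.stringify({ id: 42 }),
        function () {
            // Read the value back; the success callback receives the decrypted string
            WL.EncryptedCache.read('lastOrder', function (value) {
                console.log('Cached value: ' + value);
            }, onCacheError);
        },
        onCacheError);
}

function onCacheError(status) {
    console.log('Encrypted cache error: ' + status);
}
```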
If you are creating a mobile application dedicated to multiple platforms, such as iOS and Android, consider using JSONStore rather than EOC: it is more practical to implement, and IBM considers it the best practice. JSONStore eases the cryptographic procedures needed to encrypt the store: PBKDF2, a key derivation function, turns a user-provided password into the key that protects access to the encrypted data. EOC, by contrast, is built on the HTML5 cache, which is not guaranteed to be persistent and is not a dependable solution for future versions of iOS.

Storage JSONStore

JSONStore is a local replica of your back-end data. IBM Worklight delivers an API for working with a JSON store through the JavaScript class WL.JSONStore. You can build an application that keeps manipulated data in a local store and pushes the local updates to a back-end service. Nearly every method in the API operates on the local copy of the data that is kept in the client application, on the device. By means of the JSONStore API, you can extend the existing adapter connectivity model to store data locally and push modifications from the client to the server. You can search the local data store, update or delete data within it, and protect it with password-based encryption.
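Here is a minimal sketch of the WL.JSONStore API as it appears in Worklight 6.x; the collection name, search fields, document contents, and password are illustrative assumptions:

```javascript
// Define a collection and the fields that can be searched on
var collections = {
    people: {
        searchFields: { name: 'string', age: 'integer' }
    }
};

// Initializing with a password enables PBKDF2-based encryption of the store
WL.JSONStore.init(collections, { password: 'user-secret' })
    .then(function () {
        // Add a document to the local collection
        return WL.JSONStore.get('people').add({ name: 'Carlos', age: 10 });
    })
    .then(function () {
        // Query the collection by one of its search fields
        return WL.JSONStore.get('people').find({ name: 'Carlos' });
    })
    .then(function (results) {
        console.log(results); // array of matching documents
    })
    .fail(function (error) {
        console.log(error.toString());
    });
```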
Summary

In this article, we discussed modern mobile development with IBM Worklight, which makes it possible to build easy, integrated, and secure enterprise mobile applications while saving time and development effort. We covered most of the key functional areas, including the IBM Worklight components, cross-platform environment handling, authentication, push notifications, the Dojo Mobile framework, and Encrypted Offline Cache for offline storage. IBM Worklight offers a diverse set of mechanisms for enhancing mobile application functionality in an optimal and efficient way. With the techniques and features covered here, enterprise mobile app development need no longer be a cause for worry.