Spatial Analysis

Packt
07 Jul 2016
21 min read
In this article by Ron Vincent, author of the book Learning ArcGIS Runtime SDK for .NET, we're going to learn about spatial analysis with ArcGIS Runtime. As with other parts of ArcGIS Runtime, we really need to understand how spatial analysis is set up and executed with ArcGIS Desktop/Pro and ArcGIS Server. As a result, we will first learn about spatial analysis within the context of geoprocessing. Geoprocessing is the workhorse of spatial analysis with Esri's technology. Geoprocessing is very similar to writing code, in that you specify some input data, do some work on that input data, and then produce the desired output. The big difference is that you use tools that come with ArcGIS Desktop or Pro. In this article, we're going to learn how to use these tools, and how to specify their input, output, and other parameters from an ArcGIS Runtime app, going well beyond what's available in GeometryEngine. In summary, we're going to cover the following topics:

- Introduction to spatial analysis
- Introduction to geoprocessing
- Preparing for geoprocessing
- Using geoprocessing in Runtime
- Online geoprocessing

Introducing spatial analysis

Spatial analysis is a broad term that can mean many different things, depending on the kind of study to be undertaken, the tools to be used, and the methods of performing the analysis; it is even subject to the dynamics of the individuals involved in the analysis. In this section, we will look broadly at the kinds of analysis that are possible, so that you have some context as to what is possible with the ArcGIS platform. Spatial analysis can be divided into these five broad categories:

- Point patterns
- Surface analysis
- Areal data
- Interactivity
- Networks

Point pattern analysis is the evaluation of the pattern or distribution of points in space. With ArcGIS, you can analyze point data using average nearest neighbor, central feature, mean center, and so on. For surface analysis, you can create surface models, and then analyze them using tools such as line of sight (LOS), slope surfaces, viewsheds, and contours. With areal data (polygons), you can perform hotspot analysis, spatial autocorrelation, grouping analysis, and so on. When it comes to modeling interactivity, you can use tools in ArcGIS that allow you to do gravity modeling, location-allocation, and so on. Lastly, with Esri's technology you can analyze networks, such as finding the shortest path, generating drive-time polygons, origin-destination matrices, and many other examples. ArcGIS provides the ability to perform all of these kinds of analysis using a variety of tools. For example, in a viewshed result, the areas in green are visible from the tallest building.
Areas in red are not visible. What is important to understand is that the ArcGIS platform has the capability to help solve problems such as these:

- An epidemiologist collects data on a disease, such as Chronic Obstructive Pulmonary Disease (COPD), and wants to know where it occurs and whether there are any statistically significant clusters so that a mitigation plan can be developed
- A mining geologist wants to obtain samples of a precious mineral so that he/she can estimate the overall concentration of the mineral
- A military analyst or soldier wants to know where they can be located in the battlefield and not be seen
- A crime analyst wants to know where crimes are concentrated so that they can increase police presence as a deterrent
- A research scientist wants to develop a model to predict the path of a fire

There are many more examples. With ArcGIS Desktop and Pro, along with the correct extension, questions can be posed and answered using a variety of techniques. However, it's important to understand that ArcGIS Runtime may or may not be a good fit and may or may not support certain tools. In many cases, spatial analysis would be best studied with ArcGIS Desktop or Pro. For example, if you plan to conduct hotspot analysis on patients or crime, doing this kind of operation with Desktop or Pro is best because it's typically something you do once. On the other hand, if you plan to allow users to repeat this process again and again with different data, and you need high performance, building a tool with ArcGIS Runtime will be the perfect solution, especially if users need to run the tool in the field. It should also be noted that, in some cases, the ArcGIS JavaScript API will be better suited.

Introducing geoprocessing

If you open up the Geoprocessing toolbox in ArcGIS Desktop or Pro, you will find dozens of tools organized into categories. With these tools, you can build sophisticated models by using ModelBuilder or Python, and then publish them to ArcGIS Server. For example, to perform a buffer operation, you would drag the Buffer tool onto the ModelBuilder canvas and specify its inputs and outputs. Such a model specifies an input (US cities), performs an operation (Buffer the cities), and then produces an output (Buffered cities). Conceptually, this is programming, except that the algorithm is built graphically instead of with code. You may be asking: why would you use this tool in ArcGIS Desktop or Pro? Good question. Well, ArcGIS Runtime only comes with a few selected tools in GeometryEngine. These tools, such as the buffer method in GeometryEngine, are so common that Esri decided to include them with ArcGIS Runtime so that these kinds of operations could be performed on the client without having to call the server. On the other hand, in order to keep the core of ArcGIS Runtime lightweight, Esri wanted to provide these tools and many more, but make them available as tools that you call on when required for special or advanced analysis. As a result, if your app needs basic operations, GeometryEngine may provide what you need. On the other hand, if you need to perform more sophisticated operations, you will need to build the model with Desktop or Pro, publish it to Server, and then consume the resulting service with ArcGIS Runtime. The rest of this article will show you how to consume a geoprocessing model using this pattern.
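Before we move to server-side models, it may help to see what the client-side route mentioned above looks like. The following is a minimal sketch (not from the book) of calling the buffer method in GeometryEngine; the class and method names are illustrative of the ArcGIS Runtime .NET API, and it assumes the input point uses a projected spatial reference such as Web Mercator, so the distance is in meters:

```csharp
using Esri.ArcGISRuntime.Geometry;

public static class ClientSideBufferExample
{
    // Buffers a point entirely on the client; no geoprocessing
    // service round trip is involved. The distance is interpreted
    // in the linear units of the point's spatial reference
    // (assumed here to be meters).
    public static Polygon BufferPoint(MapPoint point, double distanceInMeters)
    {
        return GeometryEngine.Buffer(point, distanceInMeters) as Polygon;
    }
}
```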
Preparing for geoprocessing

To perform geoprocessing, you will need to create a model with ModelBuilder and/or Python. For more details on how to create models using ModelBuilder, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/modelbuilder/what-is-modelbuilder-.htm. To build a model with Python, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/basics/python-and-geoprocessing.htm.

Once you've created a model with ModelBuilder or Python, you will need to run the tool to ensure that it works, so that it can be published as a geoprocessing service for online use, or as a geoprocessing package for offline use. See here for publishing a service: http://server.arcgis.com/en/server/latest/publish-services/windows/a-quick-tour-of-publishing-a-geoprocessing-service.htm. If you plan to use geoprocessing offline, you'll need to publish a geoprocessing package (*.gpk) file. You can learn more about these at https://desktop.arcgis.com/en/desktop/latest/analyze/sharing-workflows/a-quick-tour-of-geoprocessing-packages.htm.

Once you have a geoprocessing service or package, you can consume it with ArcGIS Runtime. In the sections that follow, we will use classes from Esri.ArcGISRuntime.Tasks.Geoprocessing that allow us to consume these geoprocessing services or packages.

Online geoprocessing with ArcGIS Runtime

Once you have created a geoprocessing model, you will want to access it from ArcGIS Runtime. In this section, we're going to do surface analysis from an online service that Esri has published. To accomplish this, you will need to access the REST endpoint by typing in the following URL: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer

When you open this page, you'll notice the description and a list of tasks. A task is a REST child resource of a geoprocessing service. A geoprocessing service can have one or more tasks associated with it. A task requires a set of inputs in the form of parameters. Once the task completes, it will produce some output that you will then use in your app. The output could be a map service, a single value, or even a report. This particular service has only one task associated with it, called Viewshed. If you click on the Viewshed task, you'll be taken to this page: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer/Viewshed

This service produces a viewshed of where the user clicks: the user clicks on the map (X) and the geoprocessing task produces a viewshed, which shows all the areas on the surface that are visible to an observer, as if they were standing on the surface. The task's page lets you know what is required for it to operate, so let's go over each of these requirements:

First, the service lets you know that it is a synchronous geoprocessing service. A synchronous geoprocessing task will run synchronously until it has completed, and block the calling thread. An asynchronous geoprocessing task will run asynchronously, but it won't block the calling thread.

The next pieces of information you'll need to provide to the task are the parameters. In the preceding example, the task requires Input_Observation_Point.
You will need to provide this exact name when supplying the parameter later on, when we write the code to pass in this parameter. Also, note that the Direction value is esriGPParameterDirectionInput. This tells you that the task expects Input_Observation_Point as an input to the model. Lastly, note that the Parameter Type value is Required; in other words, you must provide the task with this parameter in order for it to run. It's also worth noting that the Default Value is of the esriGeometryPoint type, which in ArcGIS Runtime is MapPoint. The Spatial Reference value of the point is 540003. If you investigate the remaining required parameters, you'll note that they include a Viewshed_Distance parameter. If you don't specify a value, it will use the Default Value of 15,000 meters. Lastly, this task will output a Viewshed_Result parameter, which is an esriGeometryPolygon. Using this polygon, we can then render to the map or scene.

Geoprocessing synchronously

Now that you've seen an online service, let's look at how we call this service using ArcGIS Runtime. To execute the preceding viewshed task, we first need to create an instance of the Geoprocessor object. The Geoprocessor object requires a URL down to the task level in the REST endpoint, like this:

```csharp
private const string viewshedServiceUrl =
    "http://sampleserver6.arcgisonline.com/arcgis/rest/services/" +
    "Elevation/ESRI_Elevation_World/GPServer/Viewshed";
private Geoprocessor gpTask;
```

Note that we've attached /Viewshed to the end of the original URL so that we can pass in the complete path to the task. Next, you instantiate the Geoprocessor in your app, using the URL to the task:

```csharp
gpTask = new Geoprocessor(new Uri(viewshedServiceUrl));
```

Once we have created the Geoprocessor, we can prompt the user to click somewhere on the map. Let's look at some code:

```csharp
public async void CreateViewshed()
{
    // get a point from the user
    var mapPoint = await this.mapView.Editor.RequestPointAsync();

    // clear the graphics layers
    this.viewshedGraphicsLayer.Graphics.Clear();
    this.inputGraphicsLayer.Graphics.Clear();

    // add a new graphic to the input layer
    this.inputGraphicsLayer.Graphics.Add(new Graphic
    {
        Geometry = mapPoint,
        Symbol = this.sms
    });

    // specify the input parameters
    var parameter = new GPInputParameter()
    {
        OutSpatialReference = SpatialReferences.WebMercator
    };
    parameter.GPParameters.Add(
        new GPFeatureRecordSetLayer("Input_Observation_Point", mapPoint));
    parameter.GPParameters.Add(
        new GPLinearUnit("Viewshed_Distance", LinearUnits.Miles, this.distance));

    // send to the server
    this.Status = "Processing on server...";
    var result = await gpTask.ExecuteAsync(parameter);
    if (result == null || result.OutParameters == null ||
        !(result.OutParameters[0] is GPFeatureRecordSetLayer))
        throw new ApplicationException(
            "No viewshed graphics returned for this start point.");

    // process the output
    this.Status = "Finished processing. Retrieving results...";
    var viewshedLayer = result.OutParameters[0] as GPFeatureRecordSetLayer;
    var features = viewshedLayer.FeatureSet.Features;
    foreach (Feature feature in features)
    {
        this.viewshedGraphicsLayer.Graphics.Add(feature as Graphic);
    }
    this.Status = "Finished!!";
}
```

The first thing we do is have the user click on the map and return a MapPoint. We then clear a couple of GraphicsLayers that hold the input graphic and viewshed graphics, so that the map is cleared every time this code runs. Next, we create a graphic using the location where the user clicked. Now comes the interesting part.
We need to provide the input parameters for the task, and we do that with GPInputParameter. When we instantiate GPInputParameter, we also need to specify the output spatial reference so that the data is rendered in the spatial reference of the map. In this example, we're using the map's spatial reference. Then, we add the input parameters. Note that we've spelled them exactly as the task requires; if we don't, the task won't work. We also learned earlier that this task requires a distance, so we use GPLinearUnit in miles. The GPLinearUnit class lets the geoprocessor know what kind of units to accept.

After the input parameters are set up, we call ExecuteAsync. We are calling this method because this is a synchronous geoprocessing task. Even though this method has Async on the end of it, that applies to .NET, not ArcGIS Server. The alternative to ExecuteAsync is SubmitJob, which we will discuss shortly. After some time, the result comes back and we grab the results using result.OutParameters[0]. This contains the output from the geoprocessing task, which we use to render the output to the map. Thankfully, it returns a read-only set of polygons, which we can then add to the GraphicsLayer.

If you don't know which parameter type to use, you'll need to look it up on the task's page. In the preceding example, the parameter was called Viewshed_Distance and its Data Type value was GPLinearUnit. ArcGIS Runtime comes with a variety of data types to match the corresponding data types on the server. The other supported types are GPBoolean, GPDataFile, GPDate, GPDouble, GPItemID, GPLinearUnit, GPLong, GPMultiValue<T>, GPRasterData, GPRecordSet, and GPString.

Instead of manually inspecting a task as we did earlier, you can also use Geoprocessor.GetTaskInfoAsync to discover all of the parameters. This is useful if you want to give your users the ability to specify any geoprocessing task dynamically while the app is running. For example, if your app requires that users be able to enter any geoprocessing task, you'll need to inspect that task, obtain the parameters, and then respond dynamically to the entered geoprocessing task.

Geoprocessing asynchronously

So far we've called a geoprocessing task synchronously. In this section, we'll cover how to call a geoprocessing task asynchronously. There are two differences when calling a geoprocessing task asynchronously:

- You will run the task by executing a method called SubmitJobAsync instead of ExecuteAsync. The SubmitJobAsync method is ideal for long-running tasks, such as performing data processing on the server. The major advantage of SubmitJobAsync is that users can continue working while the task works in the background. When the task is completed, the results are presented.
- You will need to check the status of the task with GPJobStatus so that users can get a sense of whether the task is working as expected. To do this, poll the job periodically with CheckJobStatusAsync, which returns a GPJobStatus value. The GPJobStatus enumeration has the following values: New, Submitted, Waiting, Executing, Succeeded, Failed, TimedOut, Cancelling, Cancelled, Deleting, or Deleted. With these values, you can poll the server and present the current status to the user while they wait for the geoprocessor.
The flow is as follows: the input parameters are specified just as with the synchronous task, the Geoprocessor object is set up, and then SubmitJobAsync is called with the parameters (GPInputParameter). Once the task begins, we check its status using the results from SubmitJobAsync. We then use CheckJobStatusAsync on the task to return the status enumeration. If it indicates Succeeded, we do something with the results. If not, we continue to check the status at whatever interval we specify.

Let's try this out using an example service from Esri that allows for areal analysis. Go to the following REST endpoint: http://serverapps10.esri.com/ArcGIS/rest/services/SamplesNET/USA_Data_ClipTools/GPServer/ClipCounties

In the service, you will note that it's called ClipCounties. This is a rather contrived example, but it shows how to do server-side data processing. It requires two parameters called Input_Features and Linear_unit. It outputs output_zip and Clipped_Counties. Basically, this task allows you to drag a line on the map; it will then buffer the line, clip out the counties in the U.S. that the buffer covers, and show them on the map.

We are interested in two methods in this sample app. Let's take a look at them:

```csharp
public async void Clip()
{
    // get the user's input line
    var inputLine = await this.mapView.Editor.RequestShapeAsync(
        DrawShape.Polyline) as Polyline;

    // clear the graphics layers
    this.resultGraphicsLayer.Graphics.Clear();
    this.inputGraphicsLayer.Graphics.Clear();

    // add a new graphic to the input layer
    this.inputGraphicsLayer.Graphics.Add(new Graphic
    {
        Geometry = inputLine,
        Symbol = this.simpleInputLineSymbol
    });

    // add the parameters
    var parameter = new GPInputParameter();
    parameter.GPParameters.Add(
        new GPFeatureRecordSetLayer("Input_Features", inputLine));
    parameter.GPParameters.Add(new GPLinearUnit(
        "Linear_unit", LinearUnits.Miles, this.Distance));

    // submit the job and poll the task
    var result = await SubmitAndPollStatusAsync(parameter);

    // add successful results to the map
    if (result.JobStatus == GPJobStatus.Succeeded)
    {
        this.Status = "Finished processing. Retrieving results...";
        var resultData = await gpTask.GetResultDataAsync(
            result.JobID, "Clipped_Counties");
        if (resultData is GPFeatureRecordSetLayer)
        {
            GPFeatureRecordSetLayer gpLayer =
                resultData as GPFeatureRecordSetLayer;
            if (gpLayer.FeatureSet.Features.Count == 0)
            {
                // fall back to the map service results
                var resultImageLayer = await gpTask.GetResultImageLayerAsync(
                    result.JobID, "Clipped_Counties");

                // make the result image layer semi-transparent
                GPResultImageLayer gpImageLayer = resultImageLayer;
                gpImageLayer.Opacity = 0.5;
                this.mapView.Map.Layers.Add(gpImageLayer);
                this.Status = "Greater than 500 features returned. " +
                    "Results drawn using map service.";
                return;
            }

            // get the result features and add them to the GraphicsLayer
            var features = gpLayer.FeatureSet.Features;
            foreach (Feature feature in features)
            {
                this.resultGraphicsLayer.Graphics.Add(feature as Graphic);
            }
        }
        this.Status = "Success!!!";
    }
}
```

This Clip method first asks the user to add a polyline to the map. It then clears the GraphicsLayers, adds the input line to the map in red, sets up GPInputParameter with the required parameters (Input_Features and Linear_unit), and calls a method named SubmitAndPollStatusAsync with the input parameters. Let's take a look at that method too:

```csharp
// Submit the GP job and poll the server for results every 2 seconds.
private async Task<GPJobInfo> SubmitAndPollStatusAsync(GPInputParameter parameter)
{
    // submit the gp service job
    var result = await gpTask.SubmitJobAsync(parameter);

    // poll for the results asynchronously
    while (result.JobStatus != GPJobStatus.Cancelled &&
           result.JobStatus != GPJobStatus.Deleted &&
           result.JobStatus != GPJobStatus.Succeeded &&
           result.JobStatus != GPJobStatus.TimedOut)
    {
        result = await gpTask.CheckJobStatusAsync(result.JobID);
        foreach (GPMessage msg in result.Messages)
        {
            this.Status = string.Join(Environment.NewLine, msg.Description);
        }
        await Task.Delay(2000);
    }
    return result;
}
```

The SubmitAndPollStatusAsync method submits the geoprocessing job and then polls it every two seconds until its status is Cancelled, Deleted, Succeeded, or TimedOut. It calls CheckJobStatusAsync, gets the messages of type GPMessage, and adds them to the property called Status, which is a ViewModel property holding the current status of the task. We effectively check the status of the task every 2 seconds with Task.Delay(2000) and continue doing this until the job reaches one of the GPJobStatus values we're checking for.

Once SubmitAndPollStatusAsync has succeeded, we return to the main method (Clip) and perform the following steps with the results:

- We obtain the results with GetResultDataAsync by passing in the JobID and the name Clipped_Counties. Clipped_Counties is an output of the task, so we just need to specify its name.
- Using the resulting data, we first check whether it is of the GPFeatureRecordSetLayer type. If it is, we do some more processing on the results, casting it just to make sure we have the right object (GPFeatureRecordSetLayer).
- We then check whether any features were returned from the task. If none were returned, we perform the following steps:
  - We obtain the resulting image layer using GetResultImageLayerAsync. This returns a map service image of the results.
  - We cast this to GPResultImageLayer and set its opacity to 0.5 so that we can see through it. If the user enters a large distance, a lot of counties are returned, so we convert the layer to a map image and show the entire country so that they can see what they've done wrong. Having the result as an image is faster than displaying all of the polygons as JSON objects.
  - We add the GPResultImageLayer to the map.
- If everything worked according to plan, we get only the features needed and add them to the GraphicsLayer.

That was a lot of work, but it's pretty awesome that we sent this off to ArcGIS Server and it did some heavy processing for us so that we could continue working with our map. The geoprocessing task took in a user-specified line, buffered it, and then clipped out the counties in the U.S. that intersected with that buffer. When you run the project, make sure you pan or zoom around while the task is running so that you can see that you can still work. You could also further enhance this code to zoom to the results when it finishes. There are some other pretty interesting capabilities that we need to discuss with this code, so let's delve a little deeper.

Working with the output results

Let's discuss the output of the geoprocessing results in a little more detail in this section.

GPMessage

The GPMessage object is very helpful because it can be used to check the types of message that are coming back from the server. It exposes the different kinds of message via an enumeration called GPMessageType, which you can use to further process each message.
GPMessageType is an enumeration of Informative, Warning, Error, Abort, and Empty. For example, if the task failed, GPMessageType.Error will be returned, and you can present a message to the user letting them know what happened and what they can do to resolve the issue. The GPMessage object also returns Description, which we used in the preceding code to display the status to the user as the task executed. The message level configured on the server dictates what messages are returned by the task: if the Message Level field is set to None, no messages will be returned. When testing a geoprocessing service, it can be helpful to set the service to Info because it produces detailed messages.

GPFeatureRecordSetLayer

The preceding task expected an output of features, so we cast the result to GPFeatureRecordSetLayer. The GPFeatureRecordSetLayer object is a layer type that handles the JSON objects returned by the server, which we can then use to render on the map.

GPResultMapServiceLayer

When a geoprocessing service is created, you have the option of making it produce an output map service result with its own symbology. Refer to http://server.arcgis.com/en/server/latest/publish-services/windows/defining-output-symbology-for-geoprocessing-tasks.htm. You can take the results of a GPFeatureRecordSetLayer object and access this map service using the following URL format: http://catalog-url/resultMapServiceName/MapServer/jobs/jobid

Using the JobID, which was produced by SubmitJobAsync, you can add the result to the map like so:

```csharp
ArcGISDynamicMapServiceLayer dynLayer =
    this.gpTask.GetResultMapServiceLayer(result.JobID);
this.mapView.Map.Layers.Add(dynLayer);
```

Summary

In this article, we went over spatial analysis at a high level, and then went into the details of how to do spatial analysis with ArcGIS Runtime. We discussed how to create models with ModelBuilder and/or Python, and then went on to show how to use geoprocessing, both synchronously and asynchronously, with online and offline tasks. With this information, you now have a multitude of options for adding a wide variety of analytical tools to your apps.

Resources for Article:

Further resources on this subject:
- Building Custom Widgets [article]
- Learning to Create and Edit Data in ArcGIS [article]
- ArcGIS – Advanced ArcObjects [article]

Delphi Cookbook

Packt
07 Jul 2016
6 min read
In this article by Daniele Teti, author of the book Delphi Cookbook - Second Edition, we will study multithreading. Multithreading can be your biggest problem if you do not handle it with care. One of the fathers of the Delphi compiler used to say:

"New programmers are drawn to multithreading like moths to flame, with similar results." – Danny Thorpe

In this chapter, we will discuss some of the main techniques to handle single or multiple background threads. We'll talk about shared resource synchronization and thread-safe queues and events. The last three recipes will talk about the Parallel Programming Library introduced in Delphi XE7, and I hope that you will love it as much as I do. Multithreaded programming is a huge topic. So, after reading this chapter, although you will not become a master of it, you will surely be able to approach the concept of multithreaded programming with confidence and will have the basics to jump on to more specific stuff when (and if) you require it.

Talking with the main thread using a thread-safe queue

Using a background thread and working with its private data is not difficult, but safely bringing information retrieved or elaborated by the thread back to the main thread to show it to the user (as you know, only the main thread can handle the GUI in VCL as well as in FireMonkey) can be a daunting task. An even more complex task would be establishing generic communication between two or more background threads. In this recipe, you'll see how a background thread can talk to the main thread in a safe manner using the TThreadedQueue<T> class. The same concepts are valid for communication between two or more background threads.

Getting ready

Let's talk about a scenario. You have to show data generated from some sort of device or subsystem, let's say a serial port, a USB device, a query polling on database data, or a TCP socket. You cannot simply wait for data using TTimer because this would freeze your GUI during the wait, and the wait can be long. You have tried it, but your interface became sluggish… you need another solution! In the Delphi RTL, there is a very useful class called TThreadedQueue<T> that is, as the name suggests, a particular parametric queue (a FIFO data structure) that can be safely used from different threads. How to use it? In the programming field, there is mostly no single solution valid for all situations, but the following approach is very popular. Feel free to change it if necessary; however, this is the approach used in the recipe code:

1. Create the queue within the main form.
2. Create a thread and inject the form's queue into it.
3. In the thread's Execute method, append all generated data to the queue.
4. In the main form, use a timer or some other mechanism to periodically read from the queue and display the data on the form.

How to do it…

Open the recipe project called ThreadingQueueSample.dproj. This project contains the main form with all the GUI-related code and another unit with the thread code. The FormCreate event creates the shared queue with the following parameters that will influence the behavior of the queue:

- QueueDepth = 100: This is the maximum queue size. If the queue reaches this limit, all push operations will be blocked for a maximum of PushTimeout, then the Push call will fail with a timeout.
- PushTimeout = 1000: This is the timeout in milliseconds that will affect the thread, which in this recipe is the producer of a producer/consumer pattern.
- PopTimeout = 1: This is the timeout in milliseconds that will affect the timer when the queue is empty. This timeout must be very short because the pop call is blocking in nature, and you are in the main thread, which should never be blocked for a long time.

The button labeled Start Thread creates a TReaderThread instance, passing the already created queue to its constructor (this is a particular type of dependency injection called constructor injection). The thread declaration is really simple and is as follows:

```pascal
type
  TReaderThread = class(TThread)
  private
    FQueue: TThreadedQueue<Byte>;
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<Byte>);
  end;
```

While the Execute method simply appends randomly generated data to the queue, note that the Terminated property must be checked often so the application can terminate the thread and wait a reasonable time for its actual termination. In the following example, if the queue is not empty, the thread checks for termination at least every 700 ms or so:

```pascal
procedure TReaderThread.Execute;
begin
  while not Terminated do
  begin
    TThread.Sleep(200 + Trunc(Random(500)));
    // e.g. reading from an actual device
    FQueue.PushItem(Random(256));
  end;
end;
```

So far, you've filled the queue. Now, you have to read from the queue and do something useful with the read data. This is the job of a timer. The following is the code of the timer event on the main form:

```pascal
procedure TMainForm.Timer1Timer(Sender: TObject);
var
  Value: Byte;
begin
  while FQueue.PopItem(Value) = TWaitResult.wrSignaled do
  begin
    ListBox1.Items.Add(Format('[%3.3d]', [Value]));
  end;
  ListBox1.ItemIndex := ListBox1.Count - 1;
end;
```

That's it! Run the application and watch the main form display the data generated by the background thread as it arrives.

There's more…

The TThreadedQueue<T> is very powerful and can be used to communicate between two or more background threads in a consumer/producer schema as well. You can use multiple producers, multiple consumers, or both. A popular schema, used when data is generated faster than it can be handled, is a single producer with multiple consumers: in this case, you can usually gain speed on the processing side by using multiple consumers.

Summary

In this article we had a look at how to talk to the main thread using a thread-safe queue.

Resources for Article:

Further resources on this subject:
- Exploring the Usages of Delphi [article]
- Adding Graphics to the Map [article]
- Application Performance [article]

Packaging the Game

Packt
07 Jul 2016
13 min read
A game is not just art, code, and game design packaged within an executable. You have to deal with stores, publishers, ratings, console providers, and making assets and videos for stores and marketing, among other minor things required to fully ship a game. This article by Muhammad A. Moniem, author of the book Mastering Unreal Engine 4.X, will take care of the last steps you need to perform within the Unreal environment in order to get this packaged executable fine and running. Anything post-Unreal you will need to find your own way to do, but from my seat, I'm telling you that you have done the complex, hard, huge, and long part; what comes next is a lot simpler! This article will help us understand and use Unreal's Project Launcher, patching the project, and creating DLCs (downloadable content).

Project Launcher and DLCs

The first and most important thing to keep in mind is that the Project Launcher is still in development, and the process of creating DLCs is not final yet and might change with upcoming engine releases. While writing this book I've been using Unreal 4.10 and testing everything I do and write within the Unreal 4.11 preview version, and yet the DLC process remains experimental. So be advised that you might find it a little different in the future as the engine evolves.

While we have packaged the game previously through the File menu using Packaging Project, there is another, more detailed, more professional way to do the same job: the Project Launcher. Although it comes in the form of a separate app shipped with Unreal (Unreal Frontend), you also have the choice to run it directly from the editor. You can access the Project Launcher from the Windows menu by choosing Project Launcher, and that will launch it right away.

However, I have a question here. Why would you go through these extra steps, rather than just doing the packaging process in one click? Well, extensibility is the answer. Using the Unreal Project Launcher allows you to create several profiles, each profile having different build settings, and later you can fire off each build whenever you need it. Not only that, but profiles can be made for different projects, which means you can have a ready-made setup for all your projects with all the different build configurations. And even that's not everything; it comes in even handier when you have to cook the content of a game several times, so rather than keep doing it through the File menu, you can cook the content for the game for all the different platforms at once.

For example, if you have to change one texture within your game, which is supported on five platforms, you can make a profile that will cook the content for all the platforms and arrange them for you at once, and you can spend that time doing something else. The Project Launcher does the whole thing for you. What if you have to cook the game content for different languages? Let's say the game supports 10 languages. Do you have to do it one by one for each language? The answer is simple; the Project Launcher will do it for you. So you can simply think of the Project Launcher as a batch process, a custom command-line tool, or even a form of build server. You set the configurations and requests, and leave it to do the whole thing for you, while you save your time for something else. It is all about productivity! And the most important part of the Project Launcher is that you can create DLCs very easily.
By just setting up a profile with a different set of options and settings, you can get the DLC or game mode done without any complications. In a word, it is all about profiles, so let's discuss how to create profiles that serve different purposes. Sometimes the Project Launcher proposes a standard profile matching the platform you are using. That is good, but usually those profiles might not have everything we need, and that's why it is recommended to always create new profiles to serve our goals.

The Project Launcher by default is divided into two sections vertically: the upper part contains the default profiles, while the lower part contains the custom profiles. In order to create a new profile, all you have to do is hit the plus sign in the bottom part, where it is titled Custom Launch Profiles. Pressing it will take you to a wizard, or better described as a window, where you can set up the new profile's options. Those options are drastic, and changing them leads to completely different results, so you have to be careful. In general, you will mostly be building either a project for release, or a DLC or patch for an already released project. You can even do more types of builds that serve different goals, such as a language package for an already released game, which is treated as a patch or DLC but has a different setup and options than a patch or DLC. Anyway, we will be taking care of the two main types of process that developers usually have to deal with in the Project Launcher: release and patch.

Packaging a release

After the new Custom Launch Profile wizard window opens, you have to change the settings that are necessary to make a Release build of the project. These include:

General: This has the following fields:
- Give a name to the profile; this name will be displayed in the Project Launcher main window.
- Give a description to the profile in order to make its goal clear for you in the future, or for anyone else who is going to use it.

Project: This has the following sections:
- Select a project, the one that needs to be built. Or you can leave this at Any Project, in order to build the currently active project.

Build: This has the following sections:
- Check the Build box, so you make a build and activate this section of options.
- From the Build Configuration dropdown, you have to choose a build type, which is Shipping in this case.
- Finally, you can check the Build UAT (Unreal Automation Tool) option from the Advanced Settings of this section. The UAT can be considered a bunch of scripts creating a set of automated processes, but in order to decide whether to run it or not, you have to really understand what the UAT is:
  - Written in C# (may convert to C++ in the future)
  - Automates repetitive tasks through automation scripts
  - Builds, cooks, packages, deploys, and launches projects
  - Invokes UBT for compilation
  - Analyzes and fixes game content files
  - Performs code surgery when updating to new engine versions
  - Distributes compilation (XGE) and build system integration
  - Generates code documentation
  - Automates testing of code and content
  - And many others; you can add your own scripts!
  Knowing this, you can decide whether you want to enable it or not.

Cook: This has the following settings. In the Cook section, you need to set the mode to By the book.
This means you need to define exactly what needs to be cooked and for which platforms. It is enough for now to set it to WindowsNoEditor and check the cultures you want from the list. I chose all of them (this is faster than picking one at a time) and then excluded the ones that I didn't want. Then you need to check which maps should be cooked; if you can't see any maps, it is probably the first build. Later you'll find the maps listed. In any case, you must keep all the maps listed in the Maps folder under the Content directory.

Now, from the Release / DLC / Patching Settings section, you have to check the option Create a release version of the game for distribution, as this version is going to be the distribution one. And from the same section, give the build a version number. This is going to create some extra files that will be used in the future if we are going to create patches or DLCs.

You can expand the Advanced Settings section to set your own options. By default, Compress Content and Save Packages without versions are both checked, and both are good for the type of build we are making. You can also set Store all content in a single file (UnrealPak) to keep things tidy; one .pak file is better than lots of separate files. Finally, you can set Cooker Build Configuration to Shipping, as long as we set Build Configuration itself to Shipping.

Package: From this section's drop-down menu, choose Package & store locally, and that will save the packages on the drive. You can't set anything else here, unless you want to store the packaged game project in a repository.

Deploy: The Deploy section is meant to build the game onto a device of your choice, and I don't think that is the case here. If you want to put the game onto a device, you could do Launch directly from within the editor itself. So, let's set this section to Do Not Deploy.

Launch: If you have chosen to deploy the game to a device, then you'll be able to use this section; otherwise, the options here will be disabled. The set of options here is meant to choose the configuration of the deployed build, as once it is deployed to the device it will run. Here you can set things such as the language culture, the default startup map, command-line arguments, and so on. As we are not deploying now, this section will be disabled.

Now that we have finished editing our profile, you can find a back arrow at the top of this wizard. Pressing it will take you back to the Project Launcher main window, where you can find our profile in the bottom section. Any other profiles you make in the future will be listed there too. Now there is one step left to finish the build. In the right corner of the profile there is a button that says Launch This Profile. Hitting it will start the process of building, cooking, and packaging this profile for the selected project. Hit it right away if you want the process to start. And keep in mind, any time you need to change any of the previously set settings, there is always an Edit button for each profile.

The Project Launcher will start processing this profile; it will take some time, but the amount of time depends on your choices. You'll be able to see all the steps while they happen. Not only that, but you can also watch a detailed log; you can save this log, or even cancel the process at any time. Once everything is done, a new button will appear at the bottom: Done.
Hitting it will take you back again to the Project Launcher main window. You can easily find the build in the Saved\StagedBuilds\WindowsNoEditor directory of your project, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor.

The most important thing now is that, if you are planning to create patches or DLCs for this project, you remember the version number you set in the Cook section. It produced some files that you can find in ProjectName\Releases\ReleaseVersion\Platform, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor. There are two files; make sure that you keep a backup of them on your drive for future use. Now you can ship the game and upload it to the distribution channel!

Packaging a patch or DLC

The good news is, there isn't much to do here. Or, in other words, you have to do lots of things, but it is a repetitive process. You'll be creating a new profile in the Project Launcher, and you'll be setting 90% of the options to the same values as in the previous release profile; the only difference will be in the Cook options. The settings that remain the same are:

- Project
- Build
- Package
- Deploy
- Launch

The only difference is that in the Release/DLC/Patching Settings section of the Cook section you have to:

- Disable Create a release version of the game for distribution.
- Set the number of the base build (the release) as the release version this is based on; this choice makes sure the previous content is compared with the current one.
- Check Generate patch, if the current build is a patch, not a DLC.
- Check Build DLC, if the current build is a DLC, not a patch.

Now you can launch this profile and wait until it is done. The patching process creates a *.pak file in the directory ProjectName\Saved\StagedBuilds\PlatformName\ProjectName\Content\Paks. This *.pak file is the patch that you'll be uploading to the distribution channel! The most common way to handle this type of patch is by creating an installer; in this case, you'll create an installer that copies the *.pak file into the player's directory, ProjectName\Releases\VersionNumber\PlatformName, which is where the original content *.pak file of the release version is. In my case, I copy the *.pak file from C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor to C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor\Bellz\Content\Paks.

Now you've seen the way to build patches and downloadable content, and you should know that, regardless of the time you have spent creating them, it will be faster in the future, because you'll get more used to the process, and Epic is working on making it better and better.

Summary

The Project Launcher is a very powerful tool shipped with the Unreal ecosystem. Using it is not mandatory, but sometimes it is needed to save time, and you learned how and when to use this powerful tool. Many games nowadays have downloadable content; it helps to keep the game community growing and the game earning more revenue. Having DLCs is not essential, but it is good; having them must be planned early, as we discussed, and you've learned how to manage them within Unreal Engine. You also learned how to make patches and DLCs using the Unreal Project Launcher.

Resources for Article:

Further resources on this subject:
- Development Tricks with Unreal Engine 4 [article]
- Bang Bang – Let's Make It Explode [article]
- Lighting basics [article]

Functional Programming in C#

Packt
07 Jul 2016
4 min read
In this article, we are going to explore the following topics:

- An introduction to the functional programming concept
- A comparison between the functional and imperative approaches

Introduction to functional programming

In functional programming, we use a mathematical approach to construct our code. The functions we have in the code are similar to the mathematical functions we use on a daily basis. The variables in a code function represent the values of the function's parameters, just as in a mathematical function. The idea is that a programmer defines the functions, which contain the expression, the definition, and the parameters (which can be expressed by variables), in order to solve a problem. After a programmer builds the functions and sends them to the computer, it's the computer's turn to do its job. In general, the role of the computer is to evaluate the expression in the function and return the result. We can imagine that the computer acts like a calculator, since it will analyze the expression from the function and yield the result to the user in printed format. Suppose we have the expression 3 + 5 inside a function. The computer will return 8 as the result just after it has completely evaluated the expression. However, this is just a trivial example of how a computer evaluates an expression. In fact, a programmer can increase the ability of the computer by writing complex definitions and expressions inside the function. Not only can the computer evaluate trivial expressions, it can also evaluate complex calculations and expressions.

Comparison to imperative programming

The main difference between functional and imperative programming is the existence of side effects. In functional programming, since the pure function concept applies, side effects are avoided. This differs from imperative programming, which has to access I/O and modify state outside the function, producing side effects. In addition, with an imperative approach the programmer focuses on the way of performing the task and on tracking changes in state, while with a functional approach the programmer focuses on the kind of desired information and the kind of required transformation. The change of state is important in imperative programming, while no change of state exists in functional programming. The order of execution is also important in imperative programming, but less so in functional programming, since we are more concerned with constructing the problem as a set of functions to be executed than with the detailed steps of the flow. We will continue our discussion of the functional and imperative approaches by creating some code; a small illustrative sketch follows after the summary below.

Summary

We have become acquainted with the functional approach so far by discussing the introduction to functional programming. We have also compared the functional approach with the mathematical concept of a function. It's now clear that the functional approach uses a mathematical approach to compose functional programs. The comparison between functional and imperative programming has also given us the key points for distinguishing the two. It's now clear that in functional programming the programmer focuses on the kind of desired information and the kind of required transformation, while in the imperative approach the programmer focuses on the way of performing the task and tracking changes in state.
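To ground the distinction discussed above, here is a small illustrative sketch (not from the article itself) contrasting the two styles on the same problem, summing the squares of a list of numbers:

```csharp
using System;
using System.Linq;

class StyleComparison
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5 };

        // Imperative style: we describe how to perform the task,
        // mutating the 'total' state on every step of the loop.
        int total = 0;
        foreach (int n in numbers)
        {
            total += n * n;
        }
        Console.WriteLine(total); // 55

        // Functional style: we describe what we want (square, then sum)
        // as an expression; no state outside the expression is modified,
        // so there is no side effect to track.
        int functionalTotal = numbers.Select(n => n * n).Sum();
        Console.WriteLine(functionalTotal); // 55
    }
}
```

The imperative loop tracks a changing state step by step, while the functional expression only declares the required transformation; this is exactly the focus difference described in the comparison above.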
For more information on C#, visit the following books:

- C# 5 First Look (https://www.packtpub.com/application-development/c-5-first-look)
- C# Multithreaded and Parallel Programming (https://www.packtpub.com/application-development/c-multithreaded-and-parallel-programming)
- C# 6 and .NET Core 1.0: Modern Cross-Platform Development (https://www.packtpub.com/application-development/c-6-and-net-core-10)

Resources for Article:

Further resources on this subject:
- Introduction to Object-Oriented Programming using Python, JavaScript, and C# [article]
- C# Language Support for Asynchrony [article]
- C# with NGUI [article]

Angular's component architecture

Packt
07 Jul 2016
11 min read
In this article by Gion Kunz, author of the book Mastering Angular 2 Components, we look at Angular's component architecture. The concept of directives from the first version of Angular changed the game in frontend UI frameworks. This was the first time that I felt there was a simple yet powerful concept that allowed the creation of reusable UI components. Directives could communicate with DOM events or messaging services. They allowed you to follow the principle of composition, and you could nest directives and create larger directives that solely consisted of smaller directives arranged together. Actually, directives were a very nice implementation of components for the browser. In this section, we'll look into the component-based architecture of Angular 2 and how the previous topic about general UI components fits into Angular.

Everything is a component

As an early adopter of Angular 2, while talking to other people about it, I frequently got asked what the biggest difference is from the first version. My answer to this question was always the same: everything is a component. For me, this paradigm shift was the most relevant change, one that both simplified and enriched the framework. Of course, there are a lot of other changes with Angular 2. However, as an advocate of component-based user interfaces, I've found this change to be the most interesting one, and it also came with a lot of architectural changes.

Angular 2 supports the idea of looking at the user interface holistically and supporting composition with components. However, the biggest difference from its first version is that now your pages are no longer global views; they are simply components that are assembled from other components. If you've been following this chapter, you'll notice that this is exactly what a holistic approach to user interfaces demands: no more pages, but systems of components. Angular 2 still uses the concept of directives, although directives are now really what the name suggests: they are orders for the browser to attach a given behavior to an element. Components are a special kind of directive that comes with a view.

Creating a tabbed interface component

Let's introduce a new UI component in our ui folder in the project that will provide us with a tabbed interface that we can use for composition. We'll use what we learned about content projection in order to make this component reusable. We'll actually create two components: one for Tabs, which itself holds individual Tab components. First, let's create the component class within a new tabs/tab folder in a file called tab.js:

```javascript
import {Component, Input, ViewEncapsulation, HostBinding} from '@angular/core';
import template from './tab.html!text';

@Component({
  selector: 'ngc-tab',
  host: {
    class: 'tabs__tab'
  },
  template,
  encapsulation: ViewEncapsulation.None
})
export class Tab {
  @Input() name;

  @HostBinding('class.tabs__tab--active')
  active = false;
}
```

The only state that we store in our Tab component is whether the tab is active or not. The name that is displayed on the tab will be available through an input property. We use a class property binding to make a tab visible: based on the active flag, we set a class, and without this class, our tabs are hidden. Let's take a look at the tab.html template file of this component:

```html
<ng-content></ng-content>
```

This is it already? Actually, yes it is!
The Tab component is only responsible for storing its name and active state, as well as inserting the host element's content at the content projection point. There's no additional templating needed. Now, we'll move one level up and create the Tabs component, which is responsible for grouping all the Tab components. As we won't include Tab components directly when we want to create a tabbed interface, but use the Tabs component instead, it needs to forward the content that we put into the Tabs host element. Let's look at how we can achieve this. In the tabs folder, we will create a tabs.js file that contains our Tabs component code, as follows:

```javascript
import {Component, ViewEncapsulation, ContentChildren} from '@angular/core';
import template from './tabs.html!text';
// We rely on the Tab component
import {Tab} from './tab/tab';

@Component({
  selector: 'ngc-tabs',
  host: {
    class: 'tabs'
  },
  template,
  encapsulation: ViewEncapsulation.None,
  directives: [Tab]
})
export class Tabs {
  // This queries the content inside <ng-content> and stores a
  // query list that will be updated if the content changes
  @ContentChildren(Tab) tabs;

  // The ngAfterContentInit lifecycle hook will be called once the
  // content inside <ng-content> has been initialized
  ngAfterContentInit() {
    this.activateTab(this.tabs.first);
  }

  activateTab(tab) {
    // To activate a tab, we first convert the live list to an
    // array and deactivate all tabs before we set the new
    // tab active
    this.tabs.toArray().forEach((t) => t.active = false);
    tab.active = true;
  }
}
```

Let's observe what's happening here. We use a new @ContentChildren annotation in order to query our inserted content for directives that match the type that we pass to the decorator. The tabs property will contain an object of the QueryList type, which is an observable list type that will be updated if the content projection changes. You need to remember that content projection is a dynamic process, as the content in the host element can actually change, for example, using the NgFor or NgIf directives.

We use the AfterContentInit lifecycle hook, which we've already briefly discussed in the Custom UI elements section of Chapter 2, Ready, Set, Go! This lifecycle hook is called after Angular has completed content projection on the component. Only then do we have the guarantee that our QueryList object will be initialized, and we can start working with the child directives that were projected as content. The activateTab function sets the active flag on a Tab component, deactivating any previously active tab. As the observable QueryList object is not a native array, we first need to convert it using toArray() before we start working with it.

Let's now look at the template of the Tabs component, which we create in a file called tabs.html in the tabs directory:

```html
<ul class="tabs__tab-list">
  <li *ngFor="let tab of tabs">
    <button class="tabs__tab-button"
            [class.tabs__tab-button--active]="tab.active"
            (click)="activateTab(tab)">{{tab.name}}</button>
  </li>
</ul>
<div class="tabs__l-container">
  <ng-content select="ngc-tab"></ng-content>
</div>
```

The structure of our Tabs component is as follows. First, we render all the tab buttons in an unordered list. After the unordered list, we have a tabs container that will contain all our Tab components that are inserted using content projection and the <ng-content> element. Note that the selector that we use is actually the selector of our Tab component.
Tabs that are not active will not be visible because we control this using CSS on our Tab component class attribute binding (refer to the Tab component code). This is all that we need to create a flexible and well-encapsulated tabbed interface component.

Now, we can go ahead and use this component in our Project component to provide a segregation of our project detail information. We will create three tabs for now, where the first one will embed our task list. We will address the content of the other two tabs in a later chapter.

Let's modify our Project component template in the project.html file as a first step. Instead of including our TaskList component directly, we now use the Tabs and Tab components to nest the task list into our tabbed interface:

<ngc-tabs>
  <ngc-tab name="Tasks">
    <ngc-task-list [tasks]="tasks"
                   (tasksUpdated)="updateTasks($event)">
    </ngc-task-list>
  </ngc-tab>
  <ngc-tab name="Comments"></ngc-tab>
  <ngc-tab name="Activities"></ngc-tab>
</ngc-tabs>

You should have noticed by now that we are actually nesting two components within this template code using content projection, as follows:

First, the Tabs component uses content projection to select all the <ngc-tab> elements. As these elements happen to be components too (our Tab component will attach to elements with this name), they will be recognized as such within the Tabs component once they are inserted.

In the <ngc-tab> element, we then nest our TaskList component. If we go back to our Tab component template, which will be attached to elements with the name ngc-tab, we will find a generic projection point that inserts any content that is present in the host element. Our task list will effectively be passed through the Tabs component into the Tab component.

The visual efforts timeline

Although the components that we created so far to manage efforts provide a good way to edit and display effort and time durations, we can still improve this with some visual indication. In this section, we will create a visual efforts timeline using SVG.
This timeline should display the following information:

The total estimated duration as a grey background bar
The total effective duration as a green bar that overlays the total estimated duration bar
A yellow bar that shows any overtime (if the effective duration is greater than the estimated duration)

The following two figures illustrate the different visual states of our efforts timeline component:

The visual state if the estimated duration is greater than the effective duration
The visual state if the effective duration exceeds the estimated duration (the overtime is displayed as a yellow bar)

Let's start fleshing out our component by creating a new EffortsTimeline component class on the lib/efforts/efforts-timeline/efforts-timeline.js path:

…
@Component({
  selector: 'ngc-efforts-timeline',
  …
})
export class EffortsTimeline {
  @Input() estimated;
  @Input() effective;
  @Input() height;

  ngOnChanges(changes) {
    this.done = 0;
    this.overtime = 0;
    if (!this.estimated && this.effective ||
        (this.estimated && this.estimated === this.effective)) {
      // If there's only effective time or if the estimated time
      // is equal to the effective time we are 100% done
      this.done = 100;
    } else if (this.estimated < this.effective) {
      // If we have more effective time than estimated we need to
      // calculate overtime and done in percentage
      this.done = this.estimated / this.effective * 100;
      this.overtime = 100 - this.done;
    } else {
      // The regular case where we have less effective time than
      // estimated
      this.done = this.effective / this.estimated * 100;
    }
  }
}

Our component has three input properties:

estimated: This is the estimated time duration in milliseconds
effective: This is the effective time duration in milliseconds
height: This is the desired height of the efforts timeline in pixels

In the OnChanges lifecycle hook, we set two component member fields, which are based on the estimated and effective time:

done: This contains the width of the green bar in percentage that displays the effective duration without overtime that exceeds the estimated duration
overtime: This contains the width of the yellow bar in percentage that displays any overtime, which is any time duration that exceeds the estimated duration

Let's look at the template of the EffortsTimeline component and see how we can now use the done and overtime member fields to draw our timeline. We will create a new lib/efforts/efforts-timeline/efforts-timeline.html file:

<svg width="100%" [attr.height]="height">
  <rect [attr.height]="height"
        x="0" y="0" width="100%"
        class="efforts-timeline__remaining"></rect>
  <rect *ngIf="done" x="0" y="0"
        [attr.width]="done + '%'" [attr.height]="height"
        class="efforts-timeline__done"></rect>
  <rect *ngIf="overtime" [attr.x]="done + '%'" y="0"
        [attr.width]="overtime + '%'" [attr.height]="height"
        class="efforts-timeline__overtime"></rect>
</svg>

Our template is SVG-based, and it contains three rectangles for each of the bars that we want to display. The background bar that will be visible if there is remaining effort will always be displayed. Above the remaining bar, we conditionally display the done and the overtime bar using the calculated widths from our component class.

Now, we can go ahead and include the EffortsTimeline class in our Efforts component. This way our users will have visual feedback when they edit the estimated or effective duration, and it provides them a sense of overview.
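To make the percentage arithmetic concrete, here is a small standalone sketch of the same logic in plain JavaScript that you can run in any console; the sample durations are made up purely for illustration:

// Mirrors the ngOnChanges logic of the EffortsTimeline component.
function computeBars(estimated, effective) {
  let done = 0;
  let overtime = 0;
  if ((!estimated && effective) || (estimated && estimated === effective)) {
    done = 100;                              // fully done, no overtime
  } else if (estimated < effective) {
    done = estimated / effective * 100;      // green bar shrinks...
    overtime = 100 - done;                   // ...to make room for overtime
  } else {
    done = effective / estimated * 100;      // regular case: partially done
  }
  return { done, overtime };
}

console.log(computeBars(480, 360)); // { done: 75, overtime: 0 }
console.log(computeBars(480, 600)); // { done: 80, overtime: 20 }

The second call shows the overtime case: the green bar occupies 80% of the width and the yellow overtime bar fills the remaining 20%.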
Let's look into the template of the Efforts component to see how we integrate the timeline:

…
<ngc-efforts-timeline height="10"
                      [estimated]="estimated"
                      [effective]="effective">
</ngc-efforts-timeline>

As we have the estimated and effective duration times readily available in our Efforts component, we can simply create a binding to the EffortsTimeline component input properties:

The Efforts component displaying our newly created efforts timeline component (the overtime of six hours is visualized with the yellow bar)

Summary

In this article, we learned about the architecture of components in Angular 2. We also learned how to create a tabbed interface component and how to create a visual efforts timeline using SVG.

Resources for Article:

Further resources on this subject:

Angular 2.0 [article]
AngularJS Project [article]
AngularJS [article]

AIO setup of OpenStack – preparing the infrastructure code environment

Packt
07 Jul 2016
Viewing your OpenStack infrastructure deployment as code will not only simplify node configuration, but also improve the automation process. Despite the existence of numerous system-management tools for bringing OpenStack up and running in an automated way, we have chosen Ansible for the automation of our infrastructure.

(For more resources related to this topic, see here.)

At the end of the day, you can choose any automation tool that fits your production needs. The key point to keep in mind is that to manage a big production environment, you must simplify operations by:

Automating deployment and operation as much as possible
Tracking your changes in a version control system
Continuously integrating code to keep your infrastructure updated and bug free
Monitoring and testing your infrastructure code to make it robust

We have chosen Git to be our version control system. Let's go ahead and install the Git package on our development system and check the correctness of the Git installation.

If you decide to use an IDE such as Eclipse for your development, it might be easier to install a Git plugin to integrate Git into your IDE. For example, the EGit plugin can be used to develop with Git in Eclipse. We do this by navigating to the Help | Install new software menu entry. You will need to add the following URL to install EGit: http://download.eclipse.org/egit/updates.

Preparing the development setup

The install process is divided into the following steps:

Check out the OSA repository.
Install and bootstrap Ansible.
Perform the initial host bootstrap.
Run the playbooks.

Configuring your setup

The AIO development environment uses the configuration file at test/roles/bootstrap-host/defaults/main.yml. This file describes the default values for the host configuration. In addition to the configuration file, configuration options can be passed through shell environment variables. The BOOTSTRAP_OPTS variable is read by the bootstrap script as space-separated key-value pairs. It can be used to pass values that override the default ones in the configuration file:

export BOOTSTRAP_OPTS="${BOOTSTRAP_OPTS} bootstrap_host_loopback_cinder_size=512"

OSA also allows overriding default values for service configuration. These override values are provided in the etc/openstack_deploy/user_variables.yml file. The following is an example of overriding the values in nova.conf using the override file:

nova_nova_conf_overrides:
  DEFAULT:
    remove_unused_original_minimum_age_seconds: 43200
  libvirt:
    cpu_mode: host-model
    disk_cachemodes: file=directsync,block=none
  database:
    idle_timeout: 300
    max_pool_size: 10

This override file will populate the nova.conf file with the following options:

[DEFAULT]
remove_unused_original_minimum_age_seconds = 43200

[libvirt]
cpu_mode = host-model
disk_cachemodes = file=directsync,block=none

[database]
idle_timeout = 300
max_pool_size = 10

The override variables can also be passed using a per-host configuration stanza in /etc/openstack_deploy/openstack_user_config.yml. The complete set of configuration options is described in the OpenStack Ansible documentation at http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-openstack.html.

Building the development setup

To start the installation process, execute the Ansible bootstrap script. This script will download and install the correct Ansible version. It also creates a wrapper script around ansible-playbook, called openstack-ansible, that always loads the OpenStack user variable files.
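Mapping those install steps onto concrete commands, the sequence looks roughly like this (the script names below match the layout of the OSA repository around this release; verify them against your checkout):

# Check out the OSA repository
git clone https://git.openstack.org/openstack/openstack-ansible
cd openstack-ansible

# Install and bootstrap Ansible (creates the openstack-ansible wrapper)
./scripts/bootstrap-ansible.sh

# Bootstrap the all-in-one host
./scripts/bootstrap-aio.sh

# Run the playbooks to build the environment
./scripts/run-playbooks.sh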
The next step is to configure the system for the all-in-one setup. This script does the following tasks:

Applies Ansible roles to install the basic software requirements, such as OpenSSH and pip; it also applies the bootstrap_host role to check the hard disk and swap space
Creates various loopback volumes for use with Cinder, Swift, and Nova
Prepares networking

Finally, we run the playbooks to bring up the AIO development environment. This step will execute the following tasks:

Creates the LXC containers
Applies security hardening to the host
Reinitiates the network bridges
Installs the infrastructure services, such as MySQL, RabbitMQ, memcached, and more
Finally, installs the various OpenStack services

Running the playbooks takes a long time, as it builds the containers and starts the OpenStack services. Once finished, you will have all the OpenStack services running in their own private containers. You can use the lxc-ls command to list the service containers on the development machine. Use the lxc-attach command to connect to any container, as shown here:

lxc-attach --name <name_of_container>

Use the name of the container from the output of lxc-ls to attach to the container. LXC commands can be used to start and stop the service containers. The AIO environment runs a MySQL cluster, which needs special care to be restarted if the development machine is rebooted. Details of operating the AIO environment are available in the OpenStack Ansible QuickStart guide at http://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html.

Tracking your changes

The OSA project itself maintains its code under version control at the OpenStack git server (http://git.openstack.org/cgit/openstack/openstack-ansible/tree/). The configuration files of OSA are stored at /etc/openstack_deploy/ on the deployment host. These files define the deployment environment and the user override variables. To make sure that you control the deployment environment, it is important that changes to these configuration files are tracked in a version control system. Likewise, to keep the development environment under control, make sure that the Vagrant configuration files are also tracked in version control.

Summary

So far, we've deployed a basic AIO setup of OpenStack. Mastering OpenStack Second Edition will take you through the process of extending our design by clustering, defining the various infrastructure nodes, controller, and compute hosts.

Resources for Article:

Further resources on this subject:

Concepts for OpenStack [article]
Introducing OpenStack Trove [article]
OpenStack Performance, Availability [article]

Animating Elements

Packt
05 Jul 2016
In this article by Alex Libby, author of the book Mastering PostCSS for Web Design, you will learn about animating elements. Here's a question: given the choice of three websites, one static, one with badly done animation, and one that has been enhanced with subtle use of animation, which would you choose? Well, my hope is the answer to that question should be number three: animation can really make a website stand out if done well, or fail miserably if done badly!

So far, our content has been relatively static, save for the use of media queries. It's time though to take a look at how PostCSS can help make animating content a little easier. We'll begin with a quick recap on the basics of animation before exploring the route away from pure jQuery-based animation, through to SASS, and finally across to PostCSS. We will cover a number of topics throughout this article, which will include:

A recap on the use of jQuery to animate content
Switching to CSS-based animation
Exploring the use of prebuilt libraries, such as Animate.css

(For more resources related to this topic, see here.)

Let's make a start!

Revisiting basic animations

Animation is quickly becoming king in web development; more and more websites are using animations to help bring life and keep content fresh. If done correctly, they add an extra layer of experience for the end user; if done badly, the website will soon lose more custom than water through a sieve!

Throughout the course of this article, we'll take a look at making the change from writing standard animation through to using processors, such as SASS, and finally, switching to using PostCSS. I can't promise you that we'll be creating complex JavaScript-based demos, such as the Caaaat animation (http://roxik.com/cat/; try resizing the window!), but we will see that using PostCSS is really easy when creating animations for the browser.

To kick off our journey, we'll start with a quick look at traditional animation. How many times have you had to use .animate() in jQuery over the years? Thankfully, we have the power of CSS3 to help with simple animations, but there was a time when we had to animate content using jQuery. As a quick reminder, try running animate.html from the T34 - Basic animation using jQuery animate() folder. It's not going to set the world on fire, but it is a nice reminder of times gone by, when we didn't know any better.

If we take a look at a profile of this animation from within a DOM inspector in a browser, such as Firefox, it would look something like this screenshot:

While the numbers aren't critical, the key points here are the two dotted green lines, and that the results show a high degree of inconsistent activity. This is a good indicator that activity is erratic, with a low frame count, resulting in animations that are jumpy and less than 100% smooth.

The great thing though is that there are options available to help provide smoother animations; we'll take a brief look at some of the options available before making the change to using PostCSS. For now though, let's make that first step to moving away from using jQuery, beginning with a look at the options available for reducing our dependency on the use of .animate() or jQuery.

Moving away from jQuery

Animating content can be a contentious subject, particularly if jQuery or JavaScript is used. If we were to take a straw poll of 100 people and ask which they used, it is very likely that we would get mixed answers!
A key answer of "it depends" is likely to feature at or near the top of the list of responses; many will argue that animating content should be done using CSS, while others will affirm that JavaScript-based solutions still have value. Leaving aside this, shall we say, lively debate, if we're looking to move away from using jQuery and in particular .animate(), then we have some options available to us:

Upgrade your version of jQuery! Yes, this might sound at odds with the theme of this article, but the most recent versions of jQuery introduced the use of requestAnimationFrame, which improved performance, particularly on mobile devices.

A quick and dirty route is to use the jQuery Animate Enhanced plugin, available from http://playground.benbarnett.net/jquery-animate-enhanced/; although a little old, it still serves a useful purpose. It will (where possible) convert .animate() calls into CSS3 equivalents; it isn't able to convert all, so any that are not converted will remain as .animate() calls.

Using the same principle, we can even take advantage of the JavaScript animation library, GSAP. The GreenSock team have made available a plugin (from https://greensock.com/jquery-gsap-plugin) that replaces jQuery.animate() with their own GSAP library. The latter is reputed to be 20 times faster than standard jQuery! (A short sketch of this drop-in approach follows at the end of this section.)

With a little effort, we can look to rework our existing code. In place of using .animate(), we can add the equivalent CSS3 style(s) into our stylesheet and replace existing calls to .animate() with either .removeClass() or .addClass(), as appropriate.

We can switch to using libraries, such as Transit (http://ricostacruz.com/jquery.transit/). It still requires the use of jQuery, but gives better performance than using the standard .animate() command.

Another alternative is Velocity JS by Julian Shapiro, available from http://julian.com/research/velocity/; this has the benefit of not having jQuery as a dependency. There is even talk of incorporating all or part of the library into jQuery, as a replacement for .animate(). For more details, check out the issue log at https://github.com/jquery/jquery/issues/2053.

Many people automatically assume that CSS animations are faster than JavaScript (or even jQuery). After all, we don't need to call an external library (jQuery); we can use styles that are already baked into the browser, right? The truth is not as straightforward as this. In short, the right use of either will depend on your requirements and the limits of each method. For example, CSS animations are great for simple state changes, but if sequencing is required, then you may have to resort to using the JavaScript route.

The key, however, is less in the method used and more in how many frames per second are displayed on the screen. Most people cannot distinguish above 60fps. This produces a very smooth experience. Anything less than around 25fps will produce blur and occasionally appear jerky; it's up to us to select the best method available that produces the most effective solution. To see the difference in frame rates, take a look at https://frames-per-second.appspot.com/; the animations on this page can be controlled, and it's easy to see why 60fps produces a superior experience!

So, which route should we take? Well, over the next few pages, we'll take a brief look at each of these options. In a nutshell, they are all methods that either improve how animations run or allow us to remove the dependency on .animate(), which we know is not very efficient!
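As the sketch promised in the GSAP option above, the jquery.gsap plugin is designed as a drop-in replacement: you include the GreenSock scripts after jQuery, and existing .animate() calls are routed through GSAP automatically. The file paths here are hypothetical; adjust them to wherever you keep your scripts:

<script src="js/jquery.min.js"></script>
<script src="js/TweenMax.min.js"></script>
<script src="js/jquery.gsap.min.js"></script>
<script>
  // No changes needed here: with the plugin loaded, this .animate()
  // call is executed by GSAP rather than jQuery's own engine.
  $('#square-small').animate({left: 280}, 'slow');
</script>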
True, some of these alternatives still use jQuery, but the key here is that your existing code could be using any or a mix of these methods.

All of the demos over the next few pages were run at the same time as a YouTube video was playing; this was to help simulate a little load and get a more realistic comparison. Running animations under load means less graphics processing power is available, which results in a lower fps count.

Let's kick off with a look at our first option: the Transit JS library.

Animating content with Transit.js

In an ideal world, any project we build will have as few dependencies as possible; this applies equally to JavaScript or jQuery-based content as to CSS styling. To help with reducing dependencies, we can use libraries such as TransitJS or Velocity to construct our animations. The key here is to make use of the animations that these libraries create as a basis for applying styles that we can then manipulate using .addClass() or .removeClass(). To see what I mean, let's explore this concept with a simple demo:

We'll start by opening up a copy of animate.html. To make it easier, we need to change the reference to square-small from a class to an ID:

<div id="square-small"></div>

Next, go ahead and add in a reference to the Transit library immediately before the closing </head> tag:

<script src="js/jquery.transit.min.js"></script>

The Transit library uses a slightly different syntax, so go ahead and update the call to .animate() as indicated:

smallsquare.transition({x: 280}, 'slow');

Save the file and then try previewing the results in a browser. If all is well, we should see no material change in the demo. But the animation will be significantly smoother; the frame count is higher, at 44.28fps, with fewer dips.

Let's compare this with the same profile screenshot taken for revisiting basic animations earlier in this article. Notice anything? Profiling browser activity can be complex, but there are only two things we need to concern ourselves with here: the fps value and the state of the green line. The fps value, or frames per second, is over three times higher, and for a large part, the green line is more consistent, with fewer, more short-lived dips. This means that we have a smoother, more consistent performance; at approximately 44fps, the average frame rate is significantly better than using standard jQuery.

But we're still using jQuery! There is a difference though. Libraries such as Transit or Velocity convert animations where possible to CSS3 equivalents. If we take a peek under the covers, we can see this in the flesh.

We can use this to our advantage by removing the need to use .animate() and simply using .addClass() or .removeClass(). If you would like to compare our simple animation when using Transit or Velocity, there are examples available in the code download, as demos T35A and T35B, respectively.

To take it to the next step, we can use the Velocity library to create a version of our demo using plain JavaScript. We'll see how as part of the next demo. Beware though: this isn't an excuse to still use JavaScript; as we'll see, there is little difference in the frame count!

Animating with plain JavaScript

Many developers are used to working with jQuery. After all, it makes it a cinch to reference just about any element on a page! Sometimes though, it is preferable to work in native JavaScript; this could be for speed.
If we only need to support newer browsers (such as IE11 or Edge, and recent versions of Chrome or Firefox), then adding jQuery as a dependency isn't always necessary. The beauty of libraries such as Transit (or Velocity) is that we don't always have to use jQuery to achieve the same effect; as we'll see shortly, removing jQuery can help improve matters! Let's put this to the test and adapt our earlier demo to work without using jQuery:

We'll start by extracting a copy of the T35B folder from the code download bundle. Save this to the root of our project area.

Next, we need to edit a copy of animate.html within this folder. Go ahead and remove the link to jQuery and then remove the link to velocity.ui.min.js; we should be left with this in the <head> of our file:

<link rel="stylesheet" type="text/css" href="css/style.css">
<script src="js/velocity.min.js"></script>
</head>

A little further down, alter the <script> block as shown:

<script>
  var smallsquare = document.getElementById('square-small');
  var animbutton = document.getElementById('animation-button');
  animbutton.addEventListener("click", function() {
    Velocity(document.getElementById('square-small'), {left: 280}, {duration: 'slow'});
  });
</script>

Save the file and then preview the results in a browser. If we monitor the performance of our demo using a DOM inspector, we can see a similar frame rate being recorded in our demo.

With jQuery as a dependency no longer in the picture, we can clearly see that the frame rate is improved; the downside though is that support is reduced for some browsers, such as IE8 or 9. This may not be an issue for your website; both Microsoft and the jQuery Core Team have announced changes to drop support for IE8-10 and IE8 respectively, which will help encourage users to upgrade to newer browsers.

It has to be said though that while using CSS3 is preferable for speed and keeping our pages as lightweight as possible, using Velocity does provide a raft of extra opportunities that may be of use to your projects. The key though is to carefully consider whether you really do need them, or whether CSS3 will suffice and allow you to use PostCSS.

Switching classes using jQuery

At this point, there is one question that comes to mind: what about using class-based animation? By this, I mean dropping any dependency on external animation libraries and switching to plain jQuery with either the .addClass() or .removeClass() method. In theory, it sounds like a great idea: we can remove the need to use .animate() and simply swap classes as needed, right? Well, it's an improvement, but the performance is still lower than using a combination of pure JavaScript and switching classes. It will all boil down to a trade-off between the ease of using jQuery to reference elements and the speed of pure JavaScript:

1. We'll start by opening a copy of animate.html from the previous exercise. First, go ahead and replace the call to VelocityJS with this line within the <head> of our document:

<script src="js/jquery.min.js"></script>

2. Next, remove the code between the <script> tags and replace it with this:

var smallsquare = $('.rectangle').find('.square-small');
$('#animation-button').on("click", function() {
  smallsquare.addClass("move");
  smallsquare.one('transitionend', function(e) {
    $('.rectangle').find('.square-small').removeClass("move");
  });
});

3. Save the file.
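Note that the jQuery code above only swaps a class; the animation itself now lives in CSS. The move class isn't shown in this extract (it's defined in the demo's css/style.css), but a minimal, hypothetical version that would make the class swap animate, and fire the transitionend event the script listens for, could look like this:

/* Hypothetical sketch; the real rules ship with the demo's stylesheet */
.square-small {
  position: relative;
  left: 0;
  transition: left 2s ease-in-out;
}

.square-small.move {
  left: 280px; /* mirrors the x: 280 movement from the earlier demos */
}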
If we preview the results in a browser, we should see no apparent change in how the demo appears, but the transition is marginally more performant than using a combination of jQuery and Transit. The real change in our code, though, will be apparent if we take a peek under the covers using a DOM inspector. Instead of using .animate(), we are using CSS3 animation styles to move our square-small <div>. Most browsers will accept the use of transition and transform, but it is worth running our code through a process such as Autoprefixer to ensure we apply the right vendor prefixes to our code.

The beauty of using CSS3 here is that while it might not suit large, complex animations, we can at least begin to incorporate the use of external stylesheets, such as Animate.css, or even use a preprocessor, such as SASS, to create our styles. It's an easy change to make, so without further ado, and as the next step on our journey to using PostCSS, let's take a look at this in more detail.

If you would like to create custom keyframe-based animations, then take a look at http://cssanimate.com/, which provides a GUI-based interface for designing them and will pipe out the appropriate code when requested!

Making use of prebuilt libraries

Up to this point, all of our animations have had one thing in common: they are individually created and stored within the same stylesheet as the other styles for each project. This will work perfectly well, but we can do better. After all, it's possible that we may well create animations that others have already built! Over time, we may also build up a series of animations that can form the basis of a library that can be reused for future projects.

A number of developers have already done this. One example of note is the Animate.css library created by Dan Eden. In the meantime, let's run through a quick demo of how it works as a precursor to working with it in PostCSS. The images used in this demo are referenced directly from the LoremPixum website as placeholder images. Let's make a start:

We'll start by extracting a copy of the T37 folder from the code download bundle. Save the folder to our project area.

Next, open a new file and add the following code:

body {
  background: #eee;
}

#gallery {
  width: 745px;
  height: 500px;
  margin-left: auto;
  margin-right: auto;
}

#gallery img {
  border: 0.25rem solid #fff;
  margin: 20px;
  box-shadow: 0.25rem 0.25rem 0.3125rem #999;
  float: left;
}

.animated {
  animation-duration: 1s;
  animation-fill-mode: both;
}

.animated:hover {
  animation-duration: 1s;
  animation-fill-mode: both;
}

Save this as style.css in the css subfolder within the T37 folder.

Go ahead and preview the results in a browser. If all is well, then we should see something akin to this screenshot:

If we run the demo, we should see images run through different types of animation; there is nothing special or complicated here. The question is though, how does it all fit in with PostCSS? Well, there's a good reason for this: there will be some developers who have used Animate.css in the past and will be familiar with how it works; we will also be using the postcss-animation plugin later in Updating code to use PostCSS, which is based on the Animate.css stylesheet library. For those of you who are not familiar with the stylesheet library though, let's quickly run through how it works within the context of our demo.

Dissecting the code of our demo

The effects used in our demo are quite striking.
Indeed, one might be forgiven for thinking that they required a lot of complex JavaScript! This, however, could not be further from the truth. The Animate.css file contains a number of animations based on @keyframes, similar to this:

@keyframes bounce {
  0%, 20%, 50%, 80%, 100% {transform: translateY(0);}
  40% {transform: translateY(-1.875rem);}
  60% {transform: translateY(-0.9375rem);}
}

We pull in the animations using the usual call to the library within the <head> section of our code. We can then call any animation by name from within our code:

<div id="gallery">
  <a href="#"><img class="animated bounce" src="http://lorempixum.com/200/200/city/1" alt="" /></a>
  ...
</div>
</body>

You will notice the addition of the .animated class in our code. This controls the duration and timing of the animation, which are set according to which animation name has been added to the code.

The downside of not using JavaScript (or jQuery, for that matter) is that the animation will only run once when the demo is loaded; we can set it to run continuously by adding the .infinite class to the element being animated (this is part of the Animate library). We can fake a click option in CSS, but it is an experimental hack that is not supported across all browsers. To effect any form of control, we really need to use JavaScript (or even jQuery)! If you are interested in the details of the hack, then take a look at this response on Stack Overflow at http://stackoverflow.com/questions/13630229/can-i-have-an-onclick-effect-in-css/32721572#32721572.

Okay! Onward we go. We've covered the basic use of prebuilt libraries, such as Animate. It's time to step up a gear and make the transition to PostCSS.

Summary

In this article, we started with a recap on the use of jQuery to animate content. We then looked into switching to CSS-based animation and, finally, saw how to make use of prebuilt libraries.

Resources for Article:

Further resources on this subject:

Responsive Web Design with HTML5 and CSS3 - Second Edition [article]
Professional CSS3 [article]
Instant LESS CSS Preprocessor How-to [article]


Data Science with R

Packt
04 Jul 2016
In this article by Matthias Templ, author of the book Simulation for Data Science with R, we will cover:

What is meant by data science
A short overview of what R is
The essential tools for a data scientist in R

(For more resources related to this topic, see here.)

Data science

Looking at the job market, there is no doubt that the industry needs experts in data science. But what is data science, and what is the difference to statistics or computational statistics? Statistics is computing with data. In computational statistics, methods and corresponding software are developed in a highly data-dependent manner using modern computational tools. Computational statistics has a huge intersection with data science. Data science is the applied part of computational statistics plus data management, including the storage of data, databases, and data security issues.

The term data science is used when your work is driven by data, with a less strong component of method and algorithm development than computational statistics, but with a lot of pure computer science topics related to storing, retrieving, and handling data sets. It is the marriage of computer science and computational statistics. As an example to show the differences, we take the broad area of visualization. A data scientist is also interested in pure process-related visualizations (airflows in an engine, for example), while in computational statistics, methods for the visualization of data and statistical results are only touched upon.

Data science is the management of the entire modelling process, from data collection to automatized reporting and presenting the results. Storage and managing data, data pre-processing (editing, imputation), data analysis, and modelling are included in this process. Data scientists use statistics and data-oriented computer science tools to solve the problems they face.

R

R has become an essential tool for statistics and data science (Godfrey 2013). As soon as data scientists have to analyze data, R might be the first choice. The open source programming language and software environment R is currently one of the most widely used and popular software tools for statistics and data analysis. It is available at the Comprehensive R Archive Network (CRAN) as free software under the terms of the Free Software Foundation's GNU General Public License (GPL) in source code and binary form.

The R Core Team defines R as an environment. R is an integrated suite of software facilities for data manipulation, calculation, and graphical display. Base R includes:

A suite of operators for calculations on arrays, mostly written in C and integrated in R
A comprehensive, coherent, and integrated collection of methods for data analysis
Graphical facilities for data analysis and display, either on-screen or in hard copy
A well-developed, simple, and effective programming language that includes conditional statements, loops, user-defined recursive functions, and input and output facilities
A flexible object-oriented system facilitating code reuse
High-performance computing with interfaces to compiled code and facilities for parallel and grid computing
The ability to be extended with (add-on) packages
An environment that allows communication with many other software tools

Each R package provides structured standard documentation, including code application examples. Further documents (so-called vignettes) potentially show more applications of the packages and illustrate dependencies between the implemented functions and methods.
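As a small illustration of this documentation system in R itself (the package and function names here are just examples):

library(MASS)               # load an add-on package
?rlm                        # open the help page of one of its functions
vignette(package = "grid")  # list the vignettes shipped with a package

str(Cars93)                 # inspect the structure of a bundled data set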
R is not only used extensively in the academic world; companies in the area of social media (Google, Facebook, Twitter, and Mozilla Corporation), the banking world (Bank of America, ANZ Bank, Simple), food and pharmaceutical areas (FDA, Merck, and Pfizer), finance (Lloyd, London, and Thomas Cook), technology companies (Microsoft), car construction and logistic companies (Ford, John Deere, and Uber), newspapers (The New York Times and New Scientist), and companies in many other areas also use R in a professional context (see also Gentleman 2009 and Tippmann 2015). International and national organizations nowadays widely use R in their statistical offices (Todorov and Templ 2012, and Templ and Todorov 2016).

R can be extended with add-on packages, and some of those extensions are especially useful for data scientists, as discussed in the following section.

Tools for data scientists in R

Data scientists typically like:

The flexibility in reading and writing data, including the connection to databases
To have easy-to-use, flexible, and powerful data manipulation features available
To work with modern statistical methodology
To use high-performance computing tools, including interfaces to foreign languages and parallel computing
Versatile presentation capabilities for generating tables and graphics, which can readily be used in text processing systems, such as LaTeX or Microsoft Word
To create dynamical reports
To build web-based applications
An economical solution

The tools presented in the following help data scientists with these topics in their daily work.

Use a smart environment for R

Would you prefer to have one environment that includes all types of modern tools for scientific computing, programming, and the management of data and files, versioning, and output generation, that also supports a project philosophy, code completion, highlighting, markup languages, and interfaces to other software, as well as automated connections to servers? Currently, two software products support this concept. The first one is Eclipse with the extension StatET, or the modified Eclipse IDE from Open Analytics called Architect. The second is a very popular IDE for R called RStudio, which also includes the named features and additionally integrates the shiny package (RStudio, Inc. 2014) for web-based development and the rmarkdown package (Allaire et al. 2015). It provides a modern scientific computing environment, well designed and easy to use, and, most importantly, distributed under a GPL license.

Use of R as a mediator

Data exchange between statistical systems, database systems, or output formats is often required. In this respect, R offers very flexible import and export interfaces, either through its base installation but mostly through add-on packages, which are available from CRAN or GitHub. For example, the xml2 package (Wickham 2015a) allows reading XML files. For importing delimited files, fixed-width files, and web log files, it is worth mentioning the readr package (Wickham and Francois 2015a) or data.table (Dowle et al. 2015) (function fread), which are supposed to be faster than the functions available in base R. The XLConnect package (Mirai Solutions GmbH 2015) can be used to read and write Microsoft Excel files, including formulas, graphics, and so on. The readxl package (Wickham 2015b) is faster for data import but does not provide export features.
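A quick, hedged sketch of these import tools in action (the file names here are hypothetical):

library(readr)
flights <- read_csv("flights.csv")              # fast import of a delimited file

library(readxl)
budget <- read_excel("budget.xlsx", sheet = 1)  # read the first sheet of a workbook

library(data.table)
logs <- fread("weblog.csv")                     # data.table's fast file reader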
The foreign package (R Core Team 2015) and a newer, promising package called haven (Wickham and Miller 2015) allow reading file formats from various commercial statistical software. The connection to all major database systems is easily established with specialized packages. Note that the RODBC package (Ripley and Lapsley 2015) is slow but general, while other specialized packages exist for particular databases.

Efficient data manipulation as the daily job

Data manipulation, in general but especially with large data, can be best done with the dplyr package (Wickham and Francois 2015b) or the data.table package (Dowle et al. 2015). The computational speed of both packages is much higher than the data manipulation features of base R, while data.table is slightly faster than dplyr, using keys and fast binary-search-based methods for performance improvements. In the author's view, the syntax of dplyr is much easier to learn for beginners than the base R data manipulation features, and it is possible to write the dplyr syntax using data pipelines that are internally provided by the package magrittr (Bache and Wickham 2014).

Let's take an example to see the logical concept. We want to compute a new variable ES2 as the square of EngineSize from the data set Cars93. For each group, we want to compute the minimum of the new variable. In addition, the results should be sorted in descending order:

data(Cars93, package = "MASS")
library("dplyr")
Cars93 %>%
  mutate(ES2 = EngineSize^2) %>%
  group_by(Type) %>%
  summarize(min.ES2 = min(ES2)) %>%
  arrange(desc(min.ES2))

## Source: local data frame [6 x 2]
##
##      Type min.ES2
## 1   Large   10.89
## 2     Van    5.76
## 3 Compact    4.00
## 4 Midsize    4.00
## 5  Sporty    1.69
## 6   Small    1.00

The code is somewhat self-explanatory, while data manipulation in base R and data.table needs more expertise in syntax writing. In the case of large data files that exceed the available RAM, interfaces to (relational) database management systems are available; see the CRAN task view on high-performance computing, which also includes information about parallel computing. Related to data manipulation, the excellent packages stringr and stringi for string operations and lubridate for date-time handling should also be mentioned.

The requirement of efficient data preprocessing

A data scientist typically spends a major amount of time not only on data management issues but also on fixing data quality problems. It is out of the scope of this book to mention all the tools for each data preprocessing topic. As an example, we concentrate on one particular topic: the handling of missing values.

The VIM package (Templ, Alfons, and Filzmoser 2011; Kowarik and Templ 2016) can be used for the visual inspection and imputation of data. It is possible to visualize missing values using suitable plot methods and to analyze the structure of missing values in microdata using univariate, bivariate, multiple, and multivariate plots. The information on missing values from specified variables is highlighted in selected variables. VIM can also evaluate imputations visually. Moreover, the VIMGUI package (Schopfhauser et al. 2014) provides a point-and-click graphical user interface (GUI).

One such plot, a parallel coordinate plot for missing values, is shown in the following graph. It highlights the values of certain chemical elements. Values that are missing in the chemical element Bi are marked in red.
It is easy to see missing-at-random situations with such plots, as well as to detect any structure in the missing pattern. Note that this data is compositional and thus transformed using a log-ratio transformation from the package robCompositions (Templ, Hron, and Filzmoser 2011):

library("VIM")
data(chorizonDL, package = "VIM")
## for missing values
x <- chorizonDL[, c(15, 101:110)]
library("robCompositions")
x <- cenLR(x)$x.clr
parcoordMiss(x,
    plotvars = 2:11, interactive = FALSE)
legend("top", col = c("skyblue", "red"), lwd = c(1, 1),
    legend = c("observed in Bi", "missing in Bi"))

To impute missing values, not only k-nearest neighbor and hot-deck methods are included, but also robust statistical methods implemented in an EM algorithm, for example, in the function irmi. The implemented methods can deal with a mixture of continuous, semi-continuous, binary, categorical, and count variables:

any(is.na(x))
## [1] TRUE
ximputed <- irmi(x)
## Time difference of 0.01330566 secs
any(is.na(ximputed))
## [1] FALSE

Visualization as a must

While in former times results were presented mostly in tables and data was analyzed by its values on screen, nowadays the visualization of data and results is becoming very important. Data scientists often heavily use visualizations to analyze data, as well as for reporting and presenting results. It's already a no-go to not make use of visualizations.

R features not only its traditional graphics system but also an implementation of the grammar of graphics (Wilkinson 2005) in the form of the ggplot2 package (Wickham 2009). Why should a data scientist make use of ggplot2? Because it is a very flexible, customizable, consistent, and systematic approach to generating graphics. It allows the definition of custom themes (for example, corporate designs in companies) and supports the user with legends and an optimal plot layout. In ggplot2, the parts of a plot are defined independently. We do not go into details and refer to (Wickham 2009), but here's a simple example to show the user-friendliness of the implementation:

library("ggplot2")
ggplot(Cars93, aes(x = Horsepower, y = MPG.city)) + geom_point() + facet_wrap(~Cylinders)

Here, we mapped Horsepower to the x variable and MPG.city to the y variable. We used Cylinders for faceting. We used geom_point to tell ggplot2 to produce scatterplots.

Reporting and web applications

Every analysis and report should be reproducible, especially when a data scientist does the job. It should be possible to recompute everything from the past at any time thereafter. Additionally, a task for a data scientist is to organize and manage text, code, data, and graphics. The use of dynamical reporting tools raises the quality of outcomes and reduces the workload.

In R, the knitr package provides functionality for creating reproducible reports. It links code and text elements. The code is executed and the results are embedded in the text. Different output formats are possible, such as PDF, HTML, or Word. The structuring can be most simply done using rmarkdown (Allaire et al. 2015). Markdown is a markup language with many features, including headings of different sizes, text formatting, lists, links, HTML, JavaScript, LaTeX equations, tables, and citations. The aim is to generate documents from plain text. Corporate designs and styles can be managed through CSS stylesheets.

For data scientists, it is highly recommended to use these tools in their daily work. We already mentioned the automated generation of HTML pages from plain text with rmarkdown. The shiny package (RStudio Inc.
2014) allows you to build web-based applications. The website generated with shiny changes instantly as users modify inputs. You can stay within the R environment to build shiny user interfaces, and interactivity can be integrated using JavaScript, with built-in support for animation and sliders.

The following is a very simple example that includes a slider and presents a scatterplot with outliers highlighted. We do not go into detail on the code; it should only show how simple it is to make a web application with shiny:

library("shiny")
library("robustbase")

## Define server code
server <- function(input, output) {
  output$scatterplot <- renderPlot({
    x <- c(rnorm(input$obs - 10), rnorm(10, 5)); y <- x + rnorm(input$obs)
    df <- data.frame("x" = x, "y" = y)
    df$out <- ifelse(covMcd(df)$mah > qchisq(0.975, 1), "outlier", "non-outlier")
    ggplot(df, aes(x = x, y = y, colour = out)) + geom_point()
  })
}

## Define UI
ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      sliderInput("obs", "No. of obs.", min = 10, max = 500, value = 100, step = 10)
    ),
    mainPanel(plotOutput("scatterplot"))
  )
)

## Shiny app object
shinyApp(ui = ui, server = server)

Building R packages

First, RStudio and the devtools package (Wickham and Chang 2016) make life easy when building packages. RStudio has a lot of facilities for package building, and its integrated package devtools includes features for checking, building, and documenting a package efficiently, and includes roxygen2 (Wickham, Danenberg, and Eugster) for the automated documentation of packages. When the code of a package is updated, load_all('pathToPackage') simulates a restart of R, the new installation of the package, and the loading of the newly built package. Note that there are many other functions available for testing, documenting, and checking.

Secondly, build a package whenever you have written more than two functions and whenever you deal with more than one data set. If you use it only for yourself, you may be lazy with documenting the functions to save time. Packages allow you to share code easily, to load all functions and data with one line of code, to have the documentation integrated, and to support consistency checks and additional integrated unit tests. Advice for beginners is to read the manual Writing R Extensions and use all the features that are provided by RStudio and devtools.

Summary

In this article, we discussed essential tools for data scientists in R. This covers methods for data pre-processing, data manipulation, and tools for reporting, reproducible work, visualization, R packaging, and writing web applications. A data scientist should learn to use the presented tools and deepen the knowledge in the proposed methods and software tools. Having learned these lessons, a data scientist is well prepared to face the challenges in data analysis, data analytics, data science, and data problems in practice.

References

Allaire, J.J., J. Cheng, Y. Xie, J. McPherson, W. Chang, J. Allen, H. Wickham, and H. Hyndman. 2015. Rmarkdown: Dynamic Documents for R. http://CRAN.R-project.org/package=rmarkdown.

Bache, S.M., and H. Wickham. 2014. magrittr: A Forward-Pipe Operator for R. https://CRAN.R-project.org/package=magrittr.

Dowle, M., A. Srinivasan, T. Short, S. Lianoglou, R. Saporta, and E. Antonyan. 2015. data.table: Extension of Data.frame. https://CRAN.R-project.org/package=data.table.

Gentleman, R. 2009. "Data Analysts Captivated by R's Power." New York Times. http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html.
Godfrey, A.J.R. 2013. "Statistical Analysis from a Blind Person's Perspective." The R Journal 5 (1): 73–80.

Kowarik, A., and M. Templ. 2016. "Imputation with the R Package VIM." Journal of Statistical Software.

Mirai Solutions GmbH. 2015. XLConnect: Excel Connector for R. http://CRAN.R-project.org/package=XLConnect.

R Core Team. 2015. foreign: Read Data Stored by Minitab, S, SAS, SPSS, Stata, Systat, Weka, dBase, …. http://CRAN.R-project.org/package=foreign.

Ripley, B., and M. Lapsley. 2015. RODBC: ODBC Database Access. http://CRAN.R-project.org/package=RODBC.

RStudio Inc. 2014. shiny: Web Application Framework for R. http://CRAN.R-project.org/package=shiny.

Schopfhauser, D., M. Templ, A. Alfons, A. Kowarik, and B. Prantner. 2014. VIMGUI: Visualization and Imputation of Missing Values. http://CRAN.R-project.org/package=VIMGUI.

Templ, M., A. Alfons, and P. Filzmoser. 2011. "Exploring Incomplete Data Using Visualization Techniques." Advances in Data Analysis and Classification 6 (1): 29–47.

Templ, M., and V. Todorov. 2016. "The Software Environment R for Official Statistics and Survey Methodology." Austrian Journal of Statistics 45 (1): 97–124.

Templ, M., K. Hron, and P. Filzmoser. 2011. robCompositions: An R-Package for Robust Statistical Analysis of Compositional Data. John Wiley & Sons.

Tippmann, S. 2015. "Programming Tools: Adventures with R." Nature, 109–10. doi:10.1038/517109a.

Todorov, V., and M. Templ. 2012. R in the Statistical Office: Part II. Working paper 1/2012. United Nations Industrial Development.

Wickham, H. 2009. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. http://had.co.nz/ggplot2/book.

Wickham, H. 2015a. xml2: Parse XML. http://CRAN.R-project.org/package=xml2.

Wickham, H. 2015b. readxl: Read Excel Files. http://CRAN.R-project.org/package=readxl.

Wickham, H., and W. Chang. 2016. devtools: Tools to Make Developing R Packages Easier. https://CRAN.R-project.org/package=devtools.

Wickham, H., and R. Francois. 2015a. readr: Read Tabular Data. http://CRAN.R-project.org/package=readr.

Wickham, H., and R. Francois. 2015b. dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr.

Wickham, H., and E. Miller. 2015. haven: Import SPSS, Stata and SAS Files. http://CRAN.R-project.org/package=haven.

Wickham, H., P. Danenberg, and M. Eugster. roxygen2: In-Source Documentation for R. https://github.com/klutometis/roxygen.

Wilkinson, L. 2005. The Grammar of Graphics (Statistics and Computing). Secaucus, NJ, USA: Springer-Verlag New York, Inc.

Resources for Article:

Further resources on this subject:

Adding Media to Our Site [article]
Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article]
JavaScript Execution with Selenium [article]


Getting Started with VR Programming

Jake Rheude
04 Jul 2016
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. It assumes that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D.

Getting Started

First and foremost, download the latest version of Unity3D from their website. Out of the four options, select "Personal", since it costs nothing to the user. Then download and run the installer. The installation process is straightforward. However, you must make sure that you select the "Android Build Support" component if you are planning on using an Android device, or "iOS Build Support" for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space.

Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK, which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up. You will then see a window similar to the pop up below on your screen. Click the "NEW" button on the top right corner to begin your first Google VR project. Give it any project name other than the default "New Unity Project" name. For this guide, I have chosen "VR Programming Tutorial" as the project name.

As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the "Import" button on the bottom right corner to include the SDK in your project.

In your project's "Assets" folder, there should be a folder named "GoogleVR". This is where all the necessary components are located in order to begin working with the SDK.

From the "Assets" folder, go into "GoogleVR"->"DemoScenes"->"HeadSetDemo". Double-click on the Unity icon that is named "DemoScene". You should see something similar to this upon opening the scene file. This is where you can preview the scene before playing it to get an idea of how the game objects will be laid out in the environment. So let's try that by clicking on the "Play" button. The scene will start out from the user's perspective, which would be the main camera.

There is a slight difference in how the left eye and right eye cameras display the environment. This is called distortion correction, and it is intentionally designed that way in order to adapt the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional, to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key, to enable head movement. Make sure to press the keys in this order, otherwise you will only be rotating the display along the Z-axis.

You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there; it just does not appear in VR mode. Notice that the dot in the center of the display will turn into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive.
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the "VR Mode" button, the left eye and right eye cameras will simply go away, and the main camera will be the only camera that displays the world space. VR Mode can be enabled or disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information.

How the scene is displayed when VR mode is disabled.

Note that as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left eye and right eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (for example, a player HUD) as a child 3D element with the main camera being its parent. If you wish to create interactive UI components, they must be done strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise.

Now that we have gone over the basics of the Google VR SDK, let's move on to some programming.

Applying Attributes to Game Objects

When creating an interactive medium of any kind (in this case a VR app), some of the most basic functions can end up being more complicated than they initially seem to be. We will demonstrate that by incorporating what seems to be simple user movement.

In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, first we will remove the existing cube, as it will be an obstruction for our new cubes. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete". Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, selecting "3D Object" and then "Cube".

Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow. Now that we have added our cube, the next step is to add a script to the player's perspective object. For this project, we can use the "GvrMain" game object to incorporate the player's movement. In the inspector tab, click on the "Add Component" button, select "New Script", and create a new script titled "MoveToCube".

Once the script has been created, click on the cogwheel icon and select "Edit Script". Copy and paste the code below into MoveToCube.cs.

Next, add an Event Trigger component to your cube. Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor. Copy and paste the code below into your CubeSelect.cs script.

Click on the "Add New Event Type" button. Select "PointerClick". Click the + icon to add a method to refer to. In the left box, select the "Cube" game object. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method.

When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem not to have appeared in the scene, but only because they are overlapping each other.
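The two listings referenced above appeared as screenshots in the original article. Inferred only from the method names and the event wiring described in the steps (GetCubePosition invoked on a PointerClick, with GvrMain passed in), a hypothetical minimal reconstruction could look like this; treat it as a sketch rather than the author's exact code:

// MoveToCube.cs - attached to GvrMain; glides the player toward a target.
// Hypothetical reconstruction of a listing that was an image in the original.
using UnityEngine;

public class MoveToCube : MonoBehaviour
{
    private Vector3 targetPosition;

    void Start()
    {
        targetPosition = transform.position;
    }

    void Update()
    {
        // Move at a constant speed; constant velocity avoids the
        // acceleration effects that tend to cause nausea in VR.
        transform.position = Vector3.MoveTowards(
            transform.position, targetPosition, 2.0f * Time.deltaTime);
    }

    public void SetTarget(Vector3 position)
    {
        // Keep the camera at its original height.
        targetPosition = new Vector3(position.x, transform.position.y, position.z);
    }
}

// CubeSelect.cs - attached to each cube; wired to the PointerClick event.
using UnityEngine;

public class CubeSelect : MonoBehaviour
{
    // The Event Trigger passes GvrMain in as the 'player' argument.
    public void GetCubePosition(GameObject player)
    {
        MoveToCube mover = player.GetComponent<MoveToCube>();
        if (mover != null)
        {
            mover.SetTarget(transform.position);
        }
    }
}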
Change the positions of each cube so that they are separated from one another along the X and Z axes. Once completed, the scene should look something like this:

Now you can run the VR app and see for yourself that we have incorporated player movement into this basic implementation.

Tips and General Advice

Many developers recommend that you do not apply any acceleration and/or deceleration to the main camera. Doing so will cause nausea in users and thus give them a negative experience with your VR application. Keep your VR app relatively simple! The user has only two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with additional gestures (for example, looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration.

About the Author

Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving e-commerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software, combined with its warehouse operations, that for any error, inaccuracy, or late shipment, not only will it reimburse you for that order, but it will also write you a check for $50.
Angular 2 Components: Making app development easier

Mary Gualtieri
30 Jun 2016
5 min read
When Angular 2 was announced in October 2014, the JavaScript community went into a frenzy. What was going to happen to the beloved Angular 1, which so many developers loved? Could change be a good thing? Change only means improvement, so we should welcome Angular 2 with open arms and embrace it.

One of the biggest changes from Angular 1 to Angular 2 was the purpose of a component. In Angular 2, components are the main way to build and specify logic on a page; in other words, a component is where you define a specific responsibility of your application. In Angular 1, this was achieved through directives, controllers, and scope. With this change, Angular 2 offers better functionality and actually makes it easier to build applications. Angular 2 components also ensure that code from one section of the application will not interfere with other sections.

To build a component, let's first look at what is needed to create it:

An association with DOM/host elements.
Well-defined input and output properties as public APIs.
A template that describes how the component is rendered on the HTML page.
Configuration of the dependency injection.

Input and output properties

Input and output properties are considered the public API of a component, allowing you to access the backend data. Data flows into a component through the input property, and out of the component through the output property. The purpose of the input and output properties is to represent a component in your application.

Template

A template is needed in order to render a component on a page. In order to render the template, you must have a list of the directives that can be used in the template.

Host elements

In order for a component to be rendered in the DOM, the component must associate itself with a DOM or host element. A component can interact with its host element by listening to its events, updating its properties, and invoking methods on it.

Dependency injection

Dependency injection is when a component depends on a service. You request this service through a constructor, and the framework provides it to you. This is significant because you can depend on interfaces instead of concrete types, which enables testability and more control. Dependency injection is created and configured in directive and component decorators.

Bootstrap

In Angular, you have to bootstrap in order to initialize your application, either automatically or manually. In Angular 1, to automatically bootstrap your app, you added ng-app to your HTML file; to manually bootstrap it, you would call angular.bootstrap(document, ['myApp']);. In Angular 2, you bootstrap by simply calling bootstrap();. It's important to remember that bootstrapping in Angular is completely different from Twitter Bootstrap.

Directives

Directives are essentially components without a template. The purpose behind a directive is to allow components to interact with one another. Another way to think of it is that a component is a directive with a template. You still have the option to write a directive with a decorator.

Selectors

Selectors are very easy to understand. A selector is what Angular uses to identify the component, and it is used to call the component from the HTML file. For example, if your selector is called app, you can use <app></app> to call the component in the HTML file.

Let's build a simple component!
Let's walk through the steps required to actually build a component using Angular 2 (a consolidated sketch of these steps appears at the end of this article):

Step 1: Add a component decorator.
Step 2: Add a selector. In your HTML file, use <myapp></myapp> to call the template.
Step 3: Add a template.
Step 4: Add a class to represent the component.
Step 5: Bootstrap the component class.
Step 6: Finally, import both the bootstrap and component files.

This is a root component. In Angular, you have what is called a component tree, and everything comes back to the component tree. The question that you must ask yourself is: what does a component look like if it is not a root component? Perform the following steps for this:

Step 1: Add an import component. For every component that you create, it is important to add import {Component} from "angular2/core";
Step 2: Add a selector and a template.
Step 3: Export the class that represents the component.
Step 4: Switch to the root component, then import the component file, adding the relative path (./todo) to the file.
Step 5: Add an array of directives to the root component in order to be able to use the component.

Let's review. In order to make a component, you must associate host elements, have well-defined input and output properties, have a template, and configure dependency injection. This is all achieved through the use of selectors, directives, and a template. Angular 2 is still in the beta stages, but once it arrives, it will be a game changer for the development world.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.
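Putting the preceding steps together, here is a minimal sketch of what the root and child components might look like in the Angular 2 beta syntax used in this article. The file names, selectors (myapp, todo), and class names (App, Todo) are illustrative assumptions, not the original listings.

// todo.ts: a child component
import {Component} from "angular2/core";

@Component({
    selector: "todo",
    template: "<p>A simple child component</p>"
})
export class Todo {}

// app.ts: the root component
import {Component} from "angular2/core";
import {bootstrap} from "angular2/platform/browser";
import {Todo} from "./todo";

@Component({
    selector: "myapp",
    template: "<h1>Hello from the root component</h1><todo></todo>",
    directives: [Todo] // register the child component so its selector works here
})
class App {}

// Initialize the application with the root component.
bootstrap(App);

With this in place, <myapp></myapp> in the HTML file renders the root component, which in turn renders the child component through its selector.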
Voice Interaction and Android Marshmallow

Raka Mahesa
30 Jun 2016
6 min read
"Jarvis, play some music." You might imagine that to be a quote from some Iron Man stories (and hey, that might be an actual quote), but if you replace the "Jarvis" part with "OK Google," you'll get an actual line that you can speak to your Android phone right now that will open a music player and play a song. Go ahead and try it out yourself. Just make sure you're on your phone's home screen when you do it. This feature is called Voice Action, and it was actually introduced years ago in 2010, though back then it only worked on certain apps. However, Voice Action only accepts a single-line voice command, unlike Jarvis who usually engages in a conversation with its master. For example, if you ask Jarvis to play music, it will probably reply by asking what music you want to play. Fortunately, this type of conversation will no longer be limited to movies or comic books, because with Android Marshmallow, Google has introduced an API for that: the Voice Interaction API. As the name implies, the Voice Interaction API enables you to add voice-based interaction to its app. When implemented properly, the user will be able to command his/her phone to do a particular task without any touch interaction just by having a conversation with the phone. Pretty similar to Jarvis, isn't it? So, let's try it out! One thing to note before beginning: the Voice Interaction API can only be activated if the app is launched using Voice Action. This means that if the app is opened from the launcher via touch, the API will return a null object and cannot be used on that instance. So let’s cover a bit of Voice Action first before we delve further into using the Voice Interaction API. Requirements To use the Voice Interaction API, you need: Android Studio v1.0 or above Android 6.0 (API 23) SDK A device with Android Marshmallow installed (optional) Voice Action Let's start by creating a new project with a blank activity. You won’t use the app interface and you can use the terminal logging to check what app does, so it's fine to have an activity with no user interface here. Okay, you now have the activity. Let’s give the user the ability to launch it using a voice command. Let's pick a voice command for our app—such as a simple "take a picture" command? This can be achieved by simply adding intent filters to the activity. Add these lines to your app manifest file and put them below the original intent filter of your app activity. <intent-filter> <action android_name="android.media.action.STILL_IMAGE_CAMERA" /> <category android_name="android.intent.category.DEFAULT" /> <category android_name="android.intent.category.VOICE" /> </intent-filter> These lines will notify the operating system that your activity should be triggered when a certain voice command is spoken. The action "android.media.action.STILL_IMAGE_CAMERA" is associated with the "take a picture" command, so to activate the app using a different command, you need to specify a different action. Check out this list if you want to find out what other commands are supported. And that's all you need to do to implement Voice Action for your app. Build the app and run it on your phone. So when you say "OK Google, take a picture", your activity will show up. Voice Interaction All right, let's move on to Voice Interaction. When the activity is created, before you start the voice interaction part, you must always check whether the activity was started from Voice Action and whether the VoiceInteractor service is available. 
To do that, call the isVoiceInteraction() function and check the returned value; if it returns true, the service is available for you to use.

Let's say you want your app to first ask the user which side he/she is on, and then change the app background color accordingly. If the user chooses the dark side, the color will be black, but if the user chooses the light side, the app color will be white. Sounds like a simple and fun app, doesn't it?

So first, let's define what options are available for the user to choose. You can do this by creating an instance of VoiceInteractor.PickOptionRequest.Option for each available choice. Note that you can associate more than one word with a single option, as can be seen in the following code:

VoiceInteractor.PickOptionRequest.Option option1 = new VoiceInteractor.PickOptionRequest.Option("Light", 0);
option1.addSynonym("White");
option1.addSynonym("Jedi");

VoiceInteractor.PickOptionRequest.Option option2 = new VoiceInteractor.PickOptionRequest.Option("Dark", 1);
option2.addSynonym("Black");
option2.addSynonym("Sith");

The next step is to define a voice interaction request and tell the VoiceInteractor service to execute that request. For this app, use PickOptionRequest as the request object; you can check out the other request types on this page.

VoiceInteractor.PickOptionRequest.Option[] options = new VoiceInteractor.PickOptionRequest.Option[] { option1, option2 };
VoiceInteractor.Prompt prompt = new VoiceInteractor.Prompt("Which side are you on?");

getVoiceInteractor().submitRequest(new VoiceInteractor.PickOptionRequest(prompt, options, null) {
    // Handle each option here
});

And determine what to do based on the choice picked by the user. This time, we simply check the index of the selected option and change the app background color based on it (we won't delve into how to change the app background color here; let's leave that for another occasion):

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
    if (finished && selections.length == 1) {
        if (selections[0].getIndex() == 0)
            changeBackgroundToWhite();
        else if (selections[0].getIndex() == 1)
            changeBackgroundToBlack();
    }
}

@Override
public void onCancel() {
    closeActivity();
}

And that's it! When you run your app on your phone, it should ask which side you're on if you launch it using Voice Action. You've only learned the basics here, but this should be enough to add a little voice interactivity to your app. And if you ever want to create a Jarvis version, you just need to add "sir" to every question your app asks.

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also tweets regularly as @legacy99.
Development Tricks with Unreal Engine 4

Packt
22 Jun 2016
39 min read
In this article by Benjamin Carnall, the author of Unreal Engine 4 by Example, we will look at some development tricks with Unreal Engine 4. (For more resources related to this topic, see here.)

Creating the C++ world objects

With the character constructed, we can now start to build the level. We are going to create a block-out for the lanes that we will be using for the level, and then use this block-out to construct a mesh that we can reference in code. Before we get into the level creation, we should ensure that the functionality we implemented for the character works as intended.

With the BountyDashMap open, navigate to the C++ classes folder of the Content Browser. Here, you will be able to see the BountyDashCharacter. Drag and drop the character into the game level onto the platform. Then, search for TargetPoint in the Modes panel. Drag and drop three of these target points into the game level, and you will be presented with the following:

Now, press the Play button to enter PIE (Play In Editor) mode. The character will be automatically possessed and used for input. Also, ensure that when you press A or D, the character moves to the next available target point.

Now that we have the base of the character implemented, we should start to build the level. We require three lanes for the player to run down and obstacles for the player to dodge. For now, we will focus on the lanes that the player will be running on. Let's start by blocking out how the lanes will appear in the level.

Drag a BSP Box brush into the game world. You can find the Box brush in the Modes panel, under the BSP section, under the name Box. Place the box at world location (0.0f, 0.0f, -100.0f). This will place the box in the center of the world. Now, change the X property of the box, under the Brush Settings section of the Details panel, to 10000. We require the lane to be this long so that, later on, we can hide the end using fog without obscuring the objects that the player will need to dodge.

Next, we need to click and drag two more copies of this box. You can do this by holding Alt while moving an object via the transform widget. Position one box copy at world location (0.0f, -230.0f, -100.0f) and the other at (0.0f, 230.0f, -100.0f). The last thing we need to do to finish blocking out the level is to place the target points in the center of each lane. You will be presented with this when you are done:

Converting BSP brushes into a static mesh

The next thing we need to do is convert the lane brushes we made into one mesh, so that we can reference it within our code base. Select all of the boxes in the scene; you can do this by holding Ctrl while selecting the box brushes in the editor. With all of the brushes selected, address the Details panel. Ensure that the transformation of your selection is positioned in the middle of the three brushes. If it is not, you can either reselect the brushes in a different order, or group the brushes by pressing Ctrl + G while the boxes are selected. This is important, as the position of the transform widget determines the origin of the generated mesh.

With the group of boxes selected, address the Brush Settings section in the Details panel. There is a small white expansion arrow at the bottom of the section; click on this now. You will then be presented with a Create Static Mesh button; press this now. Name this mesh Floor_Mesh_BountyDash, and save it under the Geometry/Meshes/ folder of the content folder.
Smoke and mirrors with C++ objects

We are going to create the illusion of movement within our level. You may have noticed that we have not included any facilities for our character to move forward in the game world. This is because our character will never advance past his X position at 0. Instead, we are going to move the world toward and past him. This way, we can create very simple spawning and processing logic for the obstacles and the game world, without having to worry about continuously spawning objects that the player can move past further and further down the X axis.

We require some of the level assets to move through the world, so that we can establish the illusion of movement for the character. One of these moving objects will be the floor. This requires some logic that will reposition floor meshes as they reach a certain depth behind the character. We will be creating a swap chain of sorts that works with three meshes. The meshes will be positioned in a contiguous line. As the meshes move underneath and behind the player, we move any mesh that is far enough behind the player to the front of the swap chain. The effect is a never-ending chain of floor meshes constantly flowing underneath the player. The following diagram may help you understand the concept:

Obstacles and coin pickups will follow similar logic; however, they will simply be destroyed upon reaching the kill point shown in the preceding diagram.

Modifying the BountyDashGameMode

Before we start creating the code classes that will feature in our world, we are going to modify the BountyDashGameMode that was generated when the project was created. The game mode is going to be responsible for all of the game state variables and rules. Later on, we will use the game mode to determine how the player respawns when the game is lost.

BountyDashGameMode class definition

The game mode is going to be fairly simple. We are going to add a few member variables that will hold the current state of the game, such as the game speed, the game level, and the number of coins needed to increase the game speed. Navigate to BountyDashGameMode.h and add the following code:

UCLASS(minimalapi)
class ABountyDashGameMode : public AGameMode
{
    GENERATED_BODY()

    UPROPERTY()
    float gameSpeed;

    UPROPERTY()
    int32 gameLevel;

As you can see, we have two private member variables called gameSpeed and gameLevel. These are private, as we wish no other object to be able to modify their contents. You will also note that the class has been specified with minimalapi. This specifier effectively informs the engine that other code modules will not need information from this object outside of the class type. This means you will be able to cast to this class type, but its functions cannot be called from other modules. It is specified as a way to optimize compile times, as no module outside of this project's API will require interactions with our game mode.

Next, we declare the public functions and members that we will be using within our game mode. Add the following code to the ABountyDashGameMode class definition:

public:
    ABountyDashGameMode();

    void CharScoreUp(unsigned int charScore);

    UFUNCTION()
    float GetInvGameSpeed();

    UFUNCTION()
    float GetGameSpeed();

    UFUNCTION()
    int32 GetGameLevel();

The CharScoreUp() function takes in the player's current score (held by the player) and changes game values based on this score. This means we are able to make the game more difficult as the player scores more points.
The next three functions are simply accessor methods that we can use to get the private data of this class from other objects. Next, we need to declare our protected members. These have been exposed as EditAnywhere, so that we may adjust them from the editor for testing purposes:

protected:
    UPROPERTY(EditAnywhere, BlueprintReadOnly)
    int32 numCoinsForSpeedIncrease;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    float gameSpeedIncrease;
};

The numCoinsForSpeedIncrease variable determines how many coins it takes to increase the speed of the game, and the gameSpeedIncrease value determines how much faster the objects move when the numCoinsForSpeedIncrease quota has been met.

BountyDashGameMode function definitions

Let's begin adding definitions to the BountyDashGameMode functions; they will be very simple at this point. We start by providing some default values for our member variables within the constructor, and by assigning the class that is to be used for our default pawn. Add the definition for the ABountyDashGameMode constructor:

ABountyDashGameMode::ABountyDashGameMode()
{
    // Set the default pawn class to our ABountyDashCharacter
    DefaultPawnClass = ABountyDashCharacter::StaticClass();

    numCoinsForSpeedIncrease = 5;
    gameSpeed = 10.0f;
    gameSpeedIncrease = 5.0f;
    gameLevel = 1;
}

Here, we set the default pawn class by calling StaticClass() on ABountyDashCharacter. As we have just referenced the ABountyDashCharacter type, ensure that #include "BountyDashCharacter.h" is added to the BountyDashGameMode.cpp include list. The StaticClass() function is provided by default for all objects, and it returns the class type information of the object as a UClass*.

We then establish some default values for the member variables. The player will have to pick up five coins to increase the level. The game speed is set to 10.0f (10 m/s), and the game will speed up by 5.0f (5 m/s) every time the coin quota is reached.

Next, let's add a definition for the CharScoreUp() function:

void ABountyDashGameMode::CharScoreUp(unsigned int charScore)
{
    if (charScore != 0 &&
        charScore % numCoinsForSpeedIncrease == 0)
    {
        gameSpeed += gameSpeedIncrease;
        gameLevel++;
    }
}

This function is quite self-explanatory. The character's current score is passed into the function. We then check that the character's score is not currently 0, and check whether the remainder of the score divided by the number of coins needed for a speed increase is 0; that is, whether it divides evenly, meaning the quota has been reached. If so, we increase the game speed by the gameSpeedIncrease value and increment the level.

The last thing we need to add is the accessor methods described previously. They do not require much explanation, apart from the GetInvGameSpeed() function. This function will be used by objects that wish to be pushed down the negative X axis at the game speed:

float ABountyDashGameMode::GetInvGameSpeed()
{
    return -gameSpeed;
}

float ABountyDashGameMode::GetGameSpeed()
{
    return gameSpeed;
}

int32 ABountyDashGameMode::GetGameLevel()
{
    return gameLevel;
}

Getting our game mode via template functions

The ABountyDashGameMode now contains information and functionality that will be required by most of the BountyDash objects we create going forward. We need to create a lightweight method to retrieve our custom game mode while ensuring that the type information is preserved.
We can do this by creating a template function that takes in a world context and returns the correct game mode handle. Traditionally, we could just use a direct cast to ABountyDashGameMode; however, this would require including BountyDashGameMode.h in BountyDash.h. As not all of our objects will require knowledge of the game mode, this is wasteful. Navigate to the BountyDash.h file now. You will be presented with the following:

#pragma once

#include "Engine.h"

What currently exists in the file is very simple: #pragma once has again been used to ensure that the compiler only builds and includes the file once, and Engine.h has been included, so every other object in BOUNTYDASH_API (they include BountyDash.h by default) has access to the functions within Engine.h. This is a good place to include utility functions that you wish all objects to have access to. In this file, include the following lines of code:

template<typename T>
T* GetCustomGameMode(UWorld* worldContext)
{
    return Cast<T>(worldContext->GetAuthGameMode());
}

Simply put, this is a template function that takes in a game world handle, gets the game mode from this context via the GetAuthGameMode() function, and then casts the game mode to the template type provided to the function. We must cast to the template type, as GetAuthGameMode() simply returns an AGameMode handle. Now, with this in place, let's begin coding our never-ending floor.

Coding the floor

The construction of the floor will be quite simple in essence, as we only need a few variables and a tick function to achieve the functionality we need. Use the class wizard to create a class named Floor that inherits from AActor. We will start by modifying the class definition found in Floor.h; navigate to this file now.

Floor class definition

The class definition for the floor is very basic. All we need is a Tick function and some accessor methods, so that we may provide information about the floor to other objects. I have also removed the BeginPlay function provided by default by the class wizard, as it is not needed. The following is what you will need to write for the AFloor class definition in its entirety; replace what is present in Floor.h with this now (keeping the #include list intact):

UCLASS()
class BOUNTYDASH_API AFloor : public AActor
{
    GENERATED_BODY()

public:
    // Sets default values for this actor's properties
    AFloor();

    // Called every frame
    virtual void Tick(float DeltaSeconds) override;

    float GetKillPoint();
    float GetSpawnPoint();

protected:
    UPROPERTY(EditAnywhere)
    TArray<USceneComponent*> FloorMeshScenes;

    UPROPERTY(EditAnywhere)
    TArray<UStaticMeshComponent*> FloorMeshes;

    UPROPERTY(EditAnywhere)
    UBoxComponent* CollisionBox;

    int32 NumRepeatingMesh;
    float KillPoint;
    float SpawnPoint;
};

We have three UPROPERTY-declared members. The first two are TArrays that will hold handles to the USceneComponent and UStaticMeshComponent objects that make up the floor. We require the TArray of scene components because USceneComponent objects provide us with a world transform that we can apply translations to, so that we may update the position of the generated floor mesh pieces. The last UPROPERTY is a collision box that will be used for the actual player collisions, to prevent the player from falling through the moving floor. The reason we are using a UBoxComponent instead of the meshes for collision is that we do not want the player to translate with the moving meshes.
Due to surface friction simulation, having the character collide with any of the moving meshes would cause the player to move with the mesh. The last three members are protected and do not require any UPROPERTY specification. We will simply use the two float values, KillPoint and SpawnPoint, to save calculations made in the constructor, so that we may use them in the Tick() function. The integer value NumRepeatingMesh will be used to determine how many meshes we have in the chain.

Floor function definitions

As always, we will start with the floor's constructor, where we will perform the bulk of our calculations for this object. We will create the USceneComponents and UStaticMeshComponents that we are going to use to make up our moving floor. With dynamic programming in mind, we should establish the construction algorithm so that we can create any number of meshes in the moving line. Also, as we will be getting the speed of the floor's movement from the game mode, ensure that #include "BountyDashGameMode.h" is included in Floor.cpp.

The AFloor::AFloor() constructor

Start by adding the following lines to the AFloor constructor, AFloor::AFloor(), found in Floor.cpp:

RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

ConstructorHelpers::FObjectFinder<UStaticMesh> myMesh(TEXT(
"/Game/Barrel_Hopper/Geometry/Floor_Mesh_BountyDash.Floor_Mesh_BountyDash"));

ConstructorHelpers::FObjectFinder<UMaterial> myMaterial(TEXT(
"/Game/StarterContent/Materials/M_Concrete_Tiles.M_Concrete_Tiles"));

To start with, we are simply using FObjectFinders to find the assets that we require for the mesh. For the myMesh finder, ensure that you parse the reference location of the static floor mesh that we created earlier. We also create a scene component to be used as the root component for the floor object. Next, we check the success of the mesh acquisition and then establish some variables for the mesh placement logic:

if (myMesh.Succeeded())
{
    NumRepeatingMesh = 3;

    FBoxSphereBounds myBounds = myMesh.Object->GetBounds();
    float XBounds = myBounds.BoxExtent.X * 2;
    float ScenePos = ((XBounds * (NumRepeatingMesh - 1)) / 2.0f) * -1;

    KillPoint = ScenePos - (XBounds * 0.5f);
    SpawnPoint = (ScenePos * -1) + (XBounds * 0.5f);

Note that we have just opened an if statement without closing the scope; from time to time, I will split the segments of code within a scope across multiple pages. If you are ever lost as to the current scope we are working in, look for a comment such as // <-- Closing if(myMesh.Succeeded()) or a similarly named one.

Firstly, we initialize the NumRepeatingMesh value with 3. We use a variable here instead of a hardcoded value so that we may update the number of meshes in the chain without having to refactor the remaining code base. We then get the bounds of the mesh object using the GetBounds() function on the mesh asset that we just retrieved. This returns an FBoxSphereBounds structure, which provides all of the bounding information of a static mesh asset. We then use the X component of the member called BoxExtent to initialize XBounds. BoxExtent is a vector that holds the extent of the bounding box of the mesh. We save the X component of this vector so that we can use it for the mesh chain placement logic. We have doubled this value, as the BoxExtent vector only represents the extent of the box from the origin to one corner of the mesh.
This means that if we want the total bounds of the mesh, we must double any of the BoxExtent components.

Next, we calculate the initial scene position of the first USceneComponent we will be attaching a mesh to, and store it in ScenePos. We can determine this position by getting the total length of all of the meshes in the chain (XBounds * (NumRepeatingMesh - 1)) and then halving the resulting value, giving us the distance of the first scene component from the origin along the X axis. We also multiply this value by -1 to make it negative, as we wish to start our mesh chain behind the character (at X position 0).

We then use ScenePos to specify KillPoint, which represents the point in space that floor mesh pieces must reach before being swapped back to the start of the chain. For the purposes of the swap chain, whenever a scene component is half a mesh piece length behind the position of the first scene component in the chain, it should be moved to the other side of the chain.

With all of our variables in place, we can now iterate through the number of meshes we desire (3) and create the appropriate components. Add the following code to the scope of the if statement that we just opened:

for (int i = 0; i < NumRepeatingMesh; ++i)
{
    // Initialize Scene
    FString SceneName = "Scene" + FString::FromInt(i);
    FName SceneID = FName(*SceneName);
    USceneComponent* thisScene = CreateDefaultSubobject<USceneComponent>(SceneID);
    check(thisScene);

    thisScene->AttachTo(RootComponent);
    thisScene->SetRelativeLocation(FVector(ScenePos, 0.0f, 0.0f));
    ScenePos += XBounds;

    FloorMeshScenes.Add(thisScene);

Firstly, we create a name for the scene component by appending "Scene" with the iteration value that we are up to. We then convert this appended FString to an FName and provide it to the CreateDefaultSubobject template function. With the resultant USceneComponent handle, we call AttachTo() to bind it to the root component. Then, we set the relative location of the USceneComponent, parsing in the ScenePos value that we calculated earlier as the X component of the new relative location. The relative location of this component will always be based on the position of the root scene component that we created earlier.

With the USceneComponent appropriately placed, we increment ScenePos by the XBounds value. This ensures that subsequent USceneComponents created in this loop will be placed an entire mesh length away from the previous one, forming a contiguous chain of meshes attached to scene components. Lastly, we add this new USceneComponent to FloorMeshScenes, so that we may later perform translations on the components.

Next, we will construct the mesh components; add the following code to the loop:

    // Initialize Mesh
    FString MeshName = "Mesh" + FString::FromInt(i);
    UStaticMeshComponent* thisMesh = CreateDefaultSubobject<UStaticMeshComponent>(FName(*MeshName));
    check(thisMesh);

    thisMesh->AttachTo(FloorMeshScenes[i]);
    thisMesh->SetRelativeLocation(FVector(0.0f, 0.0f, 0.0f));
    thisMesh->SetCollisionProfileName(TEXT("OverlapAllDynamic"));

    if (myMaterial.Succeeded())
    {
        thisMesh->SetStaticMesh(myMesh.Object);
        thisMesh->SetMaterial(0, myMaterial.Object);
    }

    FloorMeshes.Add(thisMesh);
} // <-- Closing for(int i = 0; i < NumRepeatingMesh; ++i)

As you can see, we have performed a similar name creation process for the UStaticMeshComponents as we did for the USceneComponents. The preceding construction process was quite simple.
We attach the mesh to the scene component so that the mesh will follow any translation we apply to its parent USceneComponent. We then ensure that the mesh's origin is centered on the USceneComponent by setting its relative location to (0.0f, 0.0f, 0.0f). Next, we ensure that the meshes do not collide with anything in the game world; we do so with the SetCollisionProfileName() function. If you remember, when we used this function earlier, we provided a profile name whose collision properties we wished the object to adopt. In our case, we wish this mesh to overlap all dynamic objects, thus we parse OverlapAllDynamic. Without this line of code, the character could collide with the moving floor meshes and be dragged along at the same speed, breaking the illusion of motion we are trying to create. Lastly, we assign the static mesh object and the material that we obtained earlier with the FObjectFinders. We also add this new mesh object to the FloorMeshes array in case we need it later, and we close the loop scope that we opened earlier.

The next thing we are going to do is create the collision box that will be used for character collisions. With the box set to block everything and the meshes set to overlap everything, we will be able to collide with the stationary box while the meshes whip past under our feet. The following code creates the box collider:

CollisionBox = CreateDefaultSubobject<UBoxComponent>(TEXT("CollisionBox"));
check(CollisionBox);

CollisionBox->AttachTo(RootComponent);
CollisionBox->SetBoxExtent(FVector(SpawnPoint, myBounds.BoxExtent.Y, myBounds.BoxExtent.Z));
CollisionBox->SetCollisionProfileName(TEXT("BlockAllDynamic"));

} // <-- Closing if(myMesh.Succeeded())

As you can see, we initialize the UBoxComponent as we always initialize components. We then attach the box to the root component, as we do not wish it to move. We also set the box extent to the length of the entire swap chain by using the SpawnPoint value as the X bounds of the collider, and we set the collision profile to BlockAllDynamic, meaning it will block any dynamic actor, such as our character. Note that we have also closed the scope of the if statement opened earlier.

With the constructor definition finished, we may as well define the accessor methods for SpawnPoint and KillPoint before we move on to the Tick() function:

float AFloor::GetKillPoint()
{
    return KillPoint;
}

float AFloor::GetSpawnPoint()
{
    return SpawnPoint;
}

AFloor::Tick()

Now it is time to write the function that will move the meshes and ensure that they move back to the start of the chain when they reach KillPoint. Add the following code to the Tick() function found in Floor.cpp:

for (auto Scene : FloorMeshScenes)
{
    Scene->AddLocalOffset(FVector(GetCustomGameMode<ABountyDashGameMode>(GetWorld())->GetInvGameSpeed(), 0.0f, 0.0f));

    if (Scene->GetComponentTransform().GetLocation().X <= KillPoint)
    {
        Scene->SetRelativeLocation(FVector(SpawnPoint, 0.0f, 0.0f));
    }
}

Here, we use a C++11 range-based for loop. This means that, for each element inside FloorMeshScenes, the loop will populate the Scene handle of type auto with a pointer to whatever type is contained by FloorMeshScenes; in this case, USceneComponent*. For every scene component contained within FloorMeshScenes, we add a local offset each frame. The amount we offset each frame depends on the current game speed, which we get from the game mode via the template function that we wrote earlier.
As you can see, we have specified the template function to be of the ABountyDashGameMode type, so we have access to the BountyDash game mode functionality. We have done this so that the floor moves faster under the player's feet as the speed of the game increases.

The next thing we do is check the X value of the scene component's location. If this value is less than or equal to the value stored in KillPoint, we reposition the scene component back to the spawn point. As we attached the meshes to the USceneComponents earlier, the meshes translate with the scene components. Lastly, ensure that you have added #include "BountyDashGameMode.h" to the .cpp include list.

Placing the floor in the level!

We are done making the floor! Compile the code and return to the level editor; we can now place this new floor object in the level. Delete the static mesh that replaced our earlier box brushes, then drag and drop the Floor object into the scene. The Floor object can be found under the C++ classes folder of the Content Browser. Select the floor in the level, and ensure that its location is set to (0.0f, 0.0f, -100.0f). This will place the floor just below the player's feet, around the origin. Also, ensure that the ATargetPoints that we placed earlier are in the right positions above the lanes. With all of this in place, you should be able to press Play and observe the floor moving underneath the player indefinitely. You will see something similar to this:

You will notice that as you move between the lanes by pressing A and D, the player maintains the X position of the target points but nicely travels to the center of each lane.

Creating the obstacles

The next step for this project is to create the obstacles that will come flying at the player. These obstacles are going to be very simple and contain only a few members and functions. They are used only to serve as a blockade for the player, and all of the collision with the obstacles will be handled by the player itself. Use the class wizard to create a new class named Obstacle, and inherit this object from AActor. Once the class has been generated, modify the class definition found in Obstacle.h so that it appears as follows:

UCLASS(BlueprintType)
class BOUNTYDASH_API AObstacle : public AActor
{
    GENERATED_BODY()

    float KillPoint;

public:
    // Sets default values for this actor's properties
    AObstacle();

    // Called when the game starts or when spawned
    virtual void BeginPlay() override;

    // Called every frame
    virtual void Tick(float DeltaSeconds) override;

    void SetKillPoint(float point);
    float GetKillPoint();

protected:
    UFUNCTION()
    virtual void MyOnActorOverlap(AActor* otherActor);

    UFUNCTION()
    virtual void MyOnActorEndOverlap(AActor* otherActor);

public:
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    USphereComponent* Collider;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    UStaticMeshComponent* Mesh;
};

You will notice that the class has been declared with the BlueprintType specifier! This object is simple enough to justify extension into Blueprint, as there is no new learning to be found within this simple object, and we can use Blueprint for convenience. For this class, we have added a private member called KillPoint that will be used to determine when an AObstacle should destroy itself. We have also added accessor methods for this private member.
You will notice that we have added the MyOnActorOverlap and MyOnActorEndOverlap functions, which we will bind to the appropriate delegates so that we can provide custom collision responses for this object. The definitions of these functions are not too complicated either. Ensure that you have included #include "BountyDashGameMode.h" in Obstacle.cpp. Then, we can begin filling out our function definitions; the following is the code we will use for the constructor:

AObstacle::AObstacle()
{
    PrimaryActorTick.bCanEverTick = true;

    Collider = CreateDefaultSubobject<USphereComponent>(TEXT("Collider"));
    check(Collider);

    RootComponent = Collider;
    Collider->SetCollisionProfileName("OverlapAllDynamic");

    Mesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
    check(Mesh);
    Mesh->AttachTo(Collider);
    Mesh->SetCollisionResponseToAllChannels(ECR_Ignore);
    KillPoint = -20000.0f;

    OnActorBeginOverlap.AddDynamic(this, &AObstacle::MyOnActorOverlap);
    OnActorEndOverlap.AddDynamic(this, &AObstacle::MyOnActorEndOverlap);
}

The only things of note within this constructor are that we again set the mesh of this object to ignore the collision response on all channels, meaning the mesh will not affect collision in any way, and that we initialize KillPoint with a default value of -20000.0f. Following that, we bind the custom MyOnActorOverlap and MyOnActorEndOverlap functions to the appropriate delegates.

The Tick() function of this object is responsible for translating the obstacle during play. Add the following code to the Tick function of AObstacle:

void AObstacle::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);
    float gameSpeed = GetCustomGameMode<ABountyDashGameMode>(GetWorld())->GetInvGameSpeed();

    AddActorLocalOffset(FVector(gameSpeed, 0.0f, 0.0f));

    if (GetActorLocation().X < KillPoint)
    {
        Destroy();
    }
}

As you can see, the Tick function adds an offset to the AObstacle each frame along the X axis via the AddActorLocalOffset function. The value of the offset is determined by the game speed set in the game mode; again, we use the template function that we created earlier to get the game mode and call GetInvGameSpeed(). The AObstacle is also responsible for its own destruction: upon passing the maximum bound defined by KillPoint, the AObstacle destroys itself.

The last things we need to add are the function definitions for the overlap functions and the KillPoint accessors:

void AObstacle::MyOnActorOverlap(AActor* otherActor)
{
}

void AObstacle::MyOnActorEndOverlap(AActor* otherActor)
{
}

void AObstacle::SetKillPoint(float point)
{
    KillPoint = point;
}

float AObstacle::GetKillPoint()
{
    return KillPoint;
}

Now, let's abstract this class into Blueprint. Compile the code and go back to the game editor. Within the content folder, create a new Blueprint object that inherits from the Obstacle class that we just made, and name it RockObstacleBP. Within this Blueprint, we need to make some adjustments. Select the collider component that we created, and expand the Shape section in the Details panel. Change the Sphere Radius property to 100.0f. Next, select the mesh component and expand the Static Mesh section. From the provided drop-down menu, choose the SM_Rock mesh.
Next, expand the Transform section of the mesh component's Details panel and match these values:

You should end up with an object that looks similar to this:

Spawning actors from C++

Despite the obstacles being fairly easy to implement from a C++ standpoint, the complication usually comes from the spawning system that we will use to create these objects in the game. We will leverage a system similar to the player's movement, by basing the spawn locations on the ATargetPoints that are already in the scene. We can then randomly select a spawn target whenever we require a new object to spawn. Open the class wizard now, and create a class that inherits from AActor called ObstacleSpawner. We inherit from AActor because, even though this object does not have a physical presence in the scene, we still require the ObstacleSpawner to tick.

The first issue we are going to encounter is that our current target points give us a good indication of the Y position for our spawns, but the X position is centered around the origin. This is undesirable for the obstacle spawn point, as we would like to spawn these objects a fair distance away from the player, so that we can do two things: one, obscure the popping of the spawning objects with fog; and two, present the player with enough obstacle information so that they may dodge them at high speeds. This means we are going to require some information from our floor object; we can use the KillPoint and SpawnPoint members of the floor to determine the spawn and kill locations of the obstacles.

Obstacle spawner class definition

This will be another fairly simple object. It requires a BeginPlay function, so that we may find the floor and all of the target points that we need for spawning, and a Tick function, so that we may process the spawning logic on a per-frame basis. Thankfully, both of these are provided by default by the class wizard. We have also created a protected SpawnObstacle() function to group that functionality together. We also require a few UPROPERTY-declared members that can be edited from the level editor: a list of obstacle types to spawn, from which we can randomly select a type each time we spawn an obstacle; the spawn targets (though we will populate these when play begins); and a spawn time that we can set as the interval between obstacle spawns. To accommodate all of this, navigate to ObstacleSpawner.h now and modify the class definition to match the following:

UCLASS()
class BOUNTYDASH_API AObstacleSpawner : public AActor
{
    GENERATED_BODY()

public:
    // Sets default values for this actor's properties
    AObstacleSpawner();

    // Called when the game starts or when spawned
    virtual void BeginPlay() override;

    // Called every frame
    virtual void Tick(float DeltaSeconds) override;

protected:
    void SpawnObstacle();

public:
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    TArray<TSubclassOf<class AObstacle>> ObstaclesToSpawn;

    UPROPERTY()
    TArray<class ATargetPoint*> SpawnTargets;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    float SpawnTimer;

    UPROPERTY()
    USceneComponent* Scene;

private:
    float KillPoint;
    float SpawnPoint;
    float TimeSinceLastSpawn;
};

I have again used TArrays as our containers for the obstacle objects and spawn targets. As you can see, the obstacle list is of type TSubclassOf<class AObstacle>.
This means that the objects in this TArray will be class types that inherit from AObstacle. This is very useful, as not only will we be able to use these array elements for spawn information, but the engine will also filter our search when we add object types to this array from the editor. With these class types, we will be able to spawn objects that inherit from AObstacle (including Blueprints) when required.

We have also included a scene component, so that we can arbitrarily place the AObstacleSpawner somewhere in the level, and two private members that will hold the kill and spawn points of the objects. The last element is a float timer that will be used to gauge how much time has passed since the last obstacle spawn.

Obstacle spawner function definitions

Okay, now we can create the body of the AObstacleSpawner object. Before we do so, ensure that the include list in ObstacleSpawner.cpp is as follows:

#include "BountyDash.h"
#include "BountyDashGameMode.h"
#include "Engine/TargetPoint.h"
#include "Floor.h"
#include "Obstacle.h"
#include "ObstacleSpawner.h"

Following this, we have a very simple constructor that establishes the root scene component:

// Sets default values
AObstacleSpawner::AObstacleSpawner()
{
    // Set this actor to call Tick() every frame. You can turn this off to improve performance if you don't need it.
    PrimaryActorTick.bCanEverTick = true;

    Scene = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
    check(Scene);
    RootComponent = Scene;

    SpawnTimer = 1.5f;
}

Following the constructor, we have BeginPlay(). Inside this function, we are going to do a few things. Firstly, we perform the same in-level object retrieval that we executed in ABountyDashCharacter to get the locations of the ATargetPoints. However, this object also requires information from the floor object in the level, so we retrieve the floor object the same way we did the ATargetPoints, by utilizing TActorIterators, and get the required kill and spawn point information. We also set TimeSinceLastSpawn to SpawnTimer, so that we begin spawning objects instantaneously:

// Called when the game starts or when spawned
void AObstacleSpawner::BeginPlay()
{
    Super::BeginPlay();

    for (TActorIterator<ATargetPoint> TargetIter(GetWorld()); TargetIter; ++TargetIter)
    {
        SpawnTargets.Add(*TargetIter);
    }

    for (TActorIterator<AFloor> FloorIter(GetWorld()); FloorIter; ++FloorIter)
    {
        if (FloorIter->GetWorld() == GetWorld())
        {
            KillPoint = FloorIter->GetKillPoint();
            SpawnPoint = FloorIter->GetSpawnPoint();
        }
    }
    TimeSinceLastSpawn = SpawnTimer;
}

The next function we will look at in detail is Tick(), which is responsible for the bulk of the AObstacleSpawner functionality. Within this function, we need to check whether a new object needs to be spawned, based on the amount of time that has passed since we last spawned an object. Add the following code to AObstacleSpawner::Tick(), underneath Super::Tick():

TimeSinceLastSpawn += DeltaTime;

float trueSpawnTime = SpawnTimer / (float)GetCustomGameMode<ABountyDashGameMode>(GetWorld())->GetGameLevel();

if (TimeSinceLastSpawn > trueSpawnTime)
{
    TimeSinceLastSpawn = 0.0f;
    SpawnObstacle();
}

Here, we accumulate the delta time in TimeSinceLastSpawn, so that we may gauge how much real time has passed since the last obstacle was spawned. We then calculate the trueSpawnTime of the AObstacleSpawner.
This is based on the base SpawnTimer value, divided by the current game level retrieved from the game mode via the GetCustomGameMode() template function. This means that as the game level increases and the obstacles begin to move faster, the obstacle spawner also spawns objects at a faster rate. If the accumulated TimeSinceLastSpawn is greater than the calculated trueSpawnTime, we call SpawnObstacle() and reset the TimeSinceLastSpawn timer to 0.0f.

Getting information from components in C++

Now we need to write the spawn function. This spawn function is going to have to retrieve some information from the components of the object that is being spawned. As we have allowed our AObstacle class to be extended into Blueprint, we have also exposed the object to a level of versatility that we must compensate for in the code base. With the ability to customize the mesh and the bounds of the sphere collider that make up any given obstacle, we must be sure to spawn the obstacle in the right place, regardless of its size! To do this, we need to obtain information from the components contained within the spawned AObstacle class. This can be done via GetComponentByClass(). It takes the UClass* of the component you wish to retrieve, and it returns a handle to the component if it has been found. We can then cast this handle to the appropriate type and retrieve the information that we require!

Let's begin detailing the spawn function; add the following code to ObstacleSpawner.cpp:

void AObstacleSpawner::SpawnObstacle()
{
    if (SpawnTargets.Num() > 0 && ObstaclesToSpawn.Num() > 0)
    {
        short Spawner = FMath::Rand() % SpawnTargets.Num();
        short Obstacle = FMath::Rand() % ObstaclesToSpawn.Num();
        float CapsuleOffset = 0.0f;

Here, we ensure that both of the arrays have been populated with at least one valid member. We then generate the random lookup integers that we will use to access the SpawnTargets and ObstaclesToSpawn arrays. This means that every time we spawn an object, both the lane spawned in and the type of the object are randomized. We do this by generating a random value with FMath::Rand(), and then finding the remainder of this number divided by the number of elements in the corresponding array. The result is a random number between zero and the number of objects in the array minus one, which is perfect for our needs. Continue by adding the following code:

        FActorSpawnParameters SpawnInfo;

        FTransform myTrans = SpawnTargets[Spawner]->GetTransform();
        myTrans.SetLocation(FVector(SpawnPoint, myTrans.GetLocation().Y, myTrans.GetLocation().Z));

Here, we use a struct called FActorSpawnParameters. The default values of this struct are fine for our purposes; we will soon be parsing it to a function in our world context. After this, we create a transform that we will also provide to the world context. The transform of the spawner will suffice, apart from the X component of the location, which we need to adjust so that it matches the spawn point we retrieved from the floor. We do this by setting the X component of the spawn transform's location to the SpawnPoint value that we received earlier, while keeping the Y and Z components of the current location.

The next thing we must do is actually spawn the object! We are going to utilize a template function called SpawnActor(), which can be called from the UWorld* handle returned by GetWorld().
This function spawns an object of a specified type in the game world at a specified location. The type of the object is determined by the UClass* handle that holds the object type we wish to spawn, while the transform and spawn parameters of the object are determined by the corresponding input parameters of SpawnActor(). The template type of the function dictates the type of object that is spawned and the handle that is returned from the function. In our case, we require an AObstacle to be spawned. Add the following code to the SpawnObstacle function:

        AObstacle* newObs = GetWorld()->SpawnActor<AObstacle>(ObstaclesToSpawn[Obstacle], myTrans, SpawnInfo);

        if (newObs)
        {
            newObs->SetKillPoint(KillPoint);

As you can see, we are using SpawnActor() with a template type of AObstacle. We use the random lookup integer that we generated before to retrieve the class type from the ObstaclesToSpawn array, and we provide the transform and spawn parameters that we created earlier to SpawnActor(). If the new AObstacle was created successfully, we save the return value of this function in an AObstacle handle, which we use to set the kill point of the obstacle via SetKillPoint().

We must now adjust the height of this object, as it will more than likely spawn in the ground in its current state. We need to get access to the sphere component of the obstacle so that we may get its radius and adjust the position of the obstacle so that it sits above the ground. We can use the sphere as a reliable resource, as it is the root component of the obstacle; thus we can move the obstacle entirely out of the ground, assuming the base of the sphere lines up with the base of the mesh. Add the following code to the SpawnObstacle() function:

            USphereComponent* obsSphere = Cast<USphereComponent>(newObs->GetComponentByClass(USphereComponent::StaticClass()));

            if (obsSphere)
            {
                newObs->AddActorLocalOffset(FVector(0.0f, 0.0f, obsSphere->GetUnscaledSphereRadius()));
            }
        } // <-- Closing if(newObs)
    } // <-- Closing if(SpawnTargets.Num() > 0 && ObstaclesToSpawn.Num() > 0)
}

Here, we get the sphere component out of the newObs handle that we obtained from SpawnActor(), via GetComponentByClass(), which was mentioned previously. We pass the class type of USphereComponent to the function via the static function StaticClass(). This returns a valid handle if newObs does indeed contain a USphereComponent (which we know it does). We then cast the result of this function to USphereComponent* and save it in the obsSphere handle. If this handle is valid, we offset the actor that we just spawned along the Z axis by the unscaled radius of the sphere component. This results in all spawned obstacles sitting in line with the top of the floor!

Ensuring the obstacle spawner works

Okay, now is the time to bring the obstacle spawner into the scene. Be sure to compile the code, then navigate to the C++ classes folder of the Content Browser. From here, drag and drop the ObstacleSpawner into the scene. Select the new ObstacleSpawner via the World Outliner, and address the Details panel. You will see the exposed members under the Obstacle Spawner section, like so:

Now, to add the RockObstacleBP that we made earlier to the ObstaclesToSpawn array, press the small white plus next to the property in the Details panel; this will add an element to the TArray that you will then be able to customize. Select the drop-down menu that currently says None. Within this menu, search for RockObstacleBP and select it.
Ensuring the obstacle spawner works

Okay, now is the time to bring the obstacle spawner into the scene. Be sure to compile the code, then navigate to the C++ classes folder of the content browser. From here, drag and drop ObstacleSpawner into the scene. Select the new ObstacleSpawner via the World Outliner, and open the Details panel. You will see the exposed members under the ObstacleSpawner section like so:

Now, to add the RockObstacleBP that we made earlier to the ObstaclesToSpawn array, press the small white plus next to the property in the Details panel; this will add an element to the TArray that you will then be able to customize. Select the drop-down menu that currently says None. Within this menu, search for RockObstacleBP and select it.

If you wish to create and add more obstacle types to this array, feel free to do so. We do not need to add any members to the Spawn Targets property, as this will happen automatically. Now, press Play and behold a legion of moving rocks.

Summary

This article gave an overview of various development tricks associated with Unreal Engine 4.

Resources for Article:

Further resources on this subject: Special Effects [article] Bang Bang – Let's Make It Explode [article] The Game World [article]


Adding Media to Our Site

Packt
21 Jun 2016
19 min read
In this article by Neeraj Kumar et al., authors of the book Drupal 8 Development Beginner's Guide - Second Edition, we learn that a text-only site is not going to hold the interest of visitors; a site needs some pizzazz and some spice! One way to add some pizzazz to your site is by adding some multimedia content, such as images, video, audio, and so on. But we don't just want to add a few images here and there; in fact, we want an immersive and compelling multimedia experience that is easy to manage, configure, and extend. The File entity (https://drupal.org/project/file_entity) module for Drupal 8 will enable us to manage files very easily.

In this article, we will discover how to integrate the File entity module to add images to our d8dev site, and will explore compelling ways to present images to users. This will include taking a look at the integration of a lightbox-type UI element for displaying the File-entity-module-managed images, and learning how we can create custom image styles through the UI and code. The following topics will be covered in this article:

The File entity module for Drupal 8
Adding a Recipe image field to your content types
Code example—image styles for Drupal 8
Displaying recipe images in a lightbox popup
Working with Drupal issue queues

(For more resources related to this topic, see here.)

Introduction to the File entity module

As per the module page at https://www.drupal.org/project/file_entity: File entity provides interfaces for managing files. It also extends the core file entity, allowing files to be fieldable, grouped into types, viewed (using display modes) and formatted using field formatters. File entity integrates with a number of modules, exposing files to Views, Entity API, Token and more. In our case, we need this module to easily edit image properties such as Title text and Alt text; these properties will then be used as captions in the Colorbox popup.

Working with dev versions of modules

There are times when you come across a module that introduces some major new features and is fairly stable, but not quite ready for use on a live/production website, and is therefore available only as a dev version. This is a perfect opportunity to provide a valuable contribution to the Drupal community. Just by installing and using a dev version of a module (in your local development environment, of course), you are providing valuable testing for the module maintainers. Of course, you should enter an issue in the project's issue queue if you discover any bugs or would like to request any additional features. Also, using a dev version of a module presents you with the opportunity to take on some custom Drupal development. However, it is important that you remember that a module is released as a dev version for a reason, and it is most likely not stable enough to be deployed on a public-facing site. Our use of the File entity module in this article is a good example of working with the dev version of a module.

One thing to note: Drush will download official and dev module releases. However, at this point in time, there is no official port of the File entity module for Drupal 8, so we will use the unofficial one, which lives on GitHub (https://github.com/drupal-media/file_entity). In the next step, we will be downloading the dev release from GitHub.
Time for action – installing a dev version of the File entity module

In Drupal, we use Drush to download and enable any module/theme, but there is no official port yet for the File entity module in Drupal 8, so we use the unofficial one, which lives on GitHub at https://github.com/drupal-media/file_entity:

Open the Terminal (Mac OS X) or Command Prompt (Windows) application, and go to the root directory of your d8dev site. Go inside the modules folder and download the File entity module from GitHub. We use the git command to download this module:

$ git clone https://github.com/drupal-media/file_entity

Another way is to download a .zip file from https://github.com/drupal-media/file_entity and extract it in the modules folder. Next, on the Extend page (admin/modules), enable the File entity module.

What just happened?

We enabled the File entity module, and learned how to download and install it from GitHub.

A new recipe for our site

In this article, we are going to create a new recipe: Thai Basil Chicken. If you would like to have more real content to use as an example, feel free to try the recipe out!

Name: Thai Basil Chicken
Description: A spicy, flavorful version of one of my favorite Thai dishes
RecipeYield: Four servings
PrepTime: 25 minutes
CookTime: 20 minutes
Ingredients:
One pound boneless chicken breasts
Two tablespoons of olive oil
Four garlic cloves, minced
Three tablespoons of soy sauce
Two tablespoons of fish sauce
Two large sweet onions, sliced
Five cloves of garlic
One yellow bell pepper
One green bell pepper
Four to eight Thai peppers (depending on the level of hotness you want)
One-third cup of dark brown sugar dissolved in one cup of hot water
One cup of fresh basil leaves
Two cups of Jasmine rice

Instructions: Prepare the Jasmine rice according to the directions. Heat the olive oil in a large frying pan over medium heat for two minutes. Add the chicken to the pan and then pour on the soy sauce. Cook the chicken until there is no visible pinkness, approximately 8 to 10 minutes. Reduce the heat to medium low. Add the garlic and fish sauce, and simmer for 3 minutes. Next, add the Thai chilies, onion, and bell pepper and stir to combine. Simmer for 2 minutes. Add the brown sugar and water mixture. Stir to mix, and then cover. Simmer for 5 minutes. Uncover, add basil, and stir to combine. Serve over rice.

Time for action – adding a Recipe images field to our Recipe content type

We will use the Manage fields administrative page to add a Media field to our d8dev Recipe content type:

Open up the d8dev site in your favorite browser, click on the Structure link in the Admin toolbar, and then click on the Content types link. Next, on the Content types administrative page, click on the Manage fields link for your Recipe content type. Now, on the Manage fields administrative page, click on the Add field link. On the next screen, select Image from the Add a new field dropdown and set the Label to Recipe images. Click on the Save field settings button. Next, on the Field settings page, select Unlimited as the allowed number of values, and click on the Save field settings button. On the next screen, leave all settings as they are and click on the Save settings button. Next, on the Manage form display page, select the Editable file widget for the Recipe images field and click on the Save button. Now, on the Manage display page, select Hidden as the label for the Recipe images field. Click on the settings icon, select Medium (220x220) as the image style, and click on the Update button.
At the bottom, click on the Save button.

Let's add some Recipe images to a recipe. Click on the Content link in the menu bar, and then click on Add content and Recipe. On the next screen, fill in the title as Thai Basil Chicken and the other fields as mentioned in the preceding recipe details. Now, scroll down to the new Recipe images field that you have added. Click on the Add a new file button, or drag and drop the images that you want to upload. Then click on the Save and Publish button.

Reload your Thai Basil Chicken recipe page, and you should see something similar to the following: all of the images are stacked on top of each other. So, we will add the following CSS just under the styles for field--name-field-recipe-images and field--type-recipe-images in the /modules/d8dev/styles/d8dev.css file, to lay out the Recipe images in more of a grid:

.node .field--type-recipe-images {
  float: none !important;
}
.field--name-field-recipe-images .field__item {
  display: inline-flex;
  padding: 6px;
}

Now we will load this d8dev.css file to apply this grid style. In Drupal 8, loading a CSS file follows a process:

Save the CSS to a file.
Define a library, which can contain the CSS file.
Attach the library to a render array in a hook.

So, we have already saved a CSS file called d8dev.css under the styles folder; now we will define a library. To define one or more (asset) libraries, add a *.libraries.yml file to your module or theme folder. Our module is named d8dev, so the filename should be d8dev.libraries.yml. Each library in the file is an entry detailing CSS, like this:

d8dev:
  version: 1.x
  css:
    theme:
      styles/d8dev.css: {}

Now, we define the hook_page_attachments() function to load the CSS file. Add the following code inside the d8dev.module file. Use this hook when you want to conditionally add attachments to a page:

/**
 * Implements hook_page_attachments().
 */
function d8dev_page_attachments(array &$attachments) {
  $attachments['#attached']['library'][] = 'd8dev/d8dev';
}

Now, we will need to clear the cache for our d8dev site by going to Configuration, clicking on the Performance link, and then clicking on the Clear all caches button. Reload your Thai Basil Chicken recipe page, and you should see something similar to the following:

What just happened?

We added and configured a media-based field for our Recipe content type. We updated the d8dev module with custom CSS code to lay out the Recipe images in more of a grid format, and we also looked at how to attach a CSS file through a module.
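As an aside, everything we just did through the Manage fields UI can also be done in code. The following is a rough sketch, not part of the book's steps, that uses Drupal 8's field API to create a comparable image field; the machine names (field_recipe_images and recipe) are assumptions that mirror the UI labels above:

use Drupal\field\Entity\FieldStorageConfig;
use Drupal\field\Entity\FieldConfig;

// Field storage: an image field on nodes with unlimited cardinality.
FieldStorageConfig::create([
  'field_name' => 'field_recipe_images',
  'entity_type' => 'node',
  'type' => 'image',
  'cardinality' => -1, // -1 means unlimited values.
])->save();

// Attach the field to the Recipe content type.
FieldConfig::create([
  'field_name' => 'field_recipe_images',
  'entity_type' => 'node',
  'bundle' => 'recipe',
  'label' => 'Recipe images',
])->save();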
Creating a custom image style

Before we configure the Colorbox feature, we are going to create a custom image style to use when we configure the Colorbox content preview settings. Image styles for Drupal 8 are part of the core Image module. The core Image module provides three default image styles (thumbnail, medium, and large), as seen in the following Image style configuration page.

Now, we are going to add a fourth, custom image style: an image style that will resize our images somewhere between the 100 x 75 thumbnail style and the 220 x 165 medium style. We will walk through the process of creating an image style through the Image style administrative page, and also the process of programmatically creating an image style.

Time for action – adding a custom image style through the image style administrative page

First, we will use the Image style administrative page (admin/config/media/image-styles) to create a custom image style:

Open the d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the Add style link. Next, enter small as the Image style name of your custom image style, and click on the Create new style button. Now, we will add the one and only effect for our custom image style by selecting Scale from the EFFECT options and then clicking on the Add button. On the Add Scale effect page, enter 160 for the width and 120 for the height. Leave the Allow Upscaling checkbox unchecked, and click on the Add effect button. Finally, just click on the Update style button on the Edit small style administrative page, and we are done. We now have a new custom small image style that we will be able to use to resize images for our site.

What just happened?

We learned how easy it is to add a custom image style with the administrative UI. Now, we are going to see how to add a custom image style by writing some code. The advantage of having code-based custom image styles is that it allows us to utilize a source code repository, such as Git, to manage and deploy our custom image styles between different environments. For example, it would allow us to use Git to promote image styles from our development environment to a live production website. Otherwise, the manual configuration that we just did would have to be repeated for every environment.

Time for action – creating a programmatic custom image style

Now, we will see how we can add a custom image style with code:

The first thing we need to do is delete the small image style that we just created. So, open your d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and then click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the delete link for the small image style that we just added. Next, on the Optionally select a style before deleting small page, leave the default value for the Replacement style select list as No replacement, just delete, and click on the Delete button.

In Drupal 8, image styles have been converted from an array to an object that extends ConfigEntity. All image styles provided by modules need to be defined as YAML configuration files in the config/install folder of each module. Suppose our module is located at modules/d8dev. Create a file called modules/d8dev/config/install/image.style.small.yml with the following content:

uuid: b97a0bd7-4833-4d4a-ae05-5d4da0503041
langcode: en
status: true
dependencies: { }
name: small
label: small
effects:
  c76016aa-3c8b-495a-9e31-4923f1e4be54:
    uuid: c76016aa-3c8b-495a-9e31-4923f1e4be54
    id: image_scale
    weight: 1
    data:
      width: 160
      height: 120
      upscale: false

We need to use a UUID generator to assign unique IDs to image style effects. Do not copy/paste UUIDs from other pieces of code or from other image styles! The name of our custom style, small, is provided as both the name and the label. For each effect that we want to add to our image style, we specify the effect we want to use as the id key, and then pass in values as the settings for the effect. In the case of the image_scale effect that we are using here, we pass in the width, height, and upscale settings. Finally, the value for the weight key allows us to specify the order in which the effects should be processed; although it is not very useful when there is only one effect, it becomes important when there are multiple effects.
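Once the small style exists, whether created through the UI or from the YAML above, it can also be used from code. Here is a short, illustrative sketch that loads our style and builds the URL of a resized derivative for a file URI (the URI itself is just an example):

use Drupal\image\Entity\ImageStyle;

// Load the custom style and generate the URL of the scaled derivative.
$style = ImageStyle::load('small');
$derivative_url = $style->buildUrl('public://recipes/thai-basil-chicken.jpg');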
Now, we will need to uninstall and reinstall our d8dev module by going to the Extend page. On the next screen, click on the Uninstall tab, check the d8dev checkbox, and click on the Uninstall button. Now, click on the List tab, check d8dev, and click on the Install button. Then, go back to the Image styles administrative page and you will see our programmatically created small image style.

What just happened?

We created a custom image style with some custom code. We will now configure our Recipe content type to use our custom image style for images added to the Recipe images field.

Integrating the Colorbox and File entity modules

The File entity module provides interfaces for managing files. For images, we will be able to edit the Title text, Alt text, and Filenames easily. However, the images are taking up quite a bit of room. Let's create a pop-up lightbox gallery and show the images in a popup. When someone clicks on an image, a lightbox will pop up and allow the user to cycle through larger versions of all associated images.

Time for action – installing the Colorbox module

Before we can display Recipe images in a Colorbox, we need to download and enable the module:

Open the Mac OS X Terminal or Windows Command Prompt, and change to the d8dev directory. Next, use Drush to download and enable the current dev release of the Colorbox module (http://drupal.org/project/colorbox):

$ drush dl colorbox-8.x-1.x-dev
Project colorbox (8.x-1.x-dev) downloaded to /var/www/html/d8dev/modules/colorbox. [success]
$ drush en colorbox
The following extensions will be enabled: colorbox
Do you really want to continue? (y/n): y
colorbox was enabled successfully. [ok]

The Colorbox module depends on the Colorbox jQuery plugin available at https://github.com/jackmoore/colorbox. The Colorbox module includes a Drush task that will download the required jQuery plugin to the /libraries directory:

$ drush colorbox-plugin
Colorbox plugin has been installed in libraries [success]

Next, we will look into the Colorbox display formatter. Click on the Structure link in the Admin toolbar, then click on the Content types link, and finally click on the manage display link for your Recipe content type under the Operations dropdown. Next, click on the FORMAT select list for the Recipe images field, and you will see an option for Colorbox. Select Colorbox, and you will see the settings change. Then, click on the settings icon. Now, you will see the settings for Colorbox. Select small for both Content image style and Content image style for first image in the dropdowns, and use the default settings for the other options. Click on the Update button, and then on the Save button at the bottom.

Reload our Thai Basil Chicken recipe page, and you should see something similar to the following (with the new image style, small). Now, click on any image, and you will see the image loaded in the Colorbox popup.

We have learned more about images for Colorbox, but Colorbox also supports videos. Another way to add some spice to our site is by adding videos, and there are several modules available to work with Colorbox for videos.
The Video Embed Field module creates a simple field type that allows you to embed videos from YouTube and Vimeo and show their thumbnail previews simply by entering the video's URL. So you can try this module to add some pizzazz to your site!

What just happened?

We installed the Colorbox module and enabled it for the Recipe images field on our custom Recipe content type. Now, we can easily add images to our d8dev content with the Colorbox pop-up feature.

Working with Drupal issue queues

Drupal has its own issue queues for working with a team of developers around the world. If you need help with a specific project, whether core, a module, or a theme, you should go to the issue queue, where the maintainers, users, and followers of the module/theme communicate. The issue page provides a filter option, where you can search for specific issues based on Project, Assigned, Submitted by, Followers, Status, Priority, Category, and so on. We can find issues at https://www.drupal.org/project/issues/colorbox. Here, replace colorbox with the specific module name. For more information, see https://www.drupal.org/issue-queue.

In our case, we have one issue with the Colorbox module. Captions are working for the Automatic and Content title properties, but are not working for the Alt text and Title text properties. To check this issue, go to Structure | Content types and click on Manage display. On the next screen, click on the settings icon for the Recipe images field. Now select Title text or Alt text for the Caption option and click on the Update button. Finally, click on the Save button. Reload the Thai Basil Chicken recipe page, and click on any image. It opens in the popup, but we cannot see any captions. Make sure you have the Title text and Alt text properties updated for the Recipe images field of the Thai Basil Chicken recipe.

Time for action – creating an issue for the Colorbox module

Now, before we go and try to figure out how to fix this functionality for the Colorbox module, let's create an issue:

On https://www.drupal.org/project/issues/colorbox, click on the Create a new issue link. On the next screen, we will see a form. We will fill in all the required fields: Title, Category as Bug report, Version as 8.x-1.x-dev, Component as Code, and the Issue summary field. Once I submitted my form, an issue was created at https://www.drupal.org/node/2645160. You should see an issue on Drupal (https://www.drupal.org/) like this:

Next, the maintainers of the Colorbox module will look into this issue and reply accordingly. Actually, @frjo replied saying "I have never used that module but if someone who does sends in a patch I will take a look at it." He is a contributor to this module, so we will wait for some time and see whether someone can fix this issue by providing a patch or replying with useful comments. If someone does provide a patch, we will have to apply it to the Colorbox module. Instructions for applying patches are available on Drupal at https://www.drupal.org/patch/apply.

What just happened?

We created an issue in the Colorbox module's issue queue, and looked at what the required fields are and how to fill them in to create an issue for a Drupal module.
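If a patch does eventually appear on the issue, applying it locally is usually a one-liner. A quick sketch, assuming a hypothetical patch file downloaded from the issue thread into the module folder:

$ cd modules/colorbox
$ git apply colorbox-captions-2645160.patch   # file name is illustrative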
Summary

In this article, we looked at a way to use our d8dev site with multimedia, created image styles using some custom code, and learned some new ways of interacting with the Drupal developer community. We also worked with the Colorbox module to add images to our d8dev content with the Colorbox pop-up feature. Lastly, we looked into using a custom module to work with custom CSS files.

Resources for Article:

Further resources on this subject: Installing Drupal 8 [article] Drupal 7 Social Networking: Managing Users and Profiles [article] Drupal 8 and Configuration Management [article]

Communication and Network Security

Packt
21 Jun 2016
7 min read
In this article by M. L. Srinivasan, the author of the book CISSP in 21 Days, Second Edition, we cover the communication and network security domain, which deals with the security of voice and data communications through local area, wide area, and remote access networking. Candidates are expected to have knowledge in the areas of secure communications; securing networks; threats, vulnerabilities, attacks, and countermeasures to communication networks; and protocols that are used in remote access. (For more resources related to this topic, see here.)

Observe the following diagram. It represents the seven layers of the OSI model. This article covers protocols and security in the fourth layer, which is the Transport layer:

Transport layer protocols and security

The Transport layer does two things. One is to pack the data given out by applications into a format that is suitable for transport over the network, and the other is to unpack the data received from the network into a format suitable for applications. In this layer, some of the important protocols are Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), Datagram Congestion Control Protocol (DCCP), and Fiber Channel Protocol (FCP). The process of packaging the data packets received from the applications is called encapsulation, and the output of such a process is called a datagram. Similarly, the process of unpacking the datagram received from the network is called decapsulation.

When moving from the seventh layer down to the fourth one, the fourth layer's header is placed on the data, and it becomes a datagram. When the datagram is encapsulated with the third layer's header, it becomes a packet; the packet encapsulated with the second layer's header becomes a frame, which is put on the wire as bits at the first layer. The following section describes some of the important protocols in this layer, along with security concerns and countermeasures.

Transmission Control Protocol (TCP)

This is a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol. A protocol that guarantees the delivery of datagrams (packets) to the destination application by way of a suitable mechanism (for example, the three-way handshake of SYN, SYN-ACK, and ACK in TCP) is called a connection-oriented protocol. The reliability of the datagram delivery of such a protocol is high due to the acknowledgment by the receiver.

This protocol has two functions. The primary function of TCP is the transmission of datagrams between applications, and the secondary one concerns the controls that are necessary for ensuring reliable transmissions. Applications where delivery needs to be assured, such as e-mail, the World Wide Web (WWW), and file transfer, use TCP for transmission.

Threats, vulnerabilities, attacks, and countermeasures

One of the common threats to TCP is service disruption. A common vulnerability is half-open connections exhausting the server resources. Denial-of-Service attacks such as TCP SYN attacks, as well as connection hijacking attacks such as IP spoofing, are possible.

A half-open connection is a vulnerability in the TCP implementation. TCP uses a three-way handshake to establish or terminate connections. Refer to the following diagram: In a three-way handshake, the client (a workstation) first sends a request to the server (for example, www.SomeWebsite.com). This is called a SYN request.
The server acknowledges the request by sending a SYN-ACK, and in the process, it creates a buffer for this connection. The client completes the handshake with a final acknowledgment, the ACK. TCP requires this setup, since the protocol needs to ensure the reliability of the packet delivery. If the client does not send the final ACK, then the connection is called half open. Since the server has created a buffer for that connection, a certain amount of memory or server resources is consumed. If thousands of such half-open connections are created maliciously, the server resources may be completely consumed, resulting in a Denial-of-Service to legitimate requests.

TCP SYN attacks technically establish thousands of half-open connections to consume the server resources. There are two actions that an attacker might take. One is that the attacker or malicious software will send thousands of SYN requests to the server and withhold the ACK. This is called SYN flooding. Depending on the capacity of the network bandwidth and the server resources, all the resources will be consumed over a span of time, resulting in a Denial-of-Service. If the source IP is blocked by some means, then the attacker or the malicious software will try to spoof the source IP addresses to continue the attack. This is called SYN spoofing.

SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie with the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on the algorithm, and sends it in the SYN-ACK. When the server receives an ACK, it checks the details and creates the connection.

A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored on the client computer's browser disk, and they are used for purposes such as authentication, session tracking, and management.
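As an illustration, on Linux servers SYN cookies are typically enabled through a kernel parameter; the exact mechanism varies by platform, so treat the following as a sketch rather than a universal recipe:

# Enable SYN cookies at runtime (Linux)
sysctl -w net.ipv4.tcp_syncookies=1

# Or persist the setting in /etc/sysctl.conf
net.ipv4.tcp_syncookies = 1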
User Datagram Protocol (UDP)

UDP is a connectionless protocol and is similar to TCP. However, UDP does not provide a delivery guarantee for data packets. A protocol that does not guarantee the delivery of datagrams (packets) to the destination is called a connectionless protocol. In other words, the final acknowledgment is not mandatory in UDP. UDP uses one-way communication. The delivery speed of datagrams over UDP is high. UDP is predominantly used where the loss of intermittent packets is acceptable, such as in video or audio streaming.

Threats, vulnerabilities, attacks, and countermeasures

Service disruptions are common threats, and validation weaknesses facilitate such threats. UDP flood attacks cause service disruptions, and controlling the UDP packet size acts as a countermeasure to such attacks.

Internet Control Message Protocol (ICMP)

ICMP is used to discover service availability in network devices, servers, and so on. ICMP expects response messages from devices or systems to confirm the service availability.

Threats, vulnerabilities, attacks, and countermeasures

Service disruptions are common threats, and validation weaknesses facilitate such threats. ICMP flood attacks cause service disruptions, and controlling the ICMP packet size acts as a countermeasure to such attacks.

Pinging is a process of sending the Internet Control Message Protocol (ICMP) ECHO_REQUEST message to servers or hosts to check whether they are up and running. In this process, the server or host on the network responds to a ping request, and such a response is called an echo. A ping of death refers to sending a malformed or oversized ICMP packet to a server to crash the system.

Other protocols in the Transport layer

Stream Control Transmission Protocol (SCTP): This is a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in UNIX-like operating systems.

Datagram Congestion Control Protocol (DCCP): As the name implies, this is a Transport layer protocol that is used for congestion control. Applications here include Internet telephony and video/audio streaming over the network.

Fiber Channel Protocol (FCP): This protocol is used in high-speed networking. One of the prominent applications here is the Storage Area Network (SAN).

A Storage Area Network (SAN) is a network architecture used to attach remote storage devices, such as tape drives and disk arrays, to the local server. This facilitates using storage devices as if they were local devices.

Summary

This article covered protocols and security in the Transport layer, which is the fourth layer of the OSI model.

Resources for Article:

Further resources on this subject: The GNS3 orchestra [article] CISSP: Vulnerability and Penetration Testing for Access Control [article] CISSP: Security Measures for Access Control [article]


Creating Multitenant Applications in Azure

Packt
21 Jun 2016
18 min read
This article, written by Roberto Freato and Marco Parenzan, is from the book Mastering Cloud Development using Microsoft Azure by Packt Publishing, and it teaches us how to create multitenant applications in Azure. This book guides you through the many efficient ways of mastering the cloud services and using Microsoft Azure and its services to their maximum capacity. (For more resources related to this topic, see here.)

A tenant is a private space for a user or a group of users in an application. A typical way to identify a tenant is by its domain name. If multiple users share a domain name, we say that these users live inside the same tenant. If a group of users uses a different reserved domain name, they live in a reserved tenant. From this, we can infer that different names are used to identify different tenants. Different domain names can imply different app instances, but we cannot say the same about deployed resources.

Multitenancy is one of the founding principles of Cloud Computing. Developers need to reach economies of scale, which allow every cloud user to scale as needed without paying for overprovisioned resources or suffering from underprovisioned resources. To do this, cloud infrastructure needs to be oversized for a single user and sized for a pool of potential users that share the same group of resources during a certain period of time.

Multitenancy is a pattern. Legacy on-premises applications tend to be single-tenant apps, shared between users, because of the lack of specific DevOps tasks: provisioning an app for every user can be a costly operation. Cloud environments invite reserving a single tenant for each user (or group of users) to enforce better security policies and to customize tenants for specific users, because all DevOps tasks can be automated via management APIs. The cloud invites reserving resource instances for a tenant and deploying a group of tenants on the same resources. In general, this is a new way of handling app deployment. We will now take a look at how to develop an app in this way.

Scenario

CloudMakers.xyz, a cloud-based development company, decided to develop a personal accountant web application: MyAccountant. Professionals or small companies can register themselves on this app as a single customer and record all of their invoices on it. A single customer represents the tenant; different companies use different tenants. Every tenant needs its own private data to enforce data security, so we will reserve a dedicated database for each tenant. Access to a single database is not an intensive task, because invoice registration will generally occur once daily. Every tenant will have its own domain name to enforce company identity. A new tenant can be created from the company portal application, where new customers register themselves, specifying the tenant name. For sample purposes, without the objective of creating production-quality styling, we use the default ASP.NET MVC templates to style and build up the apps, and focus on tenant topics.

Creating the tenant app

A tenant app is an invoice recording application. To brand the tenant, we record the tenant name in the app settings inside the web.config file:

<add key="TenantName" value="{put_your_tenant_name}" />

To keep things simple, we "brand" the application by displaying the stored tenant name in the main layout file, where the application name is displayed. The application content is represented by an Invoices page, where we record data with a CRUD process.
The entry for the Invoices page is in the Navigation bar:

<ul class="nav navbar-nav">
  <li>@Html.ActionLink("Home", "Index", "Home")</li>
  <li>@Html.ActionLink("Invoices", "Index", "Invoices")</li>
  <!-- other code omitted -->

First, we need to define a model for the application in the Models folder. As we need to store data in an Azure SQL database, we can use Entity Framework to create the model, starting from an empty Code First model. First, we use the following code:

public class InvoicesModel : DbContext
{
    public InvoicesModel() : base("name=InvoicesModel")
    {
    }

    public virtual DbSet<Invoice> Invoices { get; set; }
}

As we can see, data will be accessed via an SQL database that is referenced by a connectionString in the web.config file:

<add name="InvoicesModel" connectionString="data source=(LocalDb)\MSSQLLocalDB;initial catalog=Tenant.Web.Models.InvoicesModel;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework" providerName="System.Data.SqlClient" />

This model class is just for demo purposes:

public class Invoice
{
    public int InvoiceId { get; set; }
    public int Number { get; set; }
    public DateTime Date { get; set; }
    public string Customer { get; set; }
    public decimal Amount { get; set; }
    public DateTime DueDate { get; set; }
}

After this, we compile the project to check that we have not made any mistakes. We can now scaffold this model into an MVC controller so that we have a simple but working app skeleton.

Creating the portal app

We now need to create the portal app, starting from the MVC default template. Its registration workflow is useful as the basis for our tenant registration. In particular, we utilize user registration as the tenant registration: the registration form acquires the tenant name and triggers tenant deployment. We need to make two changes to the UI. First, in the RegisterViewModel defined under the Models folder, we add a TenantName property to the AccountViewModels.cs file:

public class RegisterViewModel
{
    [Required]
    [Display(Name = "Tenant Name")]
    public string TenantName { get; set; }

    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    // other code omitted
}

In the Register.cshtml view page under the Views\Account folder, we add an input box:

@using (Html.BeginForm("Register", "Account", FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
    @Html.AntiForgeryToken()
    <h4>Create a new account.</h4>
    <hr />
    @Html.ValidationSummary("", new { @class = "text-danger" })
    <div class="form-group">
        @Html.LabelFor(m => m.TenantName, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.TextBoxFor(m => m.TenantName, new { @class = "form-control" })
        </div>
    </div>
    <div class="form-group">
        @Html.LabelFor(m => m.Email, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
        </div>
    </div>
    <!-- other code omitted -->
}

The portal application is also a good place for the tenant owner to manage their own tenant, configuring it or handling subscription-related tasks with the supplier company.
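To illustrate the branding step mentioned earlier, reading the TenantName app setting in the main layout could look like the following minimal sketch; the surrounding markup is illustrative, not the book's exact layout file:

@* Views/Shared/_Layout.cshtml *@
@using System.Configuration

<a class="navbar-brand" href="/">
    @ConfigurationManager.AppSettings["TenantName"]
</a>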
Deploying the portal application

Before tenant deployment, we need to deploy the portal itself. MyAccountant is a complex solution made up of multiple Azure services, which need to be deployed together. First, we need to create an Azure Resource Group to collect all the services.

As we already discussed earlier, all data from different tenants, including the portal itself, needs to be contained in distinct Azure SQL databases. Every user will have their own DB as a personal service, which they don't use frequently. It can be a waste of money to assign a reserved quantity of Database Transaction Units (DTUs) to a single database, so we can instead invest in a pool of DTUs to be shared among all SQL database instances. We begin by creating an SQL Server service from the portal.

We need to create a pool of DTUs, which is shared among the databases, and configure the pricing tier, which defines the maximum resource allocation per DB.

The first database that we need to deploy manually is the portal database, where users will register as tenants. From the MyAccountantPool blade, we can create a new database that will be immediately associated with the pool. From the database blade, we read the connection string, and we use it to configure the portal app in web.config:

<connectionStrings>
  <add name="DefaultConnection" connectionString="Server=tcp:{portal_db}.database.windows.net,1433;Initial Catalog=Portal;Persist Security Info=False;User ID={your_username};Password={your_password};Pooling=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;" providerName="System.Data.SqlClient" />
</connectionStrings>

We need to create a shared resource for the Web as well. In this case, we create an App Service Plan, where we'll host the portal and tenant apps. The initial size is not a problem, because we can decide to scale up or scale out the solution at any time (scaling out only applies when the application is able to scale out; we don't handle this scenario here). Then, we create the portal web app, which will be associated with the service plan that we just created.

The portal can be deployed from Visual Studio to the Azure subscription by right-clicking on the project root in Solution Explorer and selecting Microsoft Azure Web App from Publish. After deployment, the portal is up and running.

Deploy the tenant app

After tenant registration from the portal, we need to deploy the tenant itself, which is made up of the following:

The app itself, which is the artifact that has to be deployed
A web app that runs the app, hosted on the already defined App Service Plan
The Azure SQL database that contains data, inside the elastic pool
The connection string that connects the database to the web app, in the web.config file
It's a complex activity, because it involves many different resources and different kinds of tasks, from deployment to configuration. For this purpose, we have the Azure Resource Group project in Visual Studio, where we can configure web app deployment and configuration via Azure Resource Manager templates. This project will be called Tenant.Deploy, and we choose a blank template to do this. In the azuredeploy.json file, we can type a template such as https://github.com/marcoparenzan/CreateMultitenantAppsInAzure/blob/master/Tenant.Deploy/Templates/azuredeploy.json.

This template is quite complex. Remember that in the SQL connection string, the username and password should be provided inside the template. We need to reference the Tenant.Web project from the deployment project, because we need to deploy the tenant artifacts (the project bits). To support deployment, we need to create an Azure Storage account back in the Azure portal.

To understand how it works, we can manually run a deployment directly from Visual Studio by right-clicking on the deployment project in Solution Explorer and selecting Deploy. When we deploy a "sample" tenant, the first dialog will appear. You can connect to the Azure subscription, select an existing resource group or create a new one, and choose the template that describes the deployment composition. The template requires the following parameters from the Edit Parameters window:

The tenant name
The artifact location and SAS token, which are added automatically once the Azure Storage account has been selected in the previous dialog

Now, via the included Deploy-AzureResourceGroup.ps1 PowerShell file, the Azure resources are deployed. The artifact is copied with the AzCopy.exe command to the Azure storage in the Tenant.Web container as a package.zip file, and the Resource Manager starts allocating resources. We can see that the tenant is deployed in the following screenshot:

Automating the tenant deployment process

Now, in order to complete our solution, we need to invoke this deployment process from the portal application during the registration process, in the ASP.NET MVC controllers. For the purpose of this article, we will just invoke the execution without defining a production-quality deployment process. We can use the following checklist before proceeding:

We already have an Azure Resource Manager template that deploys the tenant app customized for the user
Deployment is made with a PowerShell script in the Visual Studio deployment project
A new registered user for our application does not have an Azure account; we, as the service publisher, need to offer a dedicated Azure account with our credentials to deploy the new tenants

Azure offers many different ways to interact with an Azure subscription:

The classic portal (https://manage.windowsazure.com)
The new portal (https://portal.azure.com)
The resource portal (https://resources.azure.com)
The Azure REST API (https://msdn.microsoft.com/en-us/library/azure/mt420159.aspx)
The Azure .NET SDK (https://github.com/Azure/azure-sdk-for-net) and other platforms
The Azure CLI open source CLI (https://github.com/Azure/azure-xplat-cli)
PowerShell (https://github.com/Azure/azure-powershell)

For our needs, this means integrating Azure management into our application. We can make these considerations:

We need to reuse the same ARM template that we defined
We can reuse our PowerShell experience, but we can also use our experience as .NET, REST, or other platform developers
Authentication is the real discriminator in our solution: the user is not an Azure subscription user, and we don't want to make a constraint on this

Interacting with the Azure REST API, which is the API on which every other solution depends, requires that all invocations be authenticated against the Azure Active Directory of the subscription tenant. We already mentioned that the user is not a subscription-authenticated user. Therefore, we need unattended authentication to our Azure subscription, using a dedicated user for this purpose, encapsulated into a component that is executed by the ASP.NET MVC application in a secure manner to perform the tenant deployment. The only environment that offers an out-of-the-box solution for our needs (so that we need to write less code) is the Azure Automation service.

Before proceeding, we create a dedicated user for this purpose; this way, for security reasons, we can disable that specific user at any time. You should take note of two things:

Never use the credentials that you used to register your Azure subscription in a production environment!
For the automation implementation, you need an Azure AD tenant user, so you cannot use Microsoft accounts (Live or Hotmail).
To create the user, we need to go to the classic portal, as Azure Active Directory has no equivalent management UI in the new portal. We need to select the tenant directory, that is, the one that is visible in the upper right corner of the new portal. From the classic portal, go to Azure Active Directory and select the tenant. Click on Add User and type in a new username.

Then, go to Administrator Management in the Settings tab of the portal, because we need to define the user as a co-administrator in the subscription that we need to use for deployment. Now, with the temporary password, we need to log in manually to https://portal.azure.com/ (open the browser in private mode) with these credentials, because we need to change the password, as it is generated in an "expired" state. We are now ready to proceed.

Back in the new portal, we select a new Azure Automation account. The first thing that we need to do inside the account is create a credential asset to store the newly created AAD credentials and use them inside PowerShell scripts to log on to Azure.

We can now create a runbook, which is an automation task that can be expressed in different ways:

Graphical
PowerShell

We choose the second one. As we can edit it directly from the portal, we can write a PowerShell script for our purposes. This is an adaptation of the one that we used in the standard way in the deployment project inside Visual Studio. The difference is that it is runnable inside a runbook in Azure, and it uses the artifacts that are already in the Azure Storage account that we created before. Before proceeding, we need two IDs from our subscription:

The subscription ID
The tenant ID

These two parameters can be discovered with PowerShell, because we can perform Login-AzureRmAccount. Run it from the command line and copy them from the output. The following code is not production quality (it needs some optimization), but it works for demo purposes:
param (
    $WebhookData,
    $TenantName
)

# If the runbook was called from a Webhook, WebhookData will not be null.
if ($WebhookData -ne $null)
{
    $Body = ConvertFrom-Json -InputObject $WebhookData.RequestBody
    $TenantName = $Body.TenantName
}

# Authenticate to Azure resources by retrieving the credential asset
$Credentials = Get-AutomationPSCredential -Name "myaccountant"

$subscriptionId = '{your subscriptionId}'
$tenantId = '{your tenantId}'
Login-AzureRmAccount -Credential $Credentials -SubscriptionId $subscriptionId -TenantId $tenantId

$artifactsLocation = 'https://myaccountant.blob.core.windows.net/myaccountant-stageartifacts'
$ResourceGroupName = 'MyAccountant'

# Generate a temporary StorageSasToken (in SecureString form) to give the ARM
# template access to the template artifacts
$StorageAccountName = 'myaccountant'
$StorageContainer = 'myaccountant-stageartifacts'
$StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName).Key1
$StorageAccountContext = (Get-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName).Context
$StorageSasToken = New-AzureStorageContainerSASToken -Container $StorageContainer -Context $StorageAccountContext -Permission r -ExpiryTime (Get-Date).AddHours(4)
$SecureStorageSasToken = ConvertTo-SecureString $StorageSasToken -AsPlainText -Force

# Prepare parameters for the template
$ParameterObject = New-Object -TypeName Hashtable
$ParameterObject['TenantName'] = $TenantName
$ParameterObject['_artifactsLocation'] = $artifactsLocation
$ParameterObject['_artifactsLocationSasToken'] = $SecureStorageSasToken

$deploymentName = 'MyAccountant' + '-' + $TenantName + '-' + ((Get-Date).ToUniversalTime()).ToString('MMdd-HHmm')
$templateLocation = $artifactsLocation + '/Tenant.Deploy/Templates/azuredeploy.json' + $StorageSasToken

# Execute the deployment
New-AzureRmResourceGroupDeployment -Name $deploymentName `
    -ResourceGroupName $ResourceGroupName `
    -TemplateFile $templateLocation `
    @ParameterObject `
    -Force -Verbose

The script can be executed in the Test pane, but for production purposes, it needs to be deployed with the Publish button. Now, we need to execute this runbook from the ASP.NET MVC portal that we already created. We can use Webhooks for this purpose. Webhooks are user-defined HTTP callbacks that are usually triggered by some event; in our case, this is new tenant registration. As they use HTTP, they can be integrated into web services without adding new infrastructure. Runbooks can be directly exposed as Webhooks, which provide an HTTP endpoint natively, without the need to provide one ourselves. We need to remember some things:

Webhooks are public, with a shared secret in the URL, so they are "secure" only if we don't share the URL
As a shared secret, the URL expires, so we need to handle Webhook updates in the service lifecycle
If more users are needed, more Webhooks are needed, as the URL is the only way to recognize who invoked it (again, don't share Webhooks)
Copy the URL at this stage, as it is not possible to recover it later; otherwise, the Webhook needs to be deleted and a new one generated

Write it directly in the portal's web.config app settings:

<add key="DeployNewTenantWebHook" value="https://s2events.azure-automation.net/webhooks?token={your_token}"/>

We can set some default parameters if needed, and then we can create it.
To invoke the Webhook, we use System.Net.Http.HttpClient to create a POST request, placing a JSON object containing the TenantName in the body:

var requestBody = new { TenantName = model.TenantName };
var httpClient = new HttpClient();
var responseMessage = await httpClient.PostAsync(
    ConfigurationManager.AppSettings["DeployNewTenantWebHook"],
    new StringContent(JsonConvert.SerializeObject(requestBody))
);

This code is used to customize the registration process in AccountController:

public async Task<ActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
        var result = await UserManager.CreateAsync(user, model.Password);
        if (result.Succeeded)
        {
            await SignInManager.SignInAsync(user, isPersistent: false, rememberBrowser: false);
            // handle webhook invocation here
            return RedirectToAction("Index", "Home");
        }
        AddErrors(result);
    }
    // If we got this far, something failed; redisplay the form
    return View(model);
}

The responseMessage is again a JSON object, which contains a JobId that we can use to programmatically access the executed job.

Conclusion

There are a lot of things that can be done with the set of topics that we covered in this article. These are a few of them:

We can write better .NET code for multitenant apps
We can authenticate users with the Azure Active Directory service
We can leverage deployment tasks with Azure Service Bus messaging
We can create more interaction and feedback during tenant deployment
We can learn how to customize ARM templates to deploy other Azure services, such as DocumentDB, Azure Storage, and Azure Search
We can handle more PowerShell for Azure management tasks

Summary

Azure can change the way we write our solutions, giving us a set of new patterns and powerful services to develop with. In particular, we learned how to think about multitenant apps to ensure confidentiality for the users. We looked at deploying ASP.NET web apps in App Services and providing computing resources with App Service Plans. We looked at how to deploy SQL databases in Azure SQL Database and provide computing resources with an elastic pool. We declared a deployment script with Azure Resource Manager and an Azure Resource Template in a Visual Studio cloud deployment project, and automated the ARM PowerShell script execution with Azure Automation and runbooks. The items listed in the conclusion will be content for future articles. Code can be found on GitHub at https://github.com/marcoparenzan/CreateMultitenantAppsInAzure. Have fun!

Resources for Article:

Further resources on this subject: Introduction to Microsoft Azure Cloud Services [article] Microsoft Azure – Developing Web API for Mobile Apps [article] Security in Microsoft Azure [article]