Configuring Service Level Agreements (SLAs)

Packt
04 Jan 2017
10 min read
In this article by Steve Buchanan, Steve Beaumont, Anders Asp, Dieter Gasser, and Andreas Baumgarten, the authors of the book Microsoft System Center 2016 Service Manager Cookbook - Second Edition, we provide recipes to tailor SCSM to your environment. Specifically, we cover setting up the SLA functions of Service Manager with the following tasks:

- Creating priority queues
- Configuring business hours and non-working days
- Creating SLA metrics
- Creating SLOs

Introduction

SLAs, in ITIL and IT Service Management terms, allow two parties to set out an agreement on how a specific service will be delivered by one to the other. We will define how Service Manager handles the tracking of Incidents and Service Requests against defined SLAs, how to view the progress of work items against these SLAs, and how to configure SCSM 2016 to alert users when work items are nearing, or have breached, these SLAs. As with most areas of configuration within Service Manager 2016, the organization must define its processes before implementing the corresponding Service Manager feature.

Creating priority queues

This recipe defines a number of queues based on the priority you have assigned to work items such as incidents and service requests. These queues will then be mapped to Service Level Objectives (SLOs).

How to do it...

The following steps will guide you through the process of creating priority queues:

1. Navigate to the Queues folder in the Library section of the Service Manager 2016 console.
2. Choose Create Queue from the taskbar on the right-hand side of the console.
3. Review the information on the Before You Begin screen of the Create Queue Wizard and click on Next.
4. Enter a queue name that describes the queue. In this example, we will name it Incident SLA Queue – Priority 1 to describe a queue holding Incidents with a priority of 1. Then click on the ... selection button next to the Work item type textbox.
5. Use the filter box to scope the choices down to Incident work items, choose Incident, and then click OK.
6. Choose your custom Incident Management Pack from the selection list and click on Next.
7. Use the Search box under Available properties to narrow the list down to Priority. Tick the box next to Priority and then click on Add.
8. Change the criteria for [Trouble Ticket] Priority, using the drop-down list, to equals and supply the Priority value; in this example, we will give a value of 1. Click on Next.
9. Review the Summary screen of the wizard and then click on Create.
10. You have now successfully created a queue. Click on Close to complete the wizard.

Repeat this process for each priority you want to link an SLO to.

How it works...

Creating a queue allows Service Manager to group similar work items that meet specified criteria, such as all Incidents with a priority of 1. Service Manager can use these queues to scope actions. This grouping of work items gives us a target to apply an SLO to.

There's more...

This recipe requires you to repeat the steps for each priority you would like to apply an SLO to. Repeat each step, but change key information such as the name, priority value, and description to reflect the priority you are creating the queue for, for example an Incident Priority 3 queue.

Service Request queues

Queues can be created to define any type of grouping of supported process work items in scope for SLA management.
For example, you may wish to repeat this recipe for the Service Request process class: repeat the recipe, but select Service Request as the work item type in the wizard, and then choose the defining criteria for the queue related to the Service Request class. You can also use this recipe but, instead of defining the criteria for the queue based on priority, choose the category of the incident, say, Hardware.

Further queue types

If the incident class was extended to capture whether the affected user is a VIP, you would be able to define a VIP queue and give the work items matching that criteria a different resolution time SLA.

Configuring business hours and non-working days

This recipe defines the hours during which your business offers IT services, which allows resolution and response times to be calculated against SLAs.

Getting ready

For this recipe, you must have already assessed the business hours that your IT services will offer to your organization, and you must have custom management packs in place to store your queue customizations.

How to do it...

The following steps will guide you through the process of configuring business hours and non-working days within Service Manager:

1. Under Administration, expand Service Level Management and then click on Calendar.
2. Under Tasks on the right-hand side of the screen, click on Create Calendar.
3. Give the calendar a meaningful name; in this example, we have used Core Business Hours.
4. Choose the relevant time zone.
5. Place a check mark against all the days on which you offer services.
6. Under each working day, enter a start time and an end time in the 00:00:00 format; for example, 8 a.m. should be entered as 08:00:00.
7. You can also specify the non-working days using the Holidays section; under the Holidays pane, click on Add.
8. In the Add Holiday window that opens, enter a name for the holiday, for example, New Year's Day. Either manually enter the date in the format relevant to your regional settings (for example, for United Kingdom regional settings, use DD/MM/YYYY) or use the visual calendar by clicking on the button to the right of the date entry textbox.
9. Click on OK for each holiday. Once all holidays have been added, click on OK to close the Create/Edit Calendar window.

How it works...

When you specify the business hours and non-working days, Service Manager takes these into consideration when calculating SLA metrics, such as resolution time and first response time, for all work items that are affected by the calendar.

There's more...

A calendar on its own has no impact on service levels. The calendar is one part of the SLO configuration.

Adding holidays in bulk

Adding holidays manually can be a very time-consuming process. Our co-author Anders Asp has automated the process using PowerShell to import a list of holidays. You can download the script and read about the process on the TechNet Gallery at http://gallery.technet.microsoft.com/Generate-SCSMHolidaysCSVps1-a32722ce.
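To see why the calendar matters, consider how elapsed SLA time differs from wall-clock time. The following is an illustrative Python sketch, not Service Manager code: the calendar values (a 09:00-17:00 working day and a single holiday) are hypothetical, but the counting logic mirrors what Service Manager does when it excludes non-working hours from a resolution time metric.

    from datetime import datetime, timedelta

    WORK_START, WORK_END = 9, 17              # hypothetical 09:00-17:00 working day
    HOLIDAYS = {datetime(2017, 1, 2).date()}  # e.g. New Year's Day (observed)

    def business_minutes(start, end):
        """Count the minutes between start and end that fall inside business hours."""
        minutes, current = 0, start
        while current < end:
            on_working_day = current.weekday() < 5 and current.date() not in HOLIDAYS
            if on_working_day and WORK_START <= current.hour < WORK_END:
                minutes += 1
            current += timedelta(minutes=1)
        return minutes

    # An incident created Friday 16:00 and resolved Monday 10:00 has only
    # 2 business hours on the clock, not 66 wall-clock hours.
    created = datetime(2017, 1, 6, 16, 0)
    resolved = datetime(2017, 1, 9, 10, 0)
    print(business_minutes(created, resolved) / 60.0)  # -> 2.0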
Creating SLA metrics

Using SLA metrics in Service Manager, we can define what is measured within an SLA. In this recipe, we will show how to create a metric that measures the resolution time of an Incident.

How to do it...

The following steps will guide you through the process of creating SLA metrics in Service Manager:

1. Under Administration, expand Service Level Management and then click on Metric.
2. Under Tasks on the right-hand side of the screen, click on Create Metric.
3. Supply a title for the metric (in this example, we will use Resolution Time) and a description.
4. Click on the Browse... button next to the Class field and use the filter box in the Select a Class window that opens to select Incident. Click on OK.
5. Use the drop-down list for Start Date and choose Created date. Use the drop-down list for End Date and choose Resolved date.
6. Click on OK.

How it works...

Creating a metric defines what you want Service Manager to track within your SLA definition, so that, when an item falls outside the parameters, you can start a notification and escalation process.

Creating SLOs

This recipe will show you how to create an SLO, which is used within Service Manager to create the relationships between the queues, service levels, calendars, and metrics. The SLO defines the timings that trigger warnings or breaches of service levels.

Getting ready

To create an SLO, you will need to have already created the following:

- Queues that correspond to each service level
- Metrics to measure differences in the start and end times of an incident
- A calendar to define business working hours

You will also need custom management packs in place to store your SLO customizations.

How to do it...

The following steps will guide you through the process of creating SLOs within Service Manager:

1. Under Administration, expand Service Level Management and then click on Service Level Objectives.
2. Under Tasks on the right-hand side of the screen, click on Create Service Level Objective.
3. Review the Before You Begin information and then click on Next.
4. Provide a title and description relevant to the SLO you are creating. For this recipe, we will create an SLO for a Priority 1 Incident, so we will set this SLO's Title to Incident Resolution Time SLO - Priority 1 with a meaningful description.
5. Click on the Browse... button next to the Class textbox and use the filter box in the Select a Class window that opens to select Incident. Click on OK.
6. Use the drop-down list under the Management pack heading to select the custom management pack in which to store your SLA-related customizations.
7. If you are planning to use this SLO immediately, leave the Enabled checkbox ticked. Only untick this if you plan to create or stage SLOs before setting up SLA functions. Click on Next.
8. In this recipe, use the queue named Incident SLA Queue – Priority 1. Click on Next.
9. On the Service Level Criteria screen, choose the Calendar that you want to associate this SLO with.
10. Under Metric, use the drop-down list to select the time metric you wish to measure against. Following along with the examples, select the Resolution Time metric.
11. Define the target time period before a breach occurs for this metric by entering a value under Target. For our Priority 1 resolution, enter 4 Hours to define the time period before an incident changes to a breached SLA status.
12. Define the target time period before a warning occurs for this metric by entering a value under Warning threshold. For our Priority 1 resolution, enter 2 Hours to define the time period before an incident changes to a warning SLA status. Click on Next.
13. Review the information on the Summary page and, when ready, click on Create. Once the SLO has been created and a success message is displayed, click on Close.
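Before looking at how this works, here is a concrete illustration of the 2-hour warning and 4-hour breach thresholds you just entered. This is a hypothetical Python sketch of the status arithmetic, not anything Service Manager exposes; the threshold values come from the recipe above, and the elapsed time would be measured in business hours against the calendar.

    def sla_status(elapsed_hours, warning_at=2.0, breach_at=4.0):
        """Map elapsed business time to the Priority 1 SLO status used above."""
        if elapsed_hours >= breach_at:
            return "Breached"
        if elapsed_hours >= warning_at:
            return "Warning"
        return "Active"

    print(sla_status(1.5))  # Active
    print(sla_status(2.5))  # Warning
    print(sla_status(4.5))  # Breached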
How it works...

When you configure an SLO, you are pulling together three components: queues, calendars, and metrics. They play the following roles:

- Queues: the work items this SLO will be applied to
- Calendar: the days and hours during which services are offered
- Metric: the items that are measured

Summary

In this article, we saw how to create priority queues from the Service Manager console. Priority queues are mapped to SLOs, which we also learned how to create. To make our setup more precise, we configured business hours and non-working days, and we looked at the useful, time-saving option of adding holidays in bulk. SLA metrics define the criteria that are considered when measuring SLA performance and are an important tool for analyzing how well (or not) a business is meeting its SLAs; to this end, we also looked at how to create SLA metrics.

Microsoft Cognitive Services

Packt
04 Jan 2017
16 min read
In this article by Leif Henning Larsen, author of the book Learning Microsoft Cognitive Services, we will look into what Microsoft Cognitive Services offers. You will then learn how to utilize one of the APIs by recognizing faces in images. Microsoft Cognitive Services gives developers the possibility of adding AI-like capabilities to their applications. Using a few lines of code, we can take advantage of powerful algorithms that would usually take a lot of time, effort, and hardware to implement yourself.

Overview of Microsoft Cognitive Services

Using Cognitive Services means you have 21 different APIs at hand. These are in turn separated into five top-level domains according to what they do: vision, speech, language, knowledge, and search. Let's look at each of them in the following sections.

Vision

APIs under the vision flag allow your apps to understand images and video content. They allow you to retrieve information about faces, feelings, and other visual content. You can stabilize videos and recognize celebrities. You can read text in images and generate thumbnails from videos and images. There are four APIs contained in the vision area, which we will look at now.

Computer Vision

Using the Computer Vision API, you can retrieve actionable information from images. This means you can identify content (such as image format, image size, colors, faces, and more). You can detect whether an image is adult/racy. This API can recognize text in images and extract it into machine-readable words. It can detect celebrities from a variety of areas. Lastly, it can generate storage-efficient thumbnails with smart cropping functionality.

Emotion

The Emotion API allows you to recognize emotions, both in images and videos. This can allow for more personalized experiences in applications. The emotions that are detected are cross-cultural: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.

Face

We have already seen the very basic example of what the Face API can do. The rest of the API revolves around the same theme: detecting, identifying, organizing, and tagging faces in photos. Apart from face detection, you can see how likely it is that two faces belong to the same person. You can identify faces and also find similar-looking faces.

Video

The Video API is about analyzing, editing, and processing videos in your app. If you have a video that is shaky, the API allows you to stabilize it. You can detect and track faces in videos. If a video contains a stationary background, you can detect motion. The API lets you generate thumbnail summaries for videos, which allows users to see previews or snapshots quickly.

Speech

Adding one of the Speech APIs allows your application to hear and speak to your users. The APIs can filter noise and identify speakers. They can drive further actions in your application based on the recognized intent. Speech contains three APIs, which we will discuss now.

Bing Speech

Adding the Bing Speech API to your application allows you to convert speech to text and vice versa. You can convert spoken audio to text either by utilizing a microphone or other sources in real time, or by converting audio from files. The API also offers speech intent recognition, which is trained by the Language Understanding Intelligent Service to understand the intent.

Speaker Recognition

The Speaker Recognition API gives your application the ability to know who is talking.
Using this API, you can verify that someone speaking is who they claim to be. You can also determine who an unknown speaker is, based on a group of selected speakers.

Custom Recognition

To improve speech recognition, you can use the Custom Recognition API. This allows you to fine-tune speech recognition operations for anyone, anywhere. Using this API, the speech recognition model can be tailored to the vocabulary and speaking style of the user. In addition, the model can be customized to match the expected environment of the application.

Language

APIs related to language allow your application to process natural language and learn how to recognize what users want. You can add textual and linguistic analysis to your application, as well as natural language understanding. The following five APIs can be found in the language area.

Bing Spell Check

The Bing Spell Check API allows you to add advanced spell checking to your application.

Language Understanding Intelligent Service (LUIS)

Language Understanding Intelligent Service, or LUIS, is an API that can help your application understand commands from your users. Using this API, you can create language models that understand intents. Using models from Bing and Cortana, you can make these models recognize common requests and entities (such as places, times, and numbers). You can add conversational intelligence to your applications.

Linguistic Analysis

The Linguistic Analysis API lets you parse complex text to explore its structure. Using this API, you can find nouns, verbs, and more in text, which allows your application to understand who is doing what to whom.

Text Analysis

The Text Analysis API will help you extract information from text. You can find the sentiment of a text (whether it is positive or negative). You will also be able to detect the language, topics, and key phrases used throughout the text.

Web Language Model

Using the Web Language Model (WebLM) API, you are able to leverage the power of language models trained on web-scale data. You can use this API to predict which words or sequences follow a given sequence or word.

Knowledge

When talking about the Knowledge APIs, we are talking about APIs that allow you to tap into rich knowledge. This may be knowledge from the Web, from academia, or from your own data. Using these APIs, you will be able to explore different nuances of knowledge. The following four APIs are contained in the knowledge area.

Academic

Using the Academic API, you can explore relationships among academic papers, journals, and authors. This API allows you to interpret natural language user query strings, which allows your application to anticipate what the user is typing. It will evaluate the given expression and return academic knowledge entities.

Entity Linking

Entity Linking is the API you would use to extend knowledge of people, places, and events based on context. As you may know, a single word may be used differently depending on the context. Using this API allows you to recognize and identify each separate entity within a paragraph based on the context.

Knowledge Exploration

The Knowledge Exploration API lets you add interactive search over structured data to your projects. It interprets natural language queries and offers auto-completions to minimize user effort. Based on the query expression received, it will retrieve detailed information about matching objects.
Recommendations

The Recommendations API allows you to provide personalized product recommendations for your customers. You can use this API to add frequently-bought-together functionality to your application. Another feature you can add is item-to-item recommendations, which allow customers to see what other customers who like this item also like. This API will also allow you to add recommendations based on the prior activity of the customer.

Search

The Search APIs give you the ability to make your applications more intelligent with the power of Bing. Using these APIs, you can use a single call to access data from billions of web pages, images, videos, and news articles. The following five APIs are in the search domain.

Bing Web Search

With Bing Web Search, you can search for details in billions of web documents indexed by Bing. All the results can be arranged and ordered according to the layout you specify, and the results are customized to the location of the end user.

Bing Image Search

Using the Bing Image Search API, you can add advanced image and metadata search to your application. Results include URLs to images, thumbnails, and metadata. You will also be able to get machine-generated captions, similar images, and more. This API allows you to filter the results based on image type, layout, freshness (how new the image is), and license.

Bing Video Search

Bing Video Search allows you to search for videos and returns rich results. The results contain metadata from the videos, static or motion-based thumbnails, and the video itself. You can add filters to the results based on freshness, video length, resolution, and price.

Bing News Search

If you add Bing News Search to your application, you can search for news articles. Results can include authoritative images, related news and categories, information on the provider, URLs, and more. To be more specific, you can filter news based on topics.

Bing Autosuggest

The Bing Autosuggest API is a small but powerful one. It will allow your users to search faster with search suggestions, allowing you to connect powerful search to your apps.
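All of these services are exposed as REST endpoints, so they can be called from any language, not just C#. As a hedged illustration before the C# walkthrough in the next section, here is a Python sketch of the same face detection call we will make there. The region in the URL, the subscription key, and the image filename are placeholders you would replace with your own values.

    import requests

    # Placeholder values: substitute your own subscription key and region
    SUBSCRIPTION_KEY = "YOUR_API_KEY_HERE"
    DETECT_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

    def detect_faces(image_path):
        """Send a local image to the Face API and return the detected faces as JSON."""
        headers = {
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        }
        params = {"returnFaceId": "true", "returnFaceAttributes": "age"}
        with open(image_path, "rb") as image_file:
            response = requests.post(DETECT_URL, headers=headers,
                                     params=params, data=image_file)
        response.raise_for_status()
        return response.json()  # list of faces with faceRectangle and faceAttributes

    faces = detect_faces("portrait.jpg")
    print("%d face(s) detected" % len(faces))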
Detecting faces with the Face API

We have seen what the different APIs can do. Now we will test the Face API. We will not be doing a whole lot, but we will see how simple it is to detect faces in images. The steps we need to cover are as follows:

1. Register for a free Face API preview subscription.
2. Add the necessary NuGet packages to our project.
3. Add some UI to the test application.
4. Detect faces on command.

Head over to https://www.microsoft.com/cognitive-services/en-us/face-api to start the process of registering for a free subscription to the Face API. By clicking on the yellow button stating Get started for free, you will be taken to a login page. Log in with your Microsoft account or, if you do not have one, register for one. Once logged in, you will need to verify that Face API Preview has been selected in the list and accept the terms and conditions. The resulting page shows two API keys; you will need one of them later, when we are accessing the API.

In Visual Studio, create a new WPF application. Following the instructions at https://www.codeproject.com/articles/100175/model-view-viewmodel-mvvm-explained, create a base class that implements the INotifyPropertyChanged interface and a class implementing the ICommand interface. The first should be inherited by the ViewModel, the MainViewModel.cs file, while the latter should be used when creating properties to handle button commands.

The Face API has a NuGet package, so we need to add that to our project. Head over to the NuGet Package Manager for the project we created earlier. In the Browse tab, search for the Microsoft.ProjectOxford.Face package and install it from Microsoft. As you will notice, another package will also be installed. This is the Newtonsoft.Json package, which is required by the Face API.

The next step is to add some UI to our application. We will be adding this in the MainView.xaml file. First, we add a grid and define some rows for it:

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="20" />
            <RowDefinition Height="30" />
        </Grid.RowDefinitions>

Three rows are defined. The first is a row where we will have an image. The second is a line for a status message, and the last is where we will place some buttons. Next, we add our image element:

    <Image x:Name="FaceImage" Stretch="Uniform" Source="{Binding ImageSource}" Grid.Row="0" />

We have given it a unique name. By setting the Stretch parameter to Uniform, we ensure that the image keeps its aspect ratio. Further on, we place this element in the first row. Last, we bind the image source to a BitmapImage property in the ViewModel, which we will look at in a bit.

The next row will contain a text block with some status text. The Text property will be bound to a string property in the ViewModel:

    <TextBlock x:Name="StatusTextBlock" Text="{Binding StatusText}" Grid.Row="1" />

The last row will contain one button to browse for an image and one button to detect faces. The Command properties of both buttons will be bound to DelegateCommand properties in the ViewModel:

    <Button x:Name="BrowseButton" Content="Browse" Height="20" Width="140"
            HorizontalAlignment="Left" Command="{Binding BrowseButtonCommand}"
            Margin="5, 0, 0, 5" Grid.Row="2" />
    <Button x:Name="DetectFaceButton" Content="Detect face" Height="20" Width="140"
            HorizontalAlignment="Right" Command="{Binding DetectFaceCommand}"
            Margin="0, 0, 5, 5" Grid.Row="2"/>

With the View in place, make sure that the code compiles, and run it.

The last part is to create the binding properties in our ViewModel and make the buttons execute something. Open the MainViewModel.cs file. First, we define two variables:

    private string _filePath;
    private IFaceServiceClient _faceServiceClient;

The string variable will hold the path to our image, while the IFaceServiceClient variable is the client interface to the Face API. Next, we define two properties:

    private BitmapImage _imageSource;
    public BitmapImage ImageSource
    {
        get { return _imageSource; }
        set
        {
            _imageSource = value;
            RaisePropertyChangedEvent("ImageSource");
        }
    }

    private string _statusText;
    public string StatusText
    {
        get { return _statusText; }
        set
        {
            _statusText = value;
            RaisePropertyChangedEvent("StatusText");
        }
    }

What we have here is a BitmapImage property, mapped to the Image element in the view, and a string property for the status text, mapped to the text block element in the view. As you may also notice, when either of the properties is set, we call the RaisePropertyChangedEvent method. This ensures that the UI is updated when either of the properties gets a new value.
Next, we define our two DelegateCommand objects and do some initialization in the constructor:

    public ICommand BrowseButtonCommand { get; private set; }
    public ICommand DetectFaceCommand { get; private set; }

    public MainViewModel()
    {
        StatusText = "Status: Waiting for image...";
        _faceServiceClient = new FaceServiceClient("YOUR_API_KEY_HERE");
        BrowseButtonCommand = new DelegateCommand(Browse);
        DetectFaceCommand = new DelegateCommand(DetectFace, CanDetectFace);
    }

In our constructor, we start off by setting the status text. Next, we create the Face API client object, which needs to be created with the API key we got earlier. At last, we create the DelegateCommand objects for our command properties. Note how the browse command does not specify a predicate; this means it will always be possible to click on the corresponding button. To make this compile, we need to create the functions specified in the DelegateCommand constructors: the Browse, DetectFace, and CanDetectFace functions.

    private void Browse(object obj)
    {
        var openDialog = new Microsoft.Win32.OpenFileDialog();
        openDialog.Filter = "JPEG Image(*.jpg)|*.jpg";
        bool? result = openDialog.ShowDialog();

        if (!(bool)result) return;

We start the Browse function by creating an OpenFileDialog object. This dialog is assigned a filter for JPEG images, and in turn it is opened. When the dialog is closed, we check the result. If the dialog was cancelled, we simply stop further execution.

        _filePath = openDialog.FileName;
        Uri fileUri = new Uri(_filePath);

With the dialog closed, we grab the filename of the selected file and create a new URI from it.

        BitmapImage image = new BitmapImage(fileUri);
        image.CacheOption = BitmapCacheOption.None;
        image.UriSource = fileUri;

With the newly created URI, we create a new BitmapImage. We specify it to use no cache, and we set the URI source to the URI we created.

        ImageSource = image;
        StatusText = "Status: Image loaded...";
    }

The last step we take is to assign the bitmap image to our BitmapImage property, so the image is shown in the UI. We also update the status text to let the user know the image has been loaded. Before we move on, it is time to make sure that the code compiles and that you are able to load an image into the View.

    private bool CanDetectFace(object obj)
    {
        return !string.IsNullOrEmpty(ImageSource?.UriSource.ToString());
    }

The CanDetectFace function checks whether or not the detect-faces button should be enabled. In this case, it checks whether our image property actually has a URI. If it does, by extension that means we have an image, so we should be able to detect faces.

    private async void DetectFace(object obj)
    {
        FaceRectangle[] faceRects = await UploadAndDetectFacesAsync();

        string textToSpeak = "No faces detected";

        if (faceRects.Length == 1)
            textToSpeak = "1 face detected";
        else if (faceRects.Length > 1)
            textToSpeak = $"{faceRects.Length} faces detected";

        Debug.WriteLine(textToSpeak);
    }

Our DetectFace method calls an async method to upload the image and detect faces. The return value contains an array of FaceRectangle objects, holding the rectangle area for all face positions in the given image. We will look into the function we call in a bit.
After the call has finished executing, we print a line with the number of faces to the debug console window.

    private async Task<FaceRectangle[]> UploadAndDetectFacesAsync()
    {
        StatusText = "Status: Detecting faces...";

        try
        {
            using (Stream imageFileStream = File.OpenRead(_filePath))
            {

In the UploadAndDetectFacesAsync function, we create a Stream object from the image. This stream will be used as input for the actual call to the Face API service.

                Face[] faces = await _faceServiceClient.DetectAsync(imageFileStream,
                    true, true, new List<FaceAttributeType>() { FaceAttributeType.Age });

This line is the actual call to the detection endpoint of the Face API. The first parameter is the file stream we created in the previous step. The rest of the parameters are all optional: the second parameter should be true if you want to get a face ID, the next specifies whether you want to receive face landmarks, and the last takes a list of facial attributes you may want to receive. In our case, we want the age attribute to be returned, so we need to specify that. The return type of this function call is an array of faces with all the parameters you have specified.

                List<double> ages = faces.Select(face => face.FaceAttributes.Age).ToList();
                FaceRectangle[] faceRects = faces.Select(face => face.FaceRectangle).ToArray();

                StatusText = "Status: Finished detecting faces...";

                foreach (var age in ages)
                {
                    Console.WriteLine(age);
                }

                return faceRects;
            }
        }

The first of these lines iterates over all faces and retrieves the approximate age of each face, which is later printed to the debug console window in the foreach loop. The second line iterates over all faces and retrieves the face rectangle, with the rectangular location of each face. This is the data we return to the calling function. Add a catch clause to finish the method: in case an exception is thrown by our API call, we catch it, show the error message, and return an empty FaceRectangle array.

With that code in place, you should now be able to run the full example. The resulting debug console window will print something like the following text:

    1 face detected
    23,7

Summary

In this article, we looked at what Microsoft Cognitive Services offers and got a brief description of all the APIs available. From there, we looked into the Face API, where we saw how to detect faces in images.

TensorFlow

Packt
04 Jan 2017
17 min read
In this article by Nicholas McClure, the author of the book TensorFlow Machine Learning Cookbook, we will cover basic recipes that show how TensorFlow works and how to access the data for this book, along with additional resources:

- How TensorFlow works
- Declaring tensors
- Using placeholders and variables
- Working with matrices
- Declaring operations

Introduction

Google's TensorFlow engine has a unique way of solving problems, one that allows us to solve machine learning problems very efficiently. We will cover the basic steps needed to understand how TensorFlow operates. This understanding is essential for the recipes in the rest of this book.

How TensorFlow works

At first, computation in TensorFlow may seem needlessly complicated. But there is a reason for it: because of how TensorFlow treats computation, developing more complicated algorithms is relatively easy. This recipe will talk you through the pseudocode of how a TensorFlow algorithm usually works.

Getting ready

Currently, TensorFlow is only supported on Mac and Linux distributions; using TensorFlow on Windows requires a virtual machine. Throughout this book we will only concern ourselves with the Python library wrapper of TensorFlow. This book will use Python 3.4+ (https://www.python.org) and TensorFlow 0.7 (https://www.tensorflow.org). While TensorFlow can run on the CPU, it runs faster on a GPU, and it is supported on graphics cards with NVidia Compute Capability 3.0+. To run on a GPU, you will also need to download and install the NVidia Cuda Toolkit (https://developer.nvidia.com/cuda-downloads). Some of the recipes also rely on a current installation of the Python packages Scipy, Numpy, and Scikit-Learn.

How to do it…

Here we will introduce the general flow of TensorFlow algorithms. Most recipes will follow this outline:

1. Import or generate data: All of our machine-learning algorithms depend on data. In this book we will either generate data or use an outside source of data. Sometimes it is better to rely on generated data because we then know the expected outcome.

2. Transform and normalize data: The data is usually not in the correct dimension or type that our TensorFlow algorithms expect, so we have to transform it before we can use it. Most algorithms also expect normalized data, which we will do here as well. TensorFlow has built-in functions that can normalize the data for you, as follows:

    data = tf.nn.batch_norm_with_global_normalization(...)

3. Set algorithm parameters: Our algorithms usually have a set of parameters that we hold constant throughout the procedure, for example the number of iterations, the learning rate, or other fixed parameters of our choosing. It is considered good form to initialize these together so the reader or user can easily find them:

    learning_rate = 0.01
    iterations = 1000

4. Initialize variables and placeholders: TensorFlow depends on us telling it what it can and cannot modify. TensorFlow will modify the variables during optimization to minimize a loss function. To accomplish this, we feed in data through placeholders. We need to initialize both of these, variables and placeholders, with size and type, so that TensorFlow knows what to expect.
See the following code:

    a_var = tf.constant(42)
    x_input = tf.placeholder(tf.float32, [None, input_size])
    y_input = tf.placeholder(tf.float32, [None, num_classes])

5. Define the model structure: After we have the data and have initialized our variables and placeholders, we have to define the model. This is done by building a computational graph: we tell TensorFlow what operations must be done on the variables and placeholders to arrive at our model predictions:

    y_pred = tf.add(tf.mul(x_input, weight_matrix), b_matrix)

6. Declare the loss function: After defining the model, we must be able to evaluate the output. This is where we declare the loss function. The loss function is very important, as it tells us how far off our predictions are from the actual values:

    loss = tf.reduce_mean(tf.square(y_actual - y_pred))

7. Initialize and train the model: Now that we have everything in place, we need to create an instance of our graph, feed in the data through the placeholders, and let TensorFlow change the variables to better predict our training data. Here is one way to initialize the computational graph:

    with tf.Session(graph=graph) as session:
        ...
        session.run(...)
        ...

Note that we can also initiate our graph with:

    session = tf.Session(graph=graph)
    session.run(...)

8. (Optional) Evaluate the model: Once we have built and trained the model, we should evaluate it by looking at how well it does with new data, through some specified criteria.

9. (Optional) Predict new outcomes: It is also important to know how to make predictions on new, unseen data. We can do this with all of our models once we have them trained.

How it works…

In TensorFlow, we have to set up the data, variables, placeholders, and model before we tell the program to train and change the variables to improve the predictions. TensorFlow accomplishes this through the computational graph. We tell it to minimize a loss function, and TensorFlow does this by modifying the variables in the model. TensorFlow knows how to modify the variables because it keeps track of the computations in the model and automatically computes the gradients for every variable. Because of this, we can see how easy it is to make changes and try different data sources. A compressed, runnable version of this outline follows the See also links below.

See also

- A great place to start is the official Python API TensorFlow documentation: https://www.tensorflow.org/versions/r0.7/api_docs/python/index.html
- There are also tutorials available: https://www.tensorflow.org/versions/r0.7/tutorials/index.html
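Here is the outline above compressed into a single runnable sketch for a tiny linear model. This is our own illustrative example rather than a recipe from the book: the data, parameter values, and variable names are ours, and it uses the same 0.7-era API as the surrounding recipes (several of these names changed in later TensorFlow releases).

    import numpy as np
    import tensorflow as tf

    # 1. Generate data for y = 3x plus a little noise
    # (step 2, transform/normalize, is unnecessary for this toy data)
    x_data = np.random.rand(100, 1).astype(np.float32)
    y_data = 3.0 * x_data + np.random.normal(0.0, 0.1, (100, 1)).astype(np.float32)

    # 3. Set algorithm parameters
    learning_rate = 0.05
    iterations = 200

    # 4. Initialize variables and placeholders
    x_input = tf.placeholder(tf.float32, [None, 1])
    y_target = tf.placeholder(tf.float32, [None, 1])
    weight = tf.Variable(tf.random_normal([1, 1]))

    # 5. Define the model structure and 6. declare the loss function
    y_pred = tf.matmul(x_input, weight)
    loss = tf.reduce_mean(tf.square(y_target - y_pred))

    # 7. Initialize and train the model
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    with tf.Session() as session:
        session.run(tf.initialize_all_variables())
        for i in range(iterations):
            session.run(train_step, feed_dict={x_input: x_data, y_target: y_data})
        print(session.run(weight))  # should end up close to 3.0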
Declaring tensors

Getting ready

Tensors are the data structure that TensorFlow operates on in the computational graph. We can declare these tensors as variables or feed them in as placeholders. First we must know how to create tensors. When we create a tensor and declare it to be a variable, TensorFlow creates several graph structures in our computation graph. It is also important to point out that just by creating a tensor, TensorFlow is not adding anything to the computational graph; it does so only after creating a variable out of the tensor. See the next section on variables and placeholders for more information.

How to do it…

Here we will cover the main ways to create tensors in TensorFlow.

Fixed tensors:

- Creating a zero-filled tensor:

    zero_tsr = tf.zeros([row_dim, col_dim])

- Creating a one-filled tensor:

    ones_tsr = tf.ones([row_dim, col_dim])

- Creating a constant-filled tensor:

    filled_tsr = tf.fill([row_dim, col_dim], 42)

- Creating a tensor out of an existing constant:

    constant_tsr = tf.constant([1,2,3])

Note that the tf.constant() function can be used to broadcast a value into an array, mimicking the behavior of tf.fill(), by writing tf.constant(42, [row_dim, col_dim]).

Tensors of similar shape: We can also initialize tensors based on the shape of other tensors, as follows:

    zeros_similar = tf.zeros_like(constant_tsr)
    ones_similar = tf.ones_like(constant_tsr)

Note that, since these tensors depend on prior tensors, we must initialize them in order. Attempting to initialize them all at once will result in an error.

Sequence tensors: TensorFlow allows us to specify tensors that contain defined intervals. The following functions behave very similarly to Python's range() and numpy's linspace() outputs:

    linear_tsr = tf.linspace(start=0.0, stop=1.0, num=3)

The resulting tensor is the sequence [0.0, 0.5, 1.0]. Note that this function includes the specified stop value. See the following function:

    integer_seq_tsr = tf.range(start=6, limit=15, delta=3)

The result is the sequence [6, 9, 12]. Note that this function does not include the limit value.

Random tensors: The following generates random numbers from a uniform distribution:

    randunif_tsr = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)

Note that this random uniform distribution draws from the interval that includes minval but not maxval (minval <= x < maxval). To get a tensor with random draws from a normal distribution:

    randnorm_tsr = tf.random_normal([row_dim, col_dim], mean=0.0, stddev=1.0)

There are also times when we want to generate normal random values that are assured to lie within certain bounds. The truncated_normal() function always picks normal values within two standard deviations of the specified mean:

    truncnorm_tsr = tf.truncated_normal([row_dim, col_dim], mean=0.0, stddev=1.0)

We might also be interested in randomizing entries of arrays. Two functions help us accomplish this, random_shuffle() and random_crop():

    shuffled_output = tf.random_shuffle(input_tensor)
    cropped_output = tf.random_crop(input_tensor, crop_size)

Later in this book, we will be interested in randomly cropping images of size (height, width, 3), where there are three color channels. To fix a dimension in the cropped_output, you must give it the maximum size in that dimension:

    cropped_image = tf.random_crop(my_image, [height/2, width/2, 3])

How it works…

Once we have decided how to create a tensor, we may also create the corresponding variable by wrapping the tensor in the Variable() function, as follows (more on this in the next section):

    my_var = tf.Variable(tf.zeros([row_dim, col_dim]))

There's more…

We are not limited to the built-in functions: we can convert any numpy array, Python list, or constant to a tensor using the function convert_to_tensor(). Note that this function also accepts tensors as input, in case we wish to generalize a computation inside a function.
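As a quick sanity check (our own snippet, using only the creation functions listed above), remember that none of these calls produce values until a session evaluates them:

    import tensorflow as tf

    sess = tf.Session()
    zero_tsr = tf.zeros([2, 3])
    linear_tsr = tf.linspace(start=0.0, stop=1.0, num=3)

    # The tensors above are only graph nodes; sess.run() produces the values
    print(sess.run(zero_tsr))    # [[ 0.  0.  0.] [ 0.  0.  0.]]
    print(sess.run(linear_tsr))  # [ 0.   0.5  1. ]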
Using placeholders and variables

Getting ready

One of the most important distinctions to make with data is whether it is a placeholder or a variable. Variables are the parameters of the algorithm, and TensorFlow keeps track of how to change these to optimize the algorithm. Placeholders are objects that allow you to feed in data of a specific type and shape, or that depend on the results of the computational graph, like the expected outcome of a computation.

How to do it…

The main way to create a variable is with the Variable() function, which takes a tensor as an input and outputs a variable. This is only the declaration; we still need to initialize the variable. Initializing is what puts the variable, with the corresponding methods, on the computational graph. Here is an example of creating and initializing a variable:

    my_var = tf.Variable(tf.zeros([2,3]))
    sess = tf.Session()
    initialize_op = tf.initialize_all_variables()
    sess.run(initialize_op)

To see what the computational graph looks like after creating and initializing a variable, see Figure 1 in the How it works… part of this section.

Placeholders are just holding the position for data to be fed into the graph. Placeholders get data from a feed_dict argument in the session. To put a placeholder in the graph, we must perform at least one operation on it. In the following snippet, we initialize the graph, declare x to be a placeholder, and define y as the identity operation on x, which just returns x. We then create data to feed into the x placeholder and run the identity operation. It is worth noting that TensorFlow will not return a self-referenced placeholder in the feed dictionary:

    import numpy as np

    sess = tf.Session()
    x = tf.placeholder(tf.float32, shape=[2,2])
    y = tf.identity(x)
    x_vals = np.random.rand(2,2)
    sess.run(y, feed_dict={x: x_vals})
    # Note that sess.run(x, feed_dict={x: x_vals}) will result in a self-referencing error.

How it works…

Figure 1 (Variable) shows the computational graph of initializing a variable as a tensor of zeros: the grey shaded region is a very detailed view of the operations and constants involved, while the main computational graph, with less detail, is the smaller graph outside the grey region in the upper right. Similarly, Figure 2 (Computational graph of an initialized placeholder) shows the graph of feeding a numpy array into a placeholder, again with a detailed grey-shaded region and the main, less detailed graph outside it in the upper right.

There's more…

During the run of the computational graph, we have to tell TensorFlow when to initialize the variables we have created. While each variable has an initializer method, the most common way to do this is with the helper function initialize_all_variables(). This function creates an operation in the graph that initializes all the variables we have created, as follows:

    initializer_op = tf.initialize_all_variables()

But if we want to initialize a variable based on the results of initializing another variable, we have to initialize variables in the order we want, as follows:

    sess = tf.Session()
    first_var = tf.Variable(tf.zeros([2,3]))
    sess.run(first_var.initializer)
    second_var = tf.Variable(tf.zeros_like(first_var))  # Depends on first_var
    sess.run(second_var.initializer)
Working with matrices

Getting ready

Many algorithms depend on matrix operations, and TensorFlow gives us easy-to-use operations to perform such matrix calculations. For all of the following examples, we can create a graph session by running the following code (numpy is imported as well, since we convert a numpy array to a tensor below):

    import numpy as np
    import tensorflow as tf

    sess = tf.Session()

How to do it…

Creating matrices: We can create two-dimensional matrices from numpy arrays or nested lists, as described in the earlier section on tensors. We can also use the tensor creation functions and specify a two-dimensional shape for functions like zeros(), ones(), truncated_normal(), and so on. TensorFlow also allows us to create a diagonal matrix from a one-dimensional array or list with the function diag(), as follows:

    identity_matrix = tf.diag([1.0, 1.0, 1.0])  # Identity matrix
    A = tf.truncated_normal([2, 3])             # 2x3 random normal matrix
    B = tf.fill([2,3], 5.0)                     # 2x3 constant matrix of 5's
    C = tf.random_uniform([3,2])                # 3x2 random uniform matrix
    D = tf.convert_to_tensor(np.array([[1., 2., 3.], [-3., -7., -1.], [0., 5., -2.]]))

    print(sess.run(identity_matrix))
    [[ 1.  0.  0.]
     [ 0.  1.  0.]
     [ 0.  0.  1.]]
    print(sess.run(A))
    [[ 0.96751703  0.11397751 -0.3438891 ]
     [-0.10132604 -0.8432678   0.29810596]]
    print(sess.run(B))
    [[ 5.  5.  5.]
     [ 5.  5.  5.]]
    print(sess.run(C))
    [[ 0.33184157  0.08907614]
     [ 0.53189191  0.67605299]
     [ 0.95889051  0.67061249]]
    print(sess.run(D))
    [[ 1.  2.  3.]
     [-3. -7. -1.]
     [ 0.  5. -2.]]

Note that if we were to run sess.run(C) again, we would reinitialize the random values and end up with different ones.

Addition and subtraction:

    print(sess.run(A+B))
    [[ 4.61596632  5.39771316  4.4325695 ]
     [ 3.26702736  5.14477345  4.98265553]]
    print(sess.run(B-B))
    [[ 0.  0.  0.]
     [ 0.  0.  0.]]

Multiplication:

    print(sess.run(tf.matmul(B, identity_matrix)))
    [[ 5.  5.  5.]
     [ 5.  5.  5.]]

Also, the matmul() function has arguments that specify whether to transpose the arguments before multiplication and whether each matrix is sparse.

Transpose:

    print(sess.run(tf.transpose(C)))
    [[ 0.67124544  0.26766731  0.99068872]
     [ 0.25006068  0.86560275  0.58411312]]

Again, note that C has been reinitialized, giving us different values than before.

Determinant:

    print(sess.run(tf.matrix_determinant(D)))
    -38.0

Inverse:

    print(sess.run(tf.matrix_inverse(D)))
    [[-0.5        -0.5        -0.5       ]
     [ 0.15789474  0.05263158  0.21052632]
     [ 0.39473684  0.13157895  0.02631579]]

Note that the inverse method is based on the Cholesky decomposition if the matrix is symmetric positive definite, and on the LU decomposition otherwise.

Decompositions:

Cholesky decomposition:

    print(sess.run(tf.cholesky(identity_matrix)))
    [[ 1.  0.  0.]
     [ 0.  1.  0.]
     [ 0.  0.  1.]]

Eigenvalues and eigenvectors:

    print(sess.run(tf.self_adjoint_eig(D)))
    [[-10.65907521  -0.22750691   2.88658212]
     [  0.21749542   0.63250104  -0.74339638]
     [  0.84526515   0.2587998    0.46749277]
     [ -0.4880805    0.73004459   0.47834331]]

Note that the function self_adjoint_eig() outputs the eigenvalues in the first row and the corresponding eigenvectors in the remaining rows. In mathematics, this is known as the eigendecomposition of a matrix.

How it works…

TensorFlow provides all the tools for us to get started with numerical computations and add such computations to our graphs. This notation might seem heavy for simple matrix operations; remember that we are adding these operations to the graph and telling TensorFlow which tensors to run through those operations.
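As a small worked example of composing these primitives (our own snippet, in the same 0.7-era API, reusing the matrix D from above), here is how we might solve the linear system Dx = b with the inverse we just computed:

    import tensorflow as tf

    sess = tf.Session()
    D = tf.constant([[1., 2., 3.], [-3., -7., -1.], [0., 5., -2.]])
    b = tf.constant([[6.], [-11.], [3.]])

    # Solve D x = b via the explicit inverse. This is fine for a 3x3 demonstration,
    # though a dedicated solver is preferable for large or ill-conditioned systems.
    x = tf.matmul(tf.matrix_inverse(D), b)
    print(sess.run(x))  # -> [[1.] [1.] [1.]], since (1, 1, 1) solves the system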
Declaring operations

Getting ready

Besides the standard arithmetic operations, TensorFlow provides further operations that we should be aware of before proceeding. Again, we can create a graph session by running the following code:

    import tensorflow as tf
    sess = tf.Session()

How to do it…

TensorFlow has the standard operations on tensors: add(), sub(), mul(), and div(). Note that all of the operations in this section evaluate the inputs element-wise, unless specified otherwise.

TensorFlow provides some variations of div() and related functions. It is worth mentioning that div() returns the same type as the inputs, which means it really returns the floor of the division (akin to Python 2) if the inputs are integers. To get the Python 3 behavior, which casts integers to floats before dividing and always returns a float, TensorFlow provides the function truediv(), shown as follows:

    print(sess.run(tf.div(3,4)))
    0
    print(sess.run(tf.truediv(3,4)))
    0.75

If we have floats and want integer division, we can use the function floordiv(). Note that this will still return a float, but rounded down to the nearest integer:

    print(sess.run(tf.floordiv(3.0,4.0)))
    0.0

Another important function is mod(), which returns the remainder after division:

    print(sess.run(tf.mod(22.0, 5.0)))
    2.0

The cross product between two tensors is achieved with the cross() function. Remember that the cross product is only defined for two three-dimensional vectors, so it only accepts two three-dimensional tensors:

    print(sess.run(tf.cross([1., 0., 0.], [0., 1., 0.])))
    [ 0.  0.  1.]

Here is a compact list of the more common math functions. All of these operate element-wise:

- abs(): absolute value of one input tensor
- ceil(): ceiling function of one input tensor
- cos(): cosine function of one input tensor
- exp(): base e exponential of one input tensor
- floor(): floor function of one input tensor
- inv(): multiplicative inverse (1/x) of one input tensor
- log(): natural logarithm of one input tensor
- maximum(): element-wise maximum of two tensors
- minimum(): element-wise minimum of two tensors
- neg(): negative of one input tensor
- pow(): the first tensor raised to the second tensor, element-wise
- round(): rounds one input tensor
- rsqrt(): one over the square root of one tensor
- sign(): returns -1, 0, or 1, depending on the sign of the tensor
- sin(): sine function of one input tensor
- sqrt(): square root of one input tensor
- square(): square of one input tensor

Specialty mathematical functions: There are some special math functions used in machine learning that are worth mentioning, and TensorFlow has built-in functions for them. Again, these operate element-wise, unless specified otherwise:

- digamma(): psi function, the derivative of the lgamma() function
- erf(): Gaussian error function of one tensor
- erfc(): complementary error function of one tensor
- igamma(): lower regularized incomplete gamma function
- igammac(): upper regularized incomplete gamma function
- lbeta(): natural logarithm of the absolute value of the beta function
- lgamma(): natural logarithm of the absolute value of the gamma function
- squared_difference(): computes the square of the difference between two tensors

How it works…

It is important to know what functions are available to add to our computational graphs. Mostly we will be concerned with the preceding functions.
We can also generate many different custom functions as compositions of the preceding ones, as follows:

    # Tangent function (tan(pi/4)=1)
    print(sess.run(tf.div(tf.sin(3.1416/4.), tf.cos(3.1416/4.))))
    1.0

There's more…

If we wish to add other operations to our graphs that are not listed here, we must create them from the preceding functions. Here is an example of an operation not listed above that we can add to our graph:

    # Define a custom polynomial function
    def custom_polynomial(value):
        # Return 3 * x^2 - x + 10
        return(tf.sub(3 * tf.square(value), value) + 10)

    print(sess.run(custom_polynomial(11)))
    362

Summary

In this article, we implemented some introductory recipes that will help us learn the basics of TensorFlow.

Notes from the field

Packt
03 Jan 2017
7 min read
In this article, Donabel Santos, author of the book Tableau 10 Business Intelligence Cookbook, offers a personal, and maybe not-so-conventional, way to introduce Tableau. I'd like to highlight a few key concepts and tricks that I think will be useful to you as you go along. These are certainly points I highlight on the board whenever I do training on Tableau. If you feel like we are jumping too far ahead, please go ahead and start with the following section, Tableau Primer, and come back to this section when you are ready for the tips and tricks.

Instead of thinking of Tableau as a software tool with a steep learning curve, it is useful to think of it as a blank slate. You will draw on it, keep adding things and removing things, until something makes sense or something insightful pops out. After you work with Tableau for a while and get more comfortable with its functionality, it might even feel like an extension of your brain to some degree: when you get access to data, you might automatically open Tableau to try to understand what's in that data.

Undo is your best friend

Do not be afraid to make mistakes, and do not be afraid to explore in Tableau. Do not come in with strict preconceptions, for example thinking that you can only use a time series graph when you have a measure and a date field. The best way to learn and explore how powerful Tableau is, is to try anything and everything; it's one of the best tools to experiment with. If you make a mistake, or if you don't like what you see, no sweat: just click on the friendly undo button and you are back to your previous view. If you are more of a shortcut person, it's Ctrl + Z on a PC or Command + Z on a Mac.

It doesn't change your original data

This is another common concern that comes up in my training sessions and whenever I talk to people about Tableau. No, Tableau does not write back to your data source. All the changes you make, such as creating calculated fields, changing data types, and editing aliases, are stored in your Tableau workbook or data source file.

Drag and drop

Tableau is highly drag-and-drop software. Although you can use the menu or a right-click for the same tasks, dragging and dropping is often faster, and it flows with your train of thought.

Look for visual cues

Tableau leverages its visual culture in your design area, so when you create views in Tableau, visual cues and icons can help you along the way. A number of the visual cues have been discussed in this section, but there are some lesser-known (or less noticeable) ones:

- Italicized field names mean they are Tableau-generated fields.
- Dual-axis charts create fused pills. Notice the area where the two pills touch: it is straight instead of curved.
- When you zoom in to maps, or when you search for a place, your map gets pinned (fixed to this place) until you unpin it.

Know the difference between blue (discrete) and green (continuous)

Knowing the difference between blue and green will take you far in the Tableau world. The data type icons you will find beside your field names in the side bar are colored either blue or green. When you drag fields onto shelves and cards, the pills are also colored blue or green. Simply speaking, blue means discrete and green means continuous. Discrete means individual, separate, countable, and finite.
Continuous means range, and technically, there is an infinite number of values within this range. What's more important is how these are manifested in Tableau. A blue discrete field will produce a header, and a green continuous field will produce an axis. If dropped onto the Color shelf, for example, a blue discrete field will use individual, finite colors, while a green continuous field will use a range (gradient) of colors.

Some confusion also arises because, by default, Tableau places numeric fields under Measures and colors them green, and places categorical fields under Dimensions and colors them blue. This won't always be the case. We can have numeric values that are discrete – for example, an Order Number. We can also see non-numerical, discrete fields under Measures.

Learn a few key shortcuts

Shortcuts are great, and you will typically work faster when you know a few of them. Here are some of my favorites:

Right click + Drag: Opens the Drop Field menu, which allows you to specify exactly which variation of the field you want to use.

Double click: Adds the field to the view. I particularly like this when creating text tables. After you place your first measure in Text, you can add more measures to your text table by double-clicking on the succeeding measures.

Ctrl + Arrow: Adjusts the height/width of the rows/columns in the view.

Ctrl + H: Toggles presentation mode.

You can find the complete list of shortcuts here: http://bit.ly/tableau-shortcuts

Unpackage option

The .twbx file is a Tableau packaged workbook, which means it packages local files with your Tableau workbook. When you right-click a .twbx file on a machine that has Tableau Desktop installed, you will see a new option called Unpackage. When you unpackage a .twbx file, you will get the .twb file and another folder that contains all the local files that were used in the original workbook:

Just keep in mind that data (at least the file-based data sources and extracts) gets packaged with your .twbx files. This is an important security and data governance consideration when you are deciding how to share your workbooks with others.

Know your table calculations

Table calculations are calculations on your table. How you structure or lay out your table (or view) will affect your table calculations. Table calculations are highly influenced by layout, filters, and scope and direction. Let's say, for example, you are calculating Percent of Total in your view:

If you swap the fields in your Rows and Columns, that is, change the layout, your numbers will change.

If you filter some of the products out, your numbers will change.

If you decide to compute Pane Down instead of Table Across, your numbers will change.

If you're looking for the common use cases for table calculations, check out the Tableau article entitled Top 10 Tableau Table Calculations, which can be found here: http://bit.ly/top10tablecalcs

LODs Rock

Many of the tasks that required complex table calculations or data blending have been greatly simplified by LODs (Level of Detail expressions). LODs allow us to have multiple levels of detail within a single view, and this increases the possibilities in Tableau. To learn more about Level of Detail expressions, I encourage you to check out the following:

Understanding Level of Detail Expressions: http://bit.ly/UnderstandingLOD

Top 15 LOD Expressions: http://bit.ly/top15LOD

It is possible…

Another common question that comes up is "can I do <this>?" or "is it possible to do <this>?".
The answer to many of these questions is yes, and many of the solutions will involve calculations and/or parameters. However, not all solutions will be quick and straightforward. Some may require multiple calculated fields, table calculations, LOD expressions, regular expressions, R scripts, and so on.

Summary

In this article, we looked at a few key concepts and tricks for working with Tableau. Instead of thinking of Tableau as a software tool with a steep learning curve, it is useful to think of it as a blank slate that you draw on, adding and removing things until something makes sense or something insightful pops out. After you work with Tableau for a while and get more comfortable with its functionalities, it might even feel like an extension of your brain to some degree. When you get access to data, you might automatically open Tableau to try and understand what's in that data.

Resources for Article:

Further resources on this subject:

Say Hi to Tableau [article]

Getting Started with Tableau Public [article]

R and its Diverse Possibilities [article]


Point-to-Point Networks

Packt
03 Jan 2017
6 min read
In this article by Jan Just Keijser, author of the book OpenVPN Cookbook - Second Edition, we will cover the following recipes: (For more resources related to this topic, see here.)

OpenVPN secret keys

Using IPv6

OpenVPN secret keys

This recipe uses OpenVPN secret keys to secure the VPN tunnel. This shared secret key is used to encrypt the traffic between the client and the server.

Getting ready

Install OpenVPN 2.3.9 or higher on two computers. Make sure that the computers are connected over a network. For this recipe, the server computer was running CentOS 6 Linux and OpenVPN 2.3.9, and the client was running Windows 7 Pro 64-bit and OpenVPN 2.3.10.

How to do it...

First, we generate a secret key on the server (listener):

[root@server]# openvpn --genkey --secret secret.key

We transfer this key to the client side over a secure channel (for example, using scp):

Next, we launch the server (listening)-side OpenVPN process:

[root@server]# openvpn --ifconfig 10.200.0.1 10.200.0.2 --dev tun --secret secret.key

Then, we launch the client-side OpenVPN process:

[WinClient] C:\>"Program Files\OpenVPN\bin\openvpn.exe" --ifconfig 10.200.0.2 10.200.0.1 --dev tun --secret secret.key --remote openvpnserver.example.com

The connection is established:

How it works...

The server listens for incoming connections on UDP port 1194. The client connects to the server on this port. After the initial handshake, the server configures the first available TUN device with the IP address 10.200.0.1, and it expects the remote end (peer address) to be 10.200.0.2. The client does the opposite.

There's more...

By default, OpenVPN uses two symmetric keys when setting up a point-to-point connection:

A cipher key to encrypt the contents of the packets being exchanged.

An HMAC key to sign packets. When packets arrive that are not signed using the appropriate HMAC key, they are dropped immediately. This is the first line of defense against a "Denial of Service" attack.

The same set of keys is used on both ends, and both keys are derived from the file specified using the --secret parameter. An OpenVPN secret key file is formatted as follows:

#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
<16 lines of random bytes>
-----END OpenVPN Static key V1-----

From the random bytes, the OpenVPN cipher and HMAC keys are derived. Note that these keys are the same for each session!

Using IPv6

In this recipe, we extend the complete site-to-site network to include support for IPv6.

Getting ready

Install OpenVPN 2.3.9 or higher on two computers. Make sure that the computers are connected over a network. For this recipe, the server computer was running CentOS 6 Linux and OpenVPN 2.3.9, and the client was running Fedora 22 Linux and OpenVPN 2.3.10. We'll use the secret.key file from the OpenVPN secret keys recipe here. We use the following network layout:

How to do it...

Create the server configuration file:

dev tun
proto udp
local openvpnserver.example.com
lport 1194
remote openvpnclient.example.com
rport 1194
secret secret.key 0
ifconfig 10.200.0.1 10.200.0.2
route 192.168.4.0 255.255.255.0
tun-ipv6
ifconfig-ipv6 2001:db8:100::1 2001:db8:100::2
user nobody
group nobody  # use "group nogroup" on some distros
persist-tun
persist-key
keepalive 10 60
ping-timer-rem
verb 3
daemon
log-append /tmp/openvpn.log

Save it as example1-9-server.conf.
On the client side, we create the configuration file:

dev tun
proto udp
local openvpnclient.example.com
lport 1194
remote openvpnserver.example.com
rport 1194
secret secret.key 1
ifconfig 10.200.0.2 10.200.0.1
route 172.31.32.0 255.255.255.0
tun-ipv6
ifconfig-ipv6 2001:db8:100::2 2001:db8:100::1
user nobody
group nobody  # use "group nogroup" on some distros
persist-tun
persist-key
keepalive 10 60
ping-timer-rem
verb 3

Save it as example1-9-client.conf. We start the tunnel on both ends:

[root@server]# openvpn --config example1-9-server.conf

And:

[root@client]# openvpn --config example1-9-client.conf

Now our site-to-site tunnel is established. After the connection comes up, the machines on the LANs behind both endpoints can be reached over the OpenVPN tunnel. Note that the client OpenVPN session is running in the foreground. Next, we ping the IPv6 address of the server endpoint to verify that IPv6 traffic over the tunnel is working:

[client]$ ping6 -c 4 2001:db8:100::1
PING 2001:db8:100::1(2001:db8:100::1) 56 data bytes
64 bytes from 2001:db8:100::1: icmp_seq=1 ttl=64 time=7.43 ms
64 bytes from 2001:db8:100::1: icmp_seq=2 ttl=64 time=7.54 ms
64 bytes from 2001:db8:100::1: icmp_seq=3 ttl=64 time=7.77 ms
64 bytes from 2001:db8:100::1: icmp_seq=4 ttl=64 time=7.42 ms

--- 2001:db8:100::1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 7.425/7.546/7.778/0.177 ms

Finally, we abort the client-side session by pressing Ctrl + C. The following screenshot lists the full client-side log:

How it works...

The following options enable IPv6 support next to the default IPv4 support:

tun-ipv6
ifconfig-ipv6 2001:db8:100::2 2001:db8:100::1

Also, in the client configuration, the options daemon and log-append are not present; hence, all OpenVPN output is sent to the screen and the process keeps running in the foreground.

There's more...

Log file errors

If we take a closer look at the client-side connection output, we see a few error messages after pressing Ctrl + C, most notably the following:

RTNETLINK answers: operation not permitted

This is a side effect of using the user nobody option to protect an OpenVPN setup, and it often confuses new users. What happens is this: OpenVPN starts as the root user, opens the appropriate tun device, and sets the right IPv4 and IPv6 addresses on this tun interface. For extra security, OpenVPN then switches to the nobody user, dropping all privileges associated with the root user. When OpenVPN terminates (in our case, by pressing Ctrl + C), it closes access to the tun device and tries to remove the IPv4 and IPv6 addresses assigned to that device. At this point, the error messages appear, as the user nobody is not allowed to perform these operations. Upon termination of the OpenVPN process, the Linux kernel closes the tun device and all configuration settings are removed. In this case, these error messages are harmless, but in general, one should pay close attention to the warning and error messages that are printed by OpenVPN.

IPv6-only tunnel

With OpenVPN 2.3, it is required to always enable IPv4 support. From OpenVPN 2.4 onward, it is possible to set up an "IPv6-only" connection.

Summary

In this article, we used OpenVPN secret keys to secure a point-to-point tunnel and extended the complete site-to-site network to include support for IPv6.

Resources for Article:

Further resources on this subject:

Introduction to OpenVPN [Article]

A quick start – OpenCV fundamentals [Article]

Untangle VPN Services [Article]


Working with Closures

Packt
03 Jan 2017
31 min read
In this article, Jon Hoffman, the author of the book Mastering Swift 3 - Linux, talks about closures. Most major programming languages have functionalities similar to what closures offer. Some of these implementations are really hard to use (Objective-C blocks), while others are easy (Java lambdas and C# delegates). I found that the functionality that closures provide is especially useful when developing frameworks. I have also used them extensively when communicating with remote services over a network connection. While blocks in Objective-C are incredibly useful (and I have used them quite a bit), the syntax used to declare a block was absolutely horrible. Luckily, when Apple was developing the Swift language, they made the syntax of closures much easier to use and understand. In this article, we will cover the following topics:

An introduction to closures

Defining a closure

Using a closure

Several useful examples of closures

How to avoid strong reference cycles with closures

(For more resources related to this topic, see here.)

An introduction to closures

Closures are self-contained blocks of code that can be passed around and used throughout our application. We can think of an Int type as a type that stores an integer, and a String type as a type that stores a string. In this context, a closure can be thought of as a type that contains a block of code. What this means is that we can assign closures to a variable, pass them as arguments to functions, and also return them from functions. Closures have the ability to capture and store references to any variable or constant from the context in which they were defined. This is known as closing over the variables or constants, and the best thing is, for the most part, Swift will handle the memory management for us. The only exception is when we create a strong reference cycle, and we will look at how to resolve this in the Creating strong reference cycles with closures section of this article. Closures in Swift are similar to blocks in Objective-C; however, closures in Swift are a lot easier to use and understand. Let's look at the syntax used to define a closure in Swift:

{ (parameters) -> return-type in
    statements
}

As we can see, the syntax used to create a closure looks very similar to the syntax we use to create functions in Swift, and actually, in Swift, global and nested functions are closures. The biggest difference in format between closures and functions is the in keyword. The in keyword is used in place of curly brackets to separate the definition of the closure's parameter and return types from the body of the closure. There are many uses for closures, and we will go over a number of them later in this article, but first we need to understand the basics of closures. Let's start by looking at some very basic uses for closures so that we can get a better understanding of what they are, how to define them, and how to use them.

Simple closures

We will begin by creating a very simple closure that does not accept any arguments and does not return a value. All it does is print Hello World to the console. Let's take a look at the following code:

let clos1 = { () -> Void in
    print("Hello World")
}

In this example, we create a closure and assign it to the constant clos1. Since there are no parameters defined between the parentheses, this closure will not accept any parameters. Also, the return type is defined as Void; therefore, this closure will not return any value.
The body of the closure contains one line that prints Hello World to the console. There are many ways to use closures; in this example, all we want to do is execute it. We execute this closure like this:

clos1()

When we execute the closure, we will see that Hello World is printed to the console. At this point, closures may not seem that useful, but as we get further along in this article, we will see how useful and powerful they can be. Let's look at another simple closure example. This closure will accept one string parameter named name, but will still not return a value. Within the body of the closure, we will print out a greeting to the name passed into the closure through the name parameter. Here is the code for this second closure:

let clos2 = { (name: String) -> Void in
    print("Hello \(name)")
}

The big difference between clos2 defined in this example and the previous clos1 closure is that we define a single string parameter between the parentheses. As we can see, we define parameters for closures just like we define parameters for functions. We can execute this closure in the same way in which we executed clos1. The following code shows how this is done:

clos2("Jon")

This example, when executed, will print the message Hello Jon to the console. Let's look at another way we can use the clos2 closure. Our original definition of closures stated, closures are self-contained blocks of code that can be passed around and used throughout our application code. What this tells us is that we can pass our closure from the context that it was created in to other parts of our code. Let's look at how to pass our clos2 closure into a function. We will define a function that accepts our clos2 closure like this:

func testClosure(handler: (String) -> Void) {
    handler("Dasher")
}

We define the function just like we would any other function; however, in our parameter list, we define a parameter named handler, and the type defined for the handler parameter is (String) -> Void. If we look closely, we can see that the (String) -> Void definition of the handler parameter matches the parameter and return types that we defined for the clos2 closure. This means that we can pass the clos2 closure into the function. Let's look at how to do this:

testClosure(handler: clos2)

We call the testClosure() function just like any other function, and the closure that is being passed in looks like any other variable. Since the clos2 closure is executed in the testClosure() function, we will see the message Hello Dasher printed to the console when this code is executed. As we will see a little later in this article, the ability to pass closures to functions is what makes closures so exciting and powerful. As the final piece of the closure puzzle, let's look at how to return a value from a closure. The following example shows this:

let clos3 = { (name: String) -> String in
    return "Hello \(name)"
}

The definition of the clos3 closure looks very similar to how we defined the clos2 closure. The difference is that we changed the Void return type to a String type. Then, in the body of the closure, instead of printing the message to the console, we used the return statement to return the message. We can now execute the clos3 closure just like the previous two closures, or pass the closure to a function like we did with the clos2 closure. The following example shows how to execute the clos3 closure:

var message = clos3("Buddy")

After this line of code is executed, the message variable will contain the Hello Buddy string.
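Before moving on, here is a small sketch of my own (the function name printGreeting is not from the book's examples) that combines the last two ideas: it passes the clos3 closure, which returns a value, into a function that uses that return value:

func printGreeting(handler: (String) -> String) {
    // Execute the closure and print the string it returns
    print(handler("Dasher"))
}

printGreeting(handler: clos3)   // prints "Hello Dasher"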
The previous three examples of closures demonstrate the format and how to define a typical closure. Those who are familiar with Objective-C can see that the format of closures in Swift is a lot cleaner and easier to use. The syntax for creating closures that we have shown so far in this article is pretty short; however, we can shorten it even more. In this next section, we will look at how to do this.

Shorthand syntax for closures

In this section, we will look at a couple of ways to shorten the definition of closures. Using the shorthand syntax for closures is really a matter of personal preference. There are a lot of developers who like to make their code as small and compact as possible, and they take great pride in doing so. However, at times, this can make code hard to read and understand for other developers. The first shorthand syntax for closures that we are going to look at is one of the most popular, and is the syntax we saw when we were using algorithms with arrays. This format is mainly used when we want to send a really small (usually one-line) closure to a function, like we did with the algorithms for arrays. Before we look at this shorthand syntax, we need to write a function that will accept a closure as a parameter:

func testFunction(num: Int, handler: () -> Void) {
    for _ in 0..<num {
        handler()
    }
}

This function accepts two parameters. The first parameter is an integer named num, and the second parameter is a closure named handler that does not have any parameters and does not return any value. Within the function, we create a for loop that will use the num integer to define how many times it loops. Within the for loop, we call the handler closure that was passed into the function. Now let's create a closure and pass it to testFunction() like this:

let clos = { () -> Void in
    print("Hello from standard syntax")
}
testFunction(num: 5, handler: clos)

This code is very easy to read and understand; however, it does take five lines of code. Now, let's look at how to shorten this code by writing the closure inline within the function call:

testFunction(num: 5, handler: { print("Hello from Shorthand closure") })

In this example, we created the closure inline within the function call, using the same syntax that we used with the algorithms for arrays. The closure is placed in between two curly brackets ({}), which means the code to create our closure is {print("Hello from Shorthand closure")}. When this code is executed, it will print out the message Hello from Shorthand closure five times on the screen. Let's look at how to use parameters with this shorthand syntax. We will begin by creating a new function that will accept a closure with a single parameter. We will name this function testFunction2. The following example shows what the new testFunction2 function does:

func testFunction2(num: Int, handler: (_ name: String) -> Void) {
    for _ in 0..<num {
        handler("Me")
    }
}

In testFunction2, we define our closure parameter like this: (_ name: String) -> Void. This definition means that the closure accepts one string parameter and does not return any value. Now, let's see how to use the same shorthand syntax to call this function:

testFunction2(num: 5, handler: { print("Hello from \($0)") })

The difference between this closure definition and the previous one is $0. The $0 parameter is shorthand for the first parameter passed into the function. If we execute this code, it prints out the message Hello from Me five times.
Using the dollar sign ($) followed by a number with inline closures allows us to define the closure without having to create a parameter list in the definition. The number after the dollar sign defines the position of the parameter in the parameter list. Let's examine this format a bit more, because we are not limited to only using the dollar sign ($) and number shorthand format with inline closures. This shorthand syntax can also be used to shorten the closure definition by allowing us to leave the parameter names off. The following example demonstrates this:

let clos5: (String, String) -> Void = {
    print("\($0) \($1)")
}

In this example, our closure has two string parameters defined; however, we do not give them names. The parameters are defined like this: (String, String). We can then access the parameters within the body of the closure using $0 and $1. Also, note that the closure definition is after the colon (:), using the same syntax that we use to define a variable type, rather than inside the curly brackets. When we use anonymous arguments, this is how we would define the closure. It would not be valid to define the closure like this:

let clos5b = { (String, String) -> Void in
    print("\($0) \($1)")
}

In this example, we will receive the error: anonymous closure arguments cannot be used inside a closure that has explicit arguments. We will use the clos5 closure like this:

clos5("Hello", "Kara")

Since Hello is the first string in the parameter list, it is accessed with $0, and as Kara is the second string in the parameter list, it is accessed with $1. When we execute this code, we will see the message Hello Kara printed to the console. The next example is used when the closure doesn't return any value. Rather than defining the return type as Void, we can use parentheses, as the following example shows:

let clos6: () -> () = {
    print("Howdy")
}

In this example, we define the closure as () -> (). This tells Swift that the closure does not accept any parameters and also does not return a value. We will execute this closure like this:

clos6()

As a personal preference, I am not very fond of this shorthand syntax. I think the code is much easier to read when the Void keyword is used rather than the parentheses. We have one more shorthand closure example to demonstrate before we begin showing some really useful examples of closures. In this last example, we will demonstrate how we can return a value from a closure without the need to include the return keyword. If the entire closure body consists of only a single statement, then we can omit the return keyword, and the result of the statement will be returned. Let's take a look at an example of this:

let clos7 = { (first: Int, second: Int) -> Int in
    first + second
}

In this example, the closure accepts two parameters of the Int type and will return an Int type. The only statement within the body of the closure adds the first parameter to the second parameter. However, if you notice, we do not include the return keyword before the addition statement. Swift will see that this is a single-statement closure and will automatically return the result, just as if we had put the return keyword before the addition statement. We do need to make sure the result type of our statement matches the return type of the closure. All of the examples that were shown in the previous two sections were designed to show how to define and use closures.
On their own, these examples did not really show off the power of closures, and they did not show how incredibly useful closures are. The remainder of this article is written to demonstrate the power and usefulness of closures in Swift.

Using closures with Swift's array algorithms

Now that we have a better understanding of closures, let's see how we can expand on these algorithms using more advanced closures. In this section, we will primarily be using the map algorithm for consistency purposes; however, we can use the basic ideas demonstrated with any of the algorithms. We will start by defining an array to use:

let guests = ["Jon", "Kim", "Kailey", "Kara"]

This array contains a list of names, and the array is named guests. This array will be used for all the examples in this section, except for the very last ones. Now that we have our guests array, let's add a closure that will print a greeting to each of the names in the guests array:

guests.map({ (name: String) -> Void in
    print("Hello \(name)")
})

Since the map algorithm applies the closure to each item of the array, this example will print out a greeting for each name within the guests array. After the first section in this article, we should have a pretty good understanding of how this closure works. Using the shorthand syntax that we saw in the last section, we could reduce the preceding example down to the following single line of code:

guests.map({ print("Hello \($0)") })

This is one of the few times, in my opinion, where the shorthand syntax may be easier to read than the standard syntax. Now, let's say that rather than printing the greeting to the console, we want to return a new array that contains the greetings. For this, we would return a String type from our closure, as shown in the following example:

var messages = guests.map({ (name: String) -> String in
    return "Welcome \(name)"
})

When this code is executed, the messages array will contain a greeting to each of the names in the guests array, while the guests array will remain unchanged. The preceding examples in this section showed how to add a closure to the map algorithm inline. This is good if we only have one closure that we want to use with the map algorithm, but what if we have more than one closure that we want to use, or if we want to use the closure multiple times or reuse it with different arrays? For this, we can assign the closure to a constant or variable and then pass in the closure, using its constant or variable name, as needed. Let's see how to do this. We will begin by defining two closures. One of the closures will print a greeting for each name in the guests array, and the other closure will print a goodbye message for each name in the guests array:

let greetGuest = { (name: String) -> Void in
    print("Hello guest named \(name)")
}

let sayGoodbye = { (name: String) -> Void in
    print("Goodbye \(name)")
}

Now that we have two closures, we can use them with the map algorithm as needed. The following code shows how to use these closures interchangeably with the guests array:

guests.map(greetGuest)
guests.map(sayGoodbye)

Whenever we use the greetGuest closure with the guests array, the greeting message is printed to the console, and whenever we use the sayGoodbye closure with the guests array, the goodbye message is printed to the console.
If we had another array named guests2, we could use the same closures for that array, as shown in the following example:

guests.map(greetGuest)
guests2.map(greetGuest)
guests.map(sayGoodbye)
guests2.map(sayGoodbye)

All of the examples in this section so far have either printed a message to the console or returned a new array from the closure. We are not limited to such basic functionality in our closures. For example, we can filter the array within our closure, as shown in the following example:

let greetGuest2 = { (name: String) -> Void in
    if name.hasPrefix("K") {
        print("\(name) is on the guest list")
    } else {
        print("\(name) was not invited")
    }
}

In this example, we print out a different message depending on whether the name starts with the letter K or not. As we mentioned earlier in the article, closures have the ability to capture and store references to any variable or constant from the context in which they were defined. Let's look at an example of this. Let's say that we have a function that contains the highest temperatures for the last seven days at a given location, and this function accepts a closure as a parameter. This function will execute the closure on the array of temperatures. The function can be written like this:

func temperatures(calculate: (Int) -> Void) {
    let tempArray = [72, 74, 76, 68, 70, 72, 66]
    tempArray.map(calculate)
}

This function accepts a closure defined as (Int) -> Void. We then use the map algorithm to execute this closure for each item of the tempArray array. The key to using a closure correctly in this situation is to understand that the temperatures function does not know or care what goes on inside the calculate closure. Also, be aware that the closure is unable to update or change the items within the function's context, which means that the closure cannot change any other variable within the temperatures function; however, it can update variables in the context that it was created in. Let's look at the function that we will create the closure in. We will name this function testFunction. Let's take a look at the following code:

func testFunction() {
    var total = 0
    var count = 0
    let addTemps = { (num: Int) -> Void in
        total += num
        count += 1
    }
    temperatures(calculate: addTemps)
    print("Total: \(total)")
    print("Count: \(count)")
    print("Average: \(total/count)")
}

In this function, we begin by defining two variables named total and count, where both variables are of the Int type. We then create a closure named addTemps that will be used to add all of the temperatures from the temperatures function together. The addTemps closure will also count how many temperatures there are in the array. To do this, the addTemps closure adds each item in the array to the total variable that was defined at the beginning of the function. The addTemps closure also keeps track of the number of items in the array by incrementing the count variable for each item. Notice that neither the total nor count variables are defined within the closure; however, we are able to use them within the closure because they were defined in the same context as the closure. We then call the temperatures function and pass it the addTemps closure. Finally, we print the total, count, and average temperature to the console.
When testFunction is executed, we see the following output in the console:

Total: 498
Count: 7
Average: 71

As we can see from the output, the addTemps closure is able to update and use items that are defined within the context that it was created in, even when the closure is used in a different context. Now that we have looked at using closures with the array map algorithm, let's look at using closures by themselves. We will also look at the ways we can clean up our code to make it easier to read and use.

Changing functionality

Closures also give us the ability to change the functionality of classes on the fly. With closures, we are able to write functions and classes whose functionality can change, based on the closure that is passed in as a parameter. In this section, we will show how to write a function whose functionality can be changed with a closure. Let's begin by defining a class that will be used to demonstrate how to swap out functionality. We will name this class TestClass:

class TestClass {
    typealias getNumClosure = ((Int, Int) -> Int)

    var numOne = 5
    var numTwo = 8
    var results = 0

    func getNum(handler: getNumClosure) -> Int {
        results = handler(numOne, numTwo)
        return results
    }
}

We begin this class by defining a type alias for our closure that is named getNumClosure. Any closure that is defined as a getNumClosure closure will take two integers and return an integer. Within this closure, we assume that it does something with the integers that we pass in to get the value to return, but it really doesn't have to. To be honest, this class doesn't really care what the closure does as long as it conforms to the getNumClosure type. Next, we define three integers that are named numOne, numTwo, and results. We also define a method named getNum(). This method accepts a closure that conforms to the getNumClosure type as its only parameter. Within the getNum() method, we execute the closure by passing in the numOne and numTwo class variables, and the integer that is returned is put into the results class variable. Now, let's look at several closures that conform to the getNumClosure type that we can use with the getNum() method:

var max: TestClass.getNumClosure = {
    if $0 > $1 {
        return $0
    } else {
        return $1
    }
}

var min: TestClass.getNumClosure = {
    if $0 < $1 {
        return $0
    } else {
        return $1
    }
}

var multiply: TestClass.getNumClosure = {
    return $0 * $1
}

var second: TestClass.getNumClosure = {
    return $1
}

var answer: TestClass.getNumClosure = {
    var tmp = $0 + $1
    return 42
}

In this code, we define five closures that conform to the getNumClosure type:

max: This returns the maximum value of the two integers that are passed in

min: This returns the minimum value of the two integers that are passed in

multiply: This multiplies both the values that are passed in and returns the product

second: This returns the second parameter that was passed in

answer: This returns the answer to life, the universe, and everything

In the answer closure, we have an extra line that looks like it does not have a purpose: var tmp = $0 + $1. We do this purposely, because the following code is not valid:

var answer: TestClass.getNumClosure = {
    return 42
}

This code gives us the error: contextual type for closure argument list expects 2 arguments, which cannot be implicitly ignored. As we can see from the error, Swift does not think that our closure accepts any parameters unless we use $0 and $1 within the body of the closure.
In the closure named second, Swift assumes that there are two parameters because $1 specifies the second parameter. We can now pass each one of these closures to the getNum method of our TestClass to change the functionality of the function to suit our needs. The following code illustrates this:

var myClass = TestClass()
myClass.getNum(handler: max)
myClass.getNum(handler: min)
myClass.getNum(handler: multiply)
myClass.getNum(handler: second)
myClass.getNum(handler: answer)

When this code is run, we will receive the following results for each of the closures:

max: results = 8
min: results = 5
multiply: results = 40
second: results = 8
answer: results = 42

The last example we are going to show you in this article is one that is used a lot in frameworks, especially ones with functionality that is designed to run asynchronously.

Selecting a closure based on results

In the final example, we will pass two closures to a method, and then, depending on some logic, one, or possibly both, of the closures will be executed. Generally, one of the closures is called if the method was successfully executed, and the other closure is called if the method failed. Let's start off by creating a class that will contain a method that will accept two closures and then execute one of the closures based on the defined logic. We will name this class TestClass. Here is the code for the TestClass class:

class TestClass {
    typealias ResultsClosure = ((String) -> Void)

    func isGreater(numOne: Int, numTwo: Int, successHandler: ResultsClosure, failureHandler: ResultsClosure) {
        if numOne > numTwo {
            successHandler("\(numOne) is greater than \(numTwo)")
        } else {
            failureHandler("\(numOne) is not greater than \(numTwo)")
        }
    }
}

We begin this class by creating a type alias that defines the closure that we will use for both the success and failure closures. We will name this type alias ResultsClosure. This example also illustrates why we use a type alias rather than retyping the closure definition. It saves us a lot of typing and also prevents us from making mistakes. In this example, if we did not use a type alias, we would need to retype the closure definition four times, and if we needed to change the closure definition, we would need to change it in four spots. With the type alias, we only need to type the closure definition once and then use the alias throughout the remaining code. We then create a method named isGreater that takes two integers as the first two parameters and then two closures as the next two parameters. The first closure is named successHandler, and the second closure is named failureHandler. Within the isGreater method, we check whether the first integer parameter is greater than the second one. If the first integer is greater, the successHandler closure is executed; otherwise, the failureHandler closure is executed. Now, let's create our two closures. The code for these two closures is as follows:

var success: TestClass.ResultsClosure = {
    print("Success: \($0)")
}

var failure: TestClass.ResultsClosure = {
    print("Failure: \($0)")
}

Note that both closures are defined as the TestClass.ResultsClosure type. In each closure, we simply print a message to the console to let us know which closure was executed. Normally, we would put some functionality in the closure.
We will then call the method with both closures, like this:

var test = TestClass()
test.isGreater(numOne: 8, numTwo: 6, successHandler: success, failureHandler: failure)

Note that in the method call, we are sending both the success closure and the failure closure. In this example, we will see the message Success: 8 is greater than 6. If we reversed the numbers, we would see the message Failure: 6 is not greater than 8. This use case is really good when we call asynchronous methods, such as loading data from a web service. If the web service call was successful, the success closure is called; otherwise, the failure closure is called. One big advantage of using closures like this is that the UI does not freeze while we wait for the web service call to complete. As an example, try to retrieve data from a web service like this:

var data = myWebClass.myWebServiceCall(someParameter)

Our UI would freeze while we wait for the response to come back, or we would have to make the call in a separate thread so that the UI would not hang. With closures, we pass the closures to the networking framework and rely on the framework to execute the appropriate closure when it is done. This does rely on the framework to implement concurrency correctly to make the calls asynchronously, but a decent framework should handle that for us.

Creating strong reference cycles with closures

Earlier in this article, we said, "the best thing is, for the most part, Swift will handle the memory management for us." The "for the most part" section of that quote means that, if everything is written in a standard way, Swift will handle the memory management of the closures for us. However, there are times when memory management fails us. Memory management will work correctly for all of the examples that we have seen in this article so far. However, it is possible to create a strong reference cycle that prevents Swift's memory management from working correctly. Let's look at what happens if we create a strong reference cycle with closures. A strong reference cycle may happen if we assign a closure to a property of a class instance and, within that closure, we capture the instance of the class. This capture occurs because we access a property of that particular instance using self, such as self.someProperty, or we assign self to a variable or constant, such as let c = self. By capturing a property of the instance, we are actually capturing the instance itself, thereby creating a strong reference cycle where the memory manager will not know when to release the instance. As a result, the memory will not be freed correctly. Let's begin by creating a class that has a closure and an instance of the String type as its two properties. We will also create a type alias for the closure type in this class and define a deinit() method that prints a message to the console. The deinit() method is called when the class gets released and the memory is freed. We will know when the class gets released when the message from the deinit() method is printed to the console. This class will be named TestClassOne. Let's take a look at the following code:

class TestClassOne {
    typealias nameClosure = (() -> String)

    var name = "Jon"

    lazy var myClosure: nameClosure = {
        return self.name
    }

    deinit {
        print("TestClassOne deinitialized")
    }
}

Now, let's create a second class that will contain a method that accepts a closure of the nameClosure type that was defined in the TestClassOne class.
This class will also have a deinit() method, so we can also see when it gets released. We will name this class TestClassTwo. Let's take a look at the following code:

class TestClassTwo {
    func closureExample(handler: TestClassOne.nameClosure) {
        print(handler())
    }

    deinit {
        print("TestClassTwo deinitialized")
    }
}

Now, let's see this code in action by creating instances of each class and then trying to manually release the instances by setting them to nil:

var testClassOne: TestClassOne? = TestClassOne()
var testClassTwo: TestClassTwo? = TestClassTwo()

testClassTwo?.closureExample(handler: testClassOne!.myClosure)

testClassOne = nil
print("testClassOne is gone")

testClassTwo = nil
print("testClassTwo is gone")

What we do in this code is create two optionals that may contain an instance of our two test classes or nil. We need to create these variables as optionals because we will be setting them to nil later in the code so that we can see whether the instances are released properly. We then call the closureExample() method of the TestClassTwo instance and pass it the myClosure property from the TestClassOne instance. We now try to release the TestClassOne and TestClassTwo instances by setting them to nil. Keep in mind that when an instance of a class is released, it attempts to call the deinit() method of the class, if it exists. In our case, both classes have a deinit() method that prints a message to the console, so we know when the instances are actually released. If we run this project, we will see the following messages printed to the console:

testClassOne is gone
TestClassTwo deinitialized
testClassTwo is gone

As we can see, we do attempt to release the TestClassOne instance, but the deinit() method of the class is never called, indicating that it was not actually released; however, the TestClassTwo instance was properly released, because the deinit() method of that class was called. To see how this is supposed to work without the strong reference cycle, change the myClosure closure to return a string type that is defined within the closure itself, as shown in the following code:

lazy var myClosure: nameClosure = {
    return "Just Me"
}

Now, if we run the project, we should see the following output:

TestClassOne deinitialized
testClassOne is gone
TestClassTwo deinitialized
testClassTwo is gone

This shows that the deinit() methods of both the TestClassOne and TestClassTwo instances were properly called, indicating that they were both released properly. In the first example, we captured an instance of the TestClassOne class within the closure because we accessed a property of the TestClassOne class using self.name. This created a strong reference from the closure to the instance of the TestClassOne class, preventing memory management from releasing the instance. Swift does provide a very easy and elegant way to resolve strong reference cycles in closures. We simply need to tell Swift not to create a strong reference by creating a capture list. A capture list defines the rules to use when capturing reference types within a closure. We can declare each reference to be a weak or unowned reference rather than a strong reference. The weak keyword is used when there is a possibility that the reference will become nil during its lifetime; therefore, the type must be an optional. The unowned keyword is used when there is no possibility of the reference becoming nil. We define the capture list by pairing the weak or unowned keyword with a reference to a class instance.
These pairings are written within square brackets ([ ]). Therefore, if we update the myClosure closure and define an unowned reference to self, we should eliminate the strong reference cycle. The following code shows what the new myClosure closure looks like:

lazy var myClosure: nameClosure = {
    [unowned self] in
    return self.name
}

Notice the new line: [unowned self] in. This line says that we do not want to create a strong reference to the instance of self. If we run the project now, we should see the following output:

TestClassOne deinitialized
testClassOne is gone
TestClassTwo deinitialized
testClassTwo is gone

This shows that both the TestClassOne and TestClassTwo instances were properly released.

Summary

In this article, we saw that we can define a closure just like we can define an Int or String type. We can assign closures to a variable, pass them as arguments to functions, and also return them from functions. Closures capture and store references to any constants or variables from the context in which the closure was defined. We have to be careful with this functionality to make sure that we do not create a strong reference cycle, which would lead to memory leaks in our applications. Swift closures are very similar to blocks in Objective-C, but they have a much cleaner and more elegant syntax. This makes them a lot easier to use and understand. Having a good understanding of closures is vital to mastering the Swift programming language and will make it easier to develop great applications that are easy to maintain. They are also essential for creating first-class frameworks that are both easy to use and maintain. The three use cases that we saw in this article are by no means the only useful applications of closures. I can promise you that the more you use closures in Swift, the more uses you will find for them. Closures are definitely one of the most powerful and useful features of the Swift language, and Apple did a great job implementing them in the language.

Resources for Article:

Further resources on this subject:

Revisiting Linux Network Basics [article]

An Introduction to the Terminal [article]

Veil-Evasion [article]

Our First Program!

Packt
03 Jan 2017
5 min read
In this article by Syed Omar Faruk Towaha, author of Learning C for Arduino, we will learn how to connect our Arduino to our system and how to write a program in the Arduino IDE. (For more resources related to this topic, see here.)

First, connect the A-B cable to your Arduino, and then connect the other end of the cable to the PC. Your PC will make a sound, which confirms that the Arduino has connected to the PC. Now open the Arduino IDE. From the menu, go to Tools | Board: "Arduino/Genuino Uno". You can select any board you have bought from the list. See the following screenshot for the list:

You have to select the port on which the Arduino is connected. There are a number of ways to find out which port your Arduino is connected to.

Hello Arduino!

Let's write our first program in the Arduino IDE. Go to File and click on New. A text editor will open with a few lines of code. Delete those lines first, and then type the following code:

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("Hello Arduino!\n");
}

From the menu, go to Sketch and click Upload. It is a good practice to verify or compile the code before uploading it to the Arduino board. To verify or compile the code, you need to go to Sketch and click Verify/Compile. You will see the message Done compiling. at the bottom of the IDE if the code is error free. See the following figure for the explanation:

After the successful upload, you need to open the serial monitor of the Arduino IDE. To open the serial monitor, go to Tools and click on Serial Monitor. You will see the following screen:

setup() function

The setup() function helps initialize variables and set pin modes, and we can also start using libraries here. This function is called first when we compile or upload the whole code. The setup() function runs only once, after the sketch is uploaded to the board, and it runs again every time we press the reset button on the Arduino. In our code, we used Serial.begin(9600);, which sets the data rate for the serial communication. We already know that serial communication is the process by which we send data one bit at a time over a communication channel. For the Arduino to communicate with the computer, we used the data rate 9600, which is a standard data rate. This is also known as the baud rate. We can set the baud rate to any of the following, depending on the connection speed and type. I recommend using 9600 for general data communication purposes. If we use a baud rate that is too high for the connection, the characters we print might appear broken:

300
1200
2400
4800
9600
19200
38400
57600
74880
115200
230400
250000

If we do not declare the baud rate inside the setup() function, there will be no output on the serial monitor. A baud rate of 9600 means 960 characters per second. This is because, in serial communication, you need to transmit one start bit, eight data bits, and one stop bit, for a total of 10 bits, at a speed of 9600 bits per second.

loop() function

The loop() function lets the code run again and again until we disconnect the Arduino. We have written a print statement in the loop() function, which will execute indefinitely. To print something on the serial monitor, we need to write the following line:

Serial.print("Anything We Want To Print");

Between the quotation marks, we can write anything we want to print. In the last code, we wrote Serial.print("Hello Arduino!\n");. That's why, on the serial monitor, we saw Hello Arduino! being printed over and over. We used \n after Hello Arduino!. This is called an escape sequence.
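As a quick illustration (this small sketch is my own, not from the book), the \n escape sequence is what breaks the output into separate lines:

void setup() {
  Serial.begin(9600);                  // start serial communication at 9600 baud
  Serial.print("Hello\nArduino\n");    // \n prints "Hello" and "Arduino" on separate lines
}

void loop() {
  // nothing to repeat here
}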
For now, just remember that we need to put this at the end of each line inside the print statement to break the line and print the next output on the next line. We can use Serial.println("Hello Arduino!"); instead of Serial.print("Hello Arduino!\n");. Both will give the same result. Now, let's put Serial.println("Hello Arduino!"); inside the setup() function and see what happens. Have a look at the following figure. Hello Arduino! is printed only once:

Since C is a case-sensitive language, we need to be careful about the casing. We cannot use serial.print("Hello Arduino!"); instead of Serial.print("Hello Arduino!\n");.

Summary

In this article, we learned how to connect the Arduino to our system and how to upload code to our board. We also learned how Arduino code works. If you are new to Arduino, this article is the most important one.

Resources for Article:

Further resources on this subject:

Zabbix Configuration [article]

A Configuration Guide [article]

Thorium and Salt API [article]


Building an extensible Chat Bot using JavaScript & YAML

Andrea Falzetti
03 Jan 2017
6 min read
In this post, I will share with you my experience in designing and coding an extensible chat bot using YAML and JavaScript. I will show you that it's not always necessary to use AI, ML, or NLP to make a great bot. This won't be a step-by-step guide. My goal is to give you an overview of a project like this and inspire you to build your own.

Getting Started

The concept I will offer you consists of creating a collection of YAML scripts that represent all the possible conversations that the bot can handle. When a new conversation starts, the YAML code gets converted to a JavaScript object that we can easily manage in Node.js. I have used WebSockets, implemented with socket.io, to transmit the messages back and forth. First, we need to agree on a set of commands that can be used to create the YAML scripts. Let's start with some essentials:

messages: To send an array of messages from the bot to the user

input: To ask the user for some data

next: The action we want to perform next within the script

script: The next script that will follow in the conversation with the user

when, empty, not-empty: Conditional statements

YAML script example

A sample script will look like the following:

Link to the gist

Then we need to implement those commands in our bot engine.

The Bot engine

Using the WebSockets, I send the bot the name of the script that I want to play; the bot engine loads it and converts it to a JavaScript object. If the client doesn't know which script to play first, it can call a default script called "hello" that will introduce the bot. In each script, the first action to run will be the one with index 0. As per the example above, the first thing the bot will do is send two messages to the user:

With the command next, we jump to the next block, index = 1. Again, we send another message, immediately followed by an input request:

At this point, the client receives the input request and allows the user to type the answer. Once the value is submitted, we send the data back to the bot, which appends the information to a key-value store, where all data received live from the user is accessible via a key (for example, user_name). Using the when statement, we define conditional branches of the conversation. This is often the case when dealing with data validation. In the example, we want to make sure we get a valid name from the user, so if the value received is empty, we jump back in the script and ask for it again, continuing with the following step only when the name is valid. Finally, the bot will send a message, this time containing a variable previously received and stored in the key-value store:

In the next script, we will see how to handle multiple choices and buttons, to allow the user to make a decision:

Link to the gist

The conversation will start with a message from the bot, followed by an input request with the input type set to buttons_single_select, which the client translates to "display multiple buttons" using an array of options received with the input request:

When the user clicks on one of the options, the UI sends the user's choice back to the bot, which will match it against one of the existing branches. Having found the correct branch, the bot engine will look for the next command to run; in this case, it is another input request, this time expecting to receive an image from the client:

Once the file has been sent, the conversation ends just after the bot sends a last message confirming that the file was uploaded successfully.
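To make the engine side more concrete, here is a minimal sketch of my own showing how such an engine could step through a script in Node.js. The exact script shape lives in the gists, which we can't see here, so the block structure, file layout, and function names below are assumptions, socket is assumed to be a connected socket.io socket, and error handling is omitted:

const fs = require('fs');
const yaml = require('js-yaml');

// Load a YAML script and convert it to a JavaScript object
function loadScript(name) {
  return yaml.safeLoad(fs.readFileSync(`scripts/${name}.yml`, 'utf8'));
}

// Play the block at the given index, then move on via "next"
function play(script, index, socket, store) {
  const block = script[index];
  if (!block) return; // no more blocks to run, stop here

  if (block.messages) {
    // Send each message in the array from the bot to the user
    block.messages.forEach(text => socket.emit('message', text));
  }

  if (block.input) {
    // Ask the client for input and resume when the answer comes back
    socket.emit('input', block.input);
    socket.once('answer', value => {
      store[block.input.name] = value; // save the answer into the key-value store
      play(script, block.next, socket, store);
    });
  } else if (block.next !== undefined) {
    play(script, block.next, socket, store);
  }
}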
Using YAML gives you the flexibility to build many different conversations, and also allows you to easily implement A/B testing of your conversations.

Implementing the bot engine with JavaScript / Node.js

To build something able to play the YAML scripts above, you need to iterate over the script commands until you find an explicit end command. It's very important to keep in memory the index of the current command in progress, so that you can move on as soon as the current task completes. When you meet a new command, you should pass it to a resolver that knows what each command does and is able to run the specific portion of code or function. Additionally, you will need a function that listens to the input received from the clients, validating and saving it into a key-value store.

Extending the command set

This approach allows you to create a large set of commands that do different things, including queries, Ajax requests, API calls to external services, and so on. You can combine your commands with a when statement so that a callback or promise can evolve in its specific branch depending on the result you got.

Conclusion

If you are wondering where the demo images come from, they are screenshots of a React view built with the help of devices.css, a CSS package that provides the flat shapes of iPhone, Android, and Windows phones in different colors using only pure CSS. I have built this view to test the bot, using socket.io-client for the WebSockets and React for the UI. This is not just a proof of concept; I am working on a project where we have currently implemented this logic. I invite you to review it, think about it, and leave feedback. Thanks!

About the author

Andrea Falzetti is an enthusiastic full-stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.


Digging Deeper

Packt
03 Jan 2017
6 min read
In this article by Craig Clayton, author of the book iOS 10 Programming for Beginners, we went over the basics of Swift to get you warmed up. Now, we will dig deeper and learn some more programming concepts. These concepts will build on what you have already learned. In this article, we will cover the following topics:

Ranges
Control flow

(For more resources related to this topic, see here.)

Creating a Playground project

Please launch Xcode and click on Get started with a playground. The options screen for creating a new playground will appear. Please name your new Playground SwiftDiggingDeeper and make sure that your Platform is set to iOS. Now, let's delete everything inside of the file and toggle on the Debug panel using either the toggle button or Cmd + Shift + Y. Your screen should look like mine.

Ranges

These generic data types represent a sequence of numbers. Let's look at the image below to understand:

Closed range

Notice that, in the image above, we have numbers ranging from 10 to 20. Rather than having to write each value, we can use ranges to represent all of these numbers in shorthand form. In order to do this, let's remove all of the numbers in the image except 10 and 20. Now that we have removed those numbers, we need a way to tell Swift that we want to include all of the numbers that we just deleted. This is where the range operator (...) comes into play. So, in Playgrounds, let's create a constant called range and set it equal to 10...20:

let range = 10...20

The range that we just entered says that we want the numbers between 10 and 20 as well as both 10 and 20 themselves. This type of range is known as a closed range. We also have what is called a half closed range.

Half closed range

Let's make another constant called halfClosedRange and set it equal to 10..<20:

let halfClosedRange = 10..<20

A half closed range is the same as a closed range except that the end value will not be included. In this example, that means that 10 through 19 will be included and 20 will be excluded. At this point, you will notice that your results panel just shows you CountableClosedRange(10...20) and CountableRange(10..<20). We cannot see all the numbers within the range. In order to see all the numbers, we need to use a loop.

Control flow

In Swift, we use a variety of control statements.

For-In Loop

One of the most common control statements is a for-in loop. A for-in loop allows you to iterate over each element in a sequence. Let's see what a for-in loop looks like:

for <value> in <sequence> {
    // Code here
}

So, we start our for-in loop with for, which is followed by <value>. This is actually a local constant (only the for-in loop can access it) and can be any name you like. Typically, you will want to give this value an expressive name. Next, we have in, which is followed by <sequence>. This is where we want to give it our sequence of numbers. Let's write the following into Playgrounds:

for value in range {
    print("closed range - \(value)")
}

Notice that, in our debug panel, we see all of the numbers we wanted in our range. Let's do the same for our variable halfClosedRange by adding the following:

for index in halfClosedRange {
    print("half closed range - \(index)")
}

In our debug panel, we see that we get the numbers 10 through 19. One thing to note is that these two for-in loops have different variables. In the first loop, we used value, and in the second one, we used index. You can make these whatever you choose.
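As a small additional exercise (ours, not from the book), a for-in loop can do more than print; the following Playground snippet sums the values of a closed range:

var total = 0
for value in 1...5 {
    total += value   // accumulate each value of the range
}
print("sum of 1...5 is \(total)") // prints "sum of 1...5 is 15"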
In addition, in the two examples, we used constants, but we could actually just use the ranges within the loop. Please add the following:

for index in 0...3 {
    print("range inside - \(index)")
}

Now, you will see 0 to 3 print inside the debug panel. What if you wanted the numbers to go in reverse order? Let's input the following for-in loop:

for index in (10...20).reversed() {
    print("reversed range - \(index)")
}

We now have the numbers in descending order in our debug panel. When we add ranges into a for-in loop, we have to wrap our range inside parentheses so that Swift recognizes that the period before reversed() is not a decimal.

The while Loop

A while loop evaluates a Boolean expression at the start of the loop, and the set of statements runs until the condition becomes false. It is important to note that while loops can be executed zero or more times. Here is the basic syntax of a while loop:

while <condition> {
    // statement
}

Let's write a while loop in Playgrounds and see how it works. Please add the following:

var y = 0
while y < 50 {
    y += 5
    print("y: \(y)")
}

So, this loop starts with a variable that begins at zero. Before the while loop executes, it checks to see if y is less than 50; and, if so, it continues into the loop. By using the += operator, we increment y by 5 each time. Our while loop will continue to do this until y is no longer less than 50. Now, let's add the same while loop after the one we created and see what happens:

while y < 50 {
    y += 5
    print("y: \(y)")
}

You will notice that the second while loop never runs. This may not seem important until we look at our next type of loop.

The repeat-while loop

A repeat-while loop is pretty similar to a while loop in that it continues to execute the set of statements until a condition becomes false. The main difference is that the repeat-while loop does not evaluate its Boolean condition until the end of the loop. Here is the basic syntax of a repeat-while loop:

repeat {
    // statement
} while <condition>

Let's write a repeat-while loop in Playgrounds and see how it works. Type the following into Playgrounds:

var x = 0
repeat {
    x += 5
    print("x: \(x)")
} while x < 100
print("repeat completed x: \(x)")

So, our repeat-while loop executes first and increments x by 5, and afterwards (as opposed to checking the condition first, as with a while loop), it checks to see if x is less than 100. This means that our repeat-while loop will continue until x reaches 100. But here is where it gets interesting. Let's add another repeat-while loop after the one we just created:

repeat {
    x += 5
    print("x: \(x)")
} while x < 100
print("repeat completed again x: \(x)")

This time, the repeat-while loop incremented x to 105. This happens because the Boolean expression does not get evaluated until after x is incremented by 5. Knowing this behavior will help you pick the right loop for your situation. So far, we have looked at three loops: the for-in loop, the while loop, and the repeat-while loop. We will use the for-in loop again, but first we need to talk about collections.

Summary

This article summarized ranges and control flow using Xcode 8.

Resources for Article:

Further resources on this subject:

Tools in TypeScript [article]
Design with Spring AOP [article]
Thinking Functionally [article]


Dimensionality Reduction

Packt
03 Jan 2017
15 min read
In this article by Ashish Kumar and Avinash Paul, the authors of the book Mastering Text Mining with R, we will see how data volume and high dimensionality pose an astounding challenge in text mining tasks. Inherent noise and the computational cost of processing huge amounts of data make it even more arduous. The science of dimensionality reduction lies in the art of losing only a commensurately small amount of information while still being able to reduce the high-dimensional space into a manageable proportion.

(For more resources related to this topic, see here.)

For classification and clustering techniques to be applied to text data, for different natural language processing activities, we need to reduce the dimensions and noise in the data so that each document can be represented using fewer dimensions, thus significantly reducing the noise that can hinder performance.

The curse of dimensionality

Topic modeling and document clustering are common text mining activities, but text data can be very high-dimensional, which can cause a phenomenon called the curse of dimensionality. Some literature also calls it concentration of measure:

Distance is attributed to all the dimensions and assumes each of them to have the same effect on the distance. The higher the dimensions, the more similar things appear to each other.
The similarity measures do not take into account the association of attributes, which may result in inaccurate distance estimation.
The number of samples required per attribute increases exponentially with the increase in dimensions.
A lot of dimensions might be highly correlated with each other, thus causing multi-collinearity.
Extra dimensions cause a rapid volume increase that can result in high sparsity, which is a major issue in any method that requires statistical significance. It also causes huge variance in estimates, near duplicates, and poor predictors.

Distance concentration and computational infeasibility

Distance concentration is a phenomenon associated with high-dimensional space wherein pairwise distances or dissimilarities between points appear indistinguishable. All the vectors in high dimensions appear to be orthogonal to each other. The distances from each data point to its neighbors, farthest or nearest, become equal. This totally jeopardizes the utility of methods that use distance-based measures.

Let's consider that the number of samples is n and the number of dimensions is d. If d is very large, the number of samples may prove to be insufficient to accurately estimate the parameters. For a dataset with d dimensions, the number of parameters in the covariance matrix will be d^2. In an ideal scenario, n should be much larger than d^2 to avoid overfitting. In general, there is an optimal number of dimensions to use for a given fixed number of samples. While it may feel like a good idea to engineer more features when we are not able to solve a problem with a smaller number of features, the computational cost and model complexity increase with the rise in the number of dimensions. For instance, if n samples are dense enough for a one-dimensional feature space, then n^k samples would be required for a comparable density in a k-dimensional feature space.

Dimensionality reduction

The complex and noisy characteristics of high-dimensional textual data can be handled by dimensionality reduction techniques. These techniques reduce the dimension of the textual data while still preserving its underlying statistics.
Though the dimensions are reduced, it is important to preserve the inter-document relationships. The idea is to have the minimum number of dimensions that can preserve the intrinsic dimensionality of the data. A textual collection is mostly represented in the form of a term document matrix wherein we have the importance of each term in a document. The dimensionality of such a collection increases with the number of unique terms. If we were to suggest the simplest possible dimensionality reduction method, it would be to specify a limit or boundary on the distribution of different terms in the collection. Any term that occurs with a significantly high frequency is not going to be informative for us, and the barely present terms can undoubtedly be ignored and considered as noise. Words that generally occur with high frequency and have no particular meaning are referred to as stop words; some examples are is, was, then, and the. Words that occur just once or twice are more likely to be spelling errors or complicated words, and hence both these and stop words should not be considered for modeling the document in the Term Document Matrix (TDM). We will discuss a few dimensionality reduction techniques in brief and dive into their implementation using R.

Principal component analysis

Principal component analysis (PCA) reveals the internal structure of a dataset in a way that best explains the variance within the data. PCA identifies patterns to reduce the dimensions of the dataset without significant loss of information. The main aim of PCA is to project a high-dimensional feature space into a smaller subset to decrease computational cost. PCA helps in computing new features, called principal components; these principal components are uncorrelated linear combinations of the original features, projected in the direction of higher variability. The important point is to map the set of features into a matrix, M, and compute the eigenvalues and eigenvectors. Eigenvectors provide simpler solutions to problems that can be modeled using linear transformations along axes by stretching, compressing, or flipping. Eigenvalues provide the length and magnitude of eigenvectors where such transformations occur. Eigenvectors with greater eigenvalues are selected for the new feature space because they enclose more information than eigenvectors with lower eigenvalues for a data distribution. The first principal component has the greatest possible variance, that is, the largest eigenvalue; the next principal component has the highest variance possible while being uncorrelated with the first PC. The nth PC is the linear combination with the maximum variance that is uncorrelated with all previous PCs. PCA comprises the following steps:

Compute the n-dimensional mean of the given dataset.
Compute the covariance matrix of the features.
Compute the eigenvectors and eigenvalues of the covariance matrix.
Rank/sort the eigenvectors by descending eigenvalue.
Choose x eigenvectors with the largest eigenvalues.

Eigenvector values represent the contribution of each variable to the principal component axis. Principal components are oriented in the direction of maximum variance in m-dimensional space. PCA is one of the most widely used multivariate methods for discovering meaningful, new, informative, and uncorrelated features. This methodology also reduces dimensionality by rejecting low-variance features and is useful in reducing the computational requirements for classification and regression analysis.
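To make these steps concrete, here is a minimal, hypothetical R sketch (ours, not from the book) that implements PCA by hand on a random matrix, following the steps listed above; the variable names are our own:

# PCA from scratch, following the five steps above
set.seed(42)
data <- replicate(5, rnorm(100))                       # 100 observations, 5 features

centered <- scale(data, center = TRUE, scale = FALSE)  # step 1: center the data
cov_mat  <- cov(centered)                              # step 2: covariance matrix
eig      <- eigen(cov_mat)                             # steps 3-4: eigen decomposition;
                                                       # eigen() returns values in descending order
k          <- 2                                        # step 5: keep the top k components
components <- eig$vectors[, 1:k]
scores     <- centered %*% components                  # project the data onto the new axes

# proportion of variance explained by the retained components
eig$values[1:k] / sum(eig$values)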
Using R for PCA

R also has two inbuilt functions for accomplishing PCA: prcomp() and princomp(). These two functions expect the dataset to be organized with variables in columns and observations in rows, with a structure like a data frame. They also return the new data in the form of a data frame, with the principal components given in columns. prcomp() and princomp() are similar functions used for accomplishing PCA; they have slightly different implementations for computing PCA. Internally, the princomp() function performs PCA using eigenvectors. The prcomp() function uses a similar technique known as singular value decomposition (SVD). SVD has slightly better numerical accuracy, so prcomp() is generally the preferred function. princomp() fails in situations where the number of variables is larger than the number of observations. Each function returns a list whose class is prcomp or princomp. The information returned and the terminology are summarized in the following table:

prcomp()   princomp()   Explanation
sdev       sdev         Standard deviation of each column
rotation   loadings     Principal components
center     center       Value subtracted from each row or column to center the data
scale      scale        Scale factors used
x          scores       The rotated data
           n.obs        Number of observations of each variable
           call         The call to the function that created the object

Here's a list of the functions available in different R packages for performing PCA:

PCA(): FactoMineR package
acp(): amap package
prcomp(): stats package
princomp(): stats package
dudi.pca(): ade4 package
pcaMethods: this package from Bioconductor has various convenient methods to compute PCA

Understanding the FactoMineR package

FactoMineR is an R package that provides multiple functions for multivariate data analysis and dimensionality reduction. The functions provided in the package deal not only with quantitative data but also with categorical data. Apart from PCA, correspondence and multiple correspondence analyses can also be performed using this package:

library(FactoMineR)
data <- replicate(10, rnorm(1000))
result.pca = PCA(data[,1:9], scale.unit=TRUE, graph=T)
print(result.pca)

Results for the principal component analysis (PCA). The analysis was performed on 1,000 individuals, described by nine variables. The results are available in the following objects:

Name              Description
$eig              Eigenvalues
$var              Results for the variables
$var$coord        coord. for the variables
$var$cor          Correlations variables - dimensions
$var$cos2         cos2 for the variables
$var$contrib      Contributions of the variables
$ind              Results for the individuals
$ind$coord        coord. for the individuals
$ind$cos2         cos2 for the individuals
$ind$contrib      Contributions of the individuals
$call             Summary statistics
$call$centre      Mean of the variables
$call$ecart.type  Standard error of the variables
$call$row.w       Weights for the individuals
$call$col.w       Weights for the variables

         eigenvalue  percentage of variance  cumulative percentage of variance
comp 1   1.1573559   12.859510                 12.85951
comp 2   1.0991481   12.212757                 25.07227
comp 3   1.0553160   11.725734                 36.79800
comp 4   1.0076069   11.195632                 47.99363
comp 5   0.9841510   10.935011                 58.92864
comp 6   0.9782554   10.869505                 69.79815
comp 7   0.9466867   10.518741                 80.31689
comp 8   0.9172075   10.191194                 90.50808
comp 9   0.8542724    9.491916                100.00000

Amap package

Amap is another package in the R environment that provides tools for clustering and PCA. It is an acronym for Another Multidimensional Analysis Package. One of the most widely used functions in this package is acp(), which does PCA on a data frame.
This function is akin to princomp() and prcomp(), except that it has a slightly different graphic representation. For more intricate details, refer to the CRAN-R resource page at https://cran.r-project.org/web/packages/amap/amap.pdf:

library(amap)
acp(data, center=TRUE, reduce=TRUE)

Additionally, weight vectors can also be provided as an argument. We can perform a robust PCA by using the acpgen function in the amap package:

acpgen(data, h1, h2, center=TRUE, reduce=TRUE, kernel="gaussien")
K(u, kernel="gaussien")
W(x, h, D=NULL, kernel="gaussien")
acprob(x, h, center=TRUE, reduce=TRUE, kernel="gaussien")

Proportion of variance

We look to construct components and to choose from them the minimum number of components that explains the variance of the data with high confidence. R has the prcomp() function in the base package to estimate principal components. Let's learn how to use this function to estimate the proportion of variance, eigen facts, and digits:

pca_base <- prcomp(data)
print(pca_base)

The pca_base object contains the standard deviation and rotations of the vectors. Rotations are also known as the principal components of the data. Let's find out the proportion of variance each component explains:

pr_variance <- (pca_base$sdev^2/sum(pca_base$sdev^2))*100
pr_variance
[1] 11.678126 11.301480 10.846161 10.482861 10.176036 9.605907 9.498072
[8] 9.218186 8.762572 8.430598

pr_variance signifies the proportion of variance explained by each component in descending order of magnitude. Let's calculate the cumulative proportion of variance for the components:

cumsum(pr_variance)
[1] 11.67813 22.97961 33.82577 44.30863 54.48467 64.09057 73.58864
[8] 82.80683 91.56940 100.00000

Components 1 to 8 explain about 82% of the variance in the data.

Singular value decomposition

Singular value decomposition (SVD) is a dimensionality reduction technique that gained a lot of popularity in recent times after the famous Netflix Movie Recommendation challenge. Since its inception, it has found usage in many applications in statistics, mathematics, and signal processing. It is primarily a technique to factorize any matrix, real or complex. A rectangular matrix can be factorized into two orthonormal matrices and a diagonal matrix of positive real values. An m*n matrix is considered as m points in n-dimensional space; SVD attempts to find the best k-dimensional subspace that fits the data. SVD in R is used to compute approximations of singular values and singular vectors of large-scale data matrices. These approximations are made using different types of memory-efficient algorithms, and IRLBA (the implicitly restarted Lanczos bi-diagonalization algorithm) is one of them. We shall be using the irlba package here in order to implement SVD.
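Before diving into that implementation, it can help to see the factorization itself on a small matrix using base R's svd(); this short sketch (ours, not from the book) verifies that the original matrix is recovered from its factors:

# base R SVD on a small matrix: X is recovered as U %*% D %*% t(V)
X <- matrix(rnorm(20), nrow = 5, ncol = 4)
s <- svd(X)
reconstructed <- s$u %*% diag(s$d) %*% t(s$v)
all.equal(X, reconstructed)   # TRUE, up to numerical precision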
Implementation of SVD using R

The following code shows an implementation of SVD using R (the garbled default for iter and the loop index in the original have been repaired; we assume iter=1 was intended):

# List of packages for the session
packages = c("foreach", "doParallel", "irlba")
# Install CRAN packages (if not already installed)
inst <- packages %in% installed.packages()
if(length(packages[!inst]) > 0) install.packages(packages[!inst])
# Load packages into session
lapply(packages, require, character.only=TRUE)
# register the parallel session
registerDoParallel(cores=detectCores(all.tests=TRUE))

std_svd <- function(x, k, p=25, iter=1) {
  m1 <- as.matrix(x)
  r <- nrow(m1)
  c <- ncol(m1)
  p <- min(min(r,c)-k, p)
  z <- k+p
  m2 <- matrix(rnorm(z*c), nrow=c, ncol=z)
  y <- m1 %*% m2
  q <- qr.Q(qr(y))
  b <- t(q) %*% m1
  # iterations
  b1 <- foreach(i=1:iter) %dopar% {
    y1 <- m1 %*% t(b)
    q1 <- qr.Q(qr(y1))
    b1 <- t(q1) %*% m1
  }
  b1 <- b1[[iter]]
  b2 <- b1 %*% t(b1)
  eigens <- eigen(b2, symmetric=T)
  result <- list()
  result$svalues <- sqrt(eigens$values)[1:k]
  u1 = eigens$vectors[1:k,1:k]
  result$u <- (q %*% eigens$vectors)[,1:k]
  result$v <- (t(b) %*% eigens$vectors %*% diag(1/eigens$values))[,1:k]
  return(result)
}

svd <- std_svd(x=data, k=5)
# singular values
svd$svalues
[1] 35.37645 33.76244 32.93265 32.72369 31.46702

We obtain the following values after running SVD using the IRLBA algorithm:

d: approximate singular values
u: nu approximate left singular vectors
v: nv approximate right singular vectors
iter: number of IRLBA algorithm iterations
mprod: number of matrix-vector products performed

These values can be used for obtaining the results of SVD and understanding the overall statistics of how the algorithm performed. Latent factors:

# svd$u, svd$v
dim(svd$u) # u value after running IRLBA
[1] 1000 5
dim(svd$v) # v value after running IRLBA
[1] 10 5

A modified version of the previous function can be achieved by altering the power iterations for a robust implementation:

foreach(i = 1:iter) %dopar% {
  y1 <- m1 %*% t(b)
  y2 <- t(y1) %*% y1
  r2 <- chol(y2, pivot = T)
  q1 <- y2 %*% solve(r2)
  b1 <- t(q1) %*% m1
}
b2 <- b1 %*% t(b1)

Some other functions available in R packages are as follows:

Function     Package
svd()        svd
irlba()      irlba
svdImpute    bcv

ISOMAP – moving towards non-linearity

ISOMAP is a nonlinear dimension reduction method and is representative of isometric mapping methods. ISOMAP is one of the approaches for manifold learning. ISOMAP finds the map that preserves the global, nonlinear geometry of the data by preserving the geodesic manifold inter-point distances. Like multi-dimensional scaling, ISOMAP creates a visual presentation of the distances among a number of objects. A geodesic is the shortest curve along the manifold connecting two points, induced by a neighborhood graph. Multi-dimensional scaling uses the Euclidian distance measure; since the data is in a nonlinear format, ISOMAP uses geodesic distance. ISOMAP can be viewed as an extension of metric multi-dimensional scaling.
At a very high level, ISOMAP can be described in four steps:

Determine the neighbors of each point
Construct a neighborhood graph
Compute the shortest path between all pairs
Construct k-dimensional coordinate vectors by applying MDS

Geodesic distance approximation is basically calculated in three ways:

Neighboring points: input-space distance
Faraway points: a sequence of short hops between neighboring points
Method: finding shortest paths in a graph with edges connecting neighboring data points

source("http://bioconductor.org/biocLite.R")
biocLite("RDRToolbox")
library('RDRToolbox')
swiss_Data = SwissRoll(N = 1000, Plot=TRUE)
x = SwissRoll()
open3d()
plot3d(x, col=rainbow(1050)[-c(1:50)], box=FALSE, type="s", size=1)
simData_Iso = Isomap(data=swiss_Data, dims=1:10, k=10, plotResiduals=TRUE)

library(vegan)
data(BCI)
distance <- vegdist(BCI)
tree <- spantree(distance)
pl1 <- ordiplot(cmdscale(distance), main="cmdscale")
lines(tree, pl1, col="red")
z <- isomap(distance, k=3)
rgl.isomap(z, size=4, color="red")
pl2 <- plot(isomap(distance, epsilon=0.5), main="isomap epsilon=0.5")
pl3 <- plot(isomap(distance, k=5), main="isomap k=5")
pl4 <- plot(z, main="isomap k=3")

Summary

The idea of this article was to get you familiar with some of the generic dimensionality reduction methods and their implementation using the R language. We discussed a few packages that provide functions to perform these tasks. We also covered a few custom functions that can be utilized to perform these tasks. Kudos, you have completed the basics of text mining with R. You must be feeling confident about various data mining methods, text mining algorithms (related to natural language processing of texts), and, after reading this article, dimensionality reduction. If you feel a little low on confidence, do not be upset. Turn a few pages back and try implementing those tiny code snippets on your own dataset, and figure out how they help you understand your data. Remember this: to mine something, you have to get into it by yourself. This holds true for text as well.

Resources for Article:

Further resources on this subject:

Data Science with R [Article]
Machine Learning with R [Article]
Data mining [Article]

Cooking cupcakes towers

Packt
03 Jan 2017
6 min read
In this article by Francesco Sapio, author of the book Getting Started with Unity 2D Game Development - Second Edition, we will see how to create our towers. This is not an easy task, but at the end we will have acquired a lot of scripting skills.

(For more resources related to this topic, see here.)

What a cupcake tower does

First of all, it's useful to write down what we want to achieve and define what exactly a cupcake tower is supposed to do. The best way is to write down a list, to have a clear idea of what we are trying to achieve:

A cupcake tower is able to detect pandas within a certain range.
A cupcake tower shoots a different kind of projectile, according to its typology, against the pandas within a certain range. Furthermore, among this range, it uses a policy to decide which panda to shoot.
There is a reload time before the cupcake tower is able to shoot again.
The cupcake tower can be upgraded (into a bigger cupcake!), increasing its stats and therefore changing its appearance.

Scripting the cupcake tower

There are a lot of things to implement. Let's start by creating a new script and naming it CupcakeTowerScript. As we already mentioned for the Projectile Script, in this article we implement the main logic, but of course there is always space to improve.

Shooting at pandas

Even if we don't have enemies yet, we can already start to program the behavior of the cupcake towers to shoot at the enemies. In this article we will learn a bit about using Physics to detect objects within a range. Let's start by defining four variables. The first three are public, so we can set them in the Inspector; the last one is private, since we only need it to check how much time has elapsed. In particular, the first three variables store the parameters of our tower: the projectile prefab, its range, and its reload time. We can write the following:

public float rangeRadius; //Maximum distance that the Cupcake Tower can shoot
public float reloadTime; //Time before the Cupcake Tower is able to shoot again
public GameObject projectilePrefab; //Projectile type that is fired from the Cupcake Tower
private float elapsedTime; //Time elapsed from the last time the Cupcake Tower has shot

Now, in the Update() function, we need to check if enough time has elapsed in order to shoot. This can be easily done by using an if statement. In any case, at the end, the elapsed time should be increased:

void Update () {
    if (elapsedTime >= reloadTime) {
        //Rest of the code
    }
    elapsedTime += Time.deltaTime;
}

Within the if statement, we need to reset the elapsed time, so as to be able to shoot next time. Then, we need to check whether there are any game objects within the tower's range:

if (elapsedTime >= reloadTime) {
    //Reset elapsed Time
    elapsedTime = 0;
    //Find all the gameObjects with a collider within the range of the Cupcake Tower
    Collider2D[] hitColliders = Physics2D.OverlapCircleAll(transform.position, rangeRadius);
    //Check if there is at least one gameObject found
    if (hitColliders.Length != 0) {
        //Rest of the code
    }
}

If there are enemies within range, we need to decide on a policy for which enemy the tower should target. There are different ways to do this and different strategies that the tower itself could choose. Here, we are going to implement one where the nearest enemy to the tower will be the one targeted. To implement this policy, we need to loop over all the game objects that we have found in range, check if they actually are enemies, and, using distances, pick the nearest one.
To achieve this, write the following code inside the previous if statement:

if (hitColliders.Length != 0) {
    //Loop over all the gameObjects to identify the closest to the Cupcake Tower
    float min = int.MaxValue;
    int index = -1;
    for (int i = 0; i < hitColliders.Length; i++) {
        if (hitColliders[i].tag == "Enemy") {
            float distance = Vector2.Distance(hitColliders[i].transform.position, transform.position);
            if (distance < min) {
                index = i;
                min = distance;
            }
        }
    }
    if (index == -1)
        return;
    //Rest of the code
}

Once we have the target, we need to get the direction that the tower will use to throw the projectile. So, let's write this:

//Get the direction of the target
Transform target = hitColliders[index].transform;
Vector2 direction = (target.position - transform.position).normalized;

Finally, we need to instantiate a new Projectile and assign to it the direction of the enemy, like the following:

//Create the Projectile
GameObject projectile = GameObject.Instantiate(projectilePrefab, transform.position, Quaternion.identity) as GameObject;
projectile.GetComponent<ProjectileScript>().direction = direction;

Instantiating game objects is usually slow, and it should be avoided. However, for learning purposes, we can live with that. And that is it for shooting at the enemies.

Upgrading the cupcake tower, making it even tastier

In order to create a function to upgrade the tower, we first need to define a variable to store the actual level of the tower:

public int upgradeLevel; //Level of the Cupcake Tower

Then, we need an array with all the Sprites for the different upgrades, like the following:

public Sprite[] upgradeSprites; //Different sprites for the different levels of the Cupcake Tower

Finally, we can create our Upgrade function. We need to upgrade the graphics and increase the stats. Feel free to tweak these values as you prefer. However, don't forget to increase the level of the tower as well as to assign the new sprite. At the end, you should have something like the following:

public void Upgrade() {
    rangeRadius += 1f;
    reloadTime -= 0.5f;
    upgradeLevel++;
    GetComponent<SpriteRenderer>().sprite = upgradeSprites[upgradeLevel];
}

Save the script; for now, we are done with it.

A pre-cooked cupcake tower through Prefabs

As we have done with the Sprinkle, we need to do something similar for the cupcake tower. In the Prefabs folder in the Project Panel, create a new Prefab by right-clicking and then navigating to Create | Prefab. Name it SprinklesCupcakeTower. Now, drag and drop the Sprinkles_Cupcake_Tower_0 from the Graphics/towers folder (within the cupcake_tower_sheet-01 file) into the Scene View. Attach the CupcakeTowerScript to the object by navigating to Add Component | Script | CupcakeTowerScript. The Inspector should look like the following:

We need to assign the Pink_Sprinkle_Projectile_Prefab to the Projectile Prefab variable. Then, we need to assign the different Sprites for the upgrades. In particular, we can use Sprinkles_Cupcake_Tower_* (replacing the * with the level of the cupcake tower) from the same sheet as before. Don't worry too much about the other parameters of the tower, like the range radius or the reload time, since we will see how to balance the game later on. At the end, this is what we should see:

The last step is to drag this game object onto the prefab. As a result, our cupcake tower is ready.

Summary

In this article, we covered the topic of creating a cupcake tower and scripting it.
Resources for Article: Further resources on this subject: Animating a Game Character [article] What's Your Input? [article] Components in Unity [article]


Getting Started with Spring Boot

Packt
03 Jan 2017
17 min read
In this article by Greg Turnquist, author of the book Learning Spring Boot – Second Edition, we will cover the following topics:

Introduction
Creating a bare project using http://start.spring.io
Seeing how to run our app straight inside our IDE with no standalone containers

(For more resources related to this topic, see here.)

Perhaps you've heard about Spring Boot? It's only cultivated the most popular explosion in software development in years. Clocking millions of downloads per month, the community has exploded since its debut in 2013. I hope you're ready for some fun, because we are going to take things to the next level as we use Spring Boot to build a social media platform. We'll explore its many valuable features, all the way from tools designed to speed up development efforts to production-ready support as well as cloud native features. Despite some rapid-fire demos you might have caught on YouTube, Spring Boot isn't just for quick demos. Built atop the de facto standard toolkit for Java, the Spring Framework, Spring Boot will help us build this social media platform with lightning speed AND stability.

In this article, we'll get a quick kick off with Spring Boot using the Java programming language. Maybe that makes you chuckle? People have been dinging Java for years as being slow, bulky, and not the means for agile shops. Together, we'll see how that is not the case. At any time, if you're interested in a more visual medium, feel free to check out my Learning Spring Boot [Video] at https://www.packtpub.com/application-development/learning-spring-boot-video.

What is step #1 when we get underway with a project? We visit Stack Overflow and look for an example project to build a project! Seriously, the amount of time spent adapting another project's build file, picking dependencies, and filling in other details about our project adds up. At the Spring Initializr (http://start.spring.io), we can enter minimal details about our app, pick our favorite build system, the version of Spring Boot we wish to use, and then choose our dependencies off a menu. Click the Download button, and we have a free-standing, ready-to-run application.

In this article, let's take a quick test drive and build a small web app. We can start by picking Gradle from the dropdown. Then, select 1.4.1.BUILD-SNAPSHOT as the version of Spring Boot we wish to use. Next, we need to pick our application's coordinates:

Group - com.greglturnquist.learningspringboot
Artifact - learning-spring-boot

Now comes the fun part. We get to pick the ingredients for our application like picking off a delicious menu. If we start typing, for example, "Web", into the Dependencies box, we'll see several options appear. To see all the available options, click on the Switch to the full version link toward the bottom. There are lots of overrides, such as switching from JAR to WAR, or using an older version of Java. You can also pick Kotlin or Groovy as the primary language for your application. For starters, in this day and age, there is no reason to use anything older than Java 8. And JAR files are the way to go; WAR files are only needed when applying Spring Boot to an old container. To build our social media platform, we need a few ingredients as shown:

Web (embedded Tomcat + Spring MVC)
WebSocket
JPA (Spring Data JPA)
H2 (embedded SQL data store)
Thymeleaf template engine
Lombok (to simplify writing POJOs)

The following diagram shows an overview of these ingredients. With these items selected, click on Generate Project.
There are LOTS of other tools that leverage this site. For example, IntelliJ IDEA lets you create a new project inside the IDE, giving you the same options shown here. It invokes the web site's REST API and imports your new project. You can also interact with the site via cURL or any other REST-based tool. Now let's unpack that ZIP file and see what we've got:

a build.gradle build file
a Gradle wrapper, so there's no need to install Gradle
a LearningSpringBootApplication.java application class
an application.properties file
a LearningSpringBootApplicationTests.java test class

We built an empty Spring Boot project. Now what? Before we sink our teeth into writing code, let's take a peek at the build file. It's quite terse, but carries some key bits. Starting from the top:

buildscript {
    ext {
        springBootVersion = '1.4.1.BUILD-SNAPSHOT'
    }
    repositories {
        mavenCentral()
        maven { url "https://repo.spring.io/snapshot" }
        maven { url "https://repo.spring.io/milestone" }
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
    }
}

This contains the basis for our project:

springBootVersion shows us we are using Spring Boot 1.4.1.BUILD-SNAPSHOT
The Maven repositories it will pull from are listed next
Finally, we see the spring-boot-gradle-plugin, a critical tool for any Spring Boot project

The first piece, the version of Spring Boot, is important. That's because Spring Boot comes with a curated list of 140 third-party library versions extending well beyond the Spring portfolio and into some of the most commonly used libraries in the Java ecosystem. By simply changing the version of Spring Boot, we can upgrade all these libraries to newer versions known to work together.

There is an extra project, the Spring IO Platform (https://spring.io/platform), which includes an additional 134 curated versions, bringing the total to 274.

The repositories aren't as critical, but it's important to add milestones and snapshots if fetching a library not released to Maven central or hosted on some vendor's local repository. Thankfully, the Spring Initializr does this for us based on the version of Spring Boot selected on the site. Finally, we have the spring-boot-gradle-plugin (and there is a corresponding spring-boot-maven-plugin for Maven users). This plugin is responsible for linking Spring Boot's curated list of versions with the libraries we select in the build file. That way, we don't have to specify the version numbers. Additionally, this plugin hooks into the build phase and bundles our application into a runnable über JAR, also known as a shaded or fat JAR.

Java doesn't provide a standardized way to load nested JAR files into the classpath. Spring Boot provides the means to bundle up third-party JARs inside an enclosing JAR file and properly load them at runtime. Read more at http://docs.spring.io/spring-boot/docs/1.4.1.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar.

With an über JAR in hand, we only need to put it on a thumb drive and carry it to another machine, to a hundred virtual machines in the cloud or your data center, or anywhere else, and it simply runs wherever we can find a JVM.
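As a quick aside, here is what building and running that über JAR from the command line might look like, given the coordinates we picked earlier; the exact artifact name is an assumption, since it depends on the version declared in your build file:

./gradlew build    # compiles, tests, and bundles the runnable über JAR
java -jar build/libs/learning-spring-boot-0.0.1-SNAPSHOT.jar    # runs it anywhere a JVM exists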
Peeking a little further down in build.gradle, we can see the plugins that are enabled by default:

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'spring-boot'

The java plugin indicates the various tasks expected for a Java project
The eclipse plugin helps generate project metadata for Eclipse users
The spring-boot plugin is where the actual spring-boot-gradle-plugin is activated

An up-to-date copy of IntelliJ IDEA can read a plain old Gradle build file just fine without extra plugins. Which brings us to the final ingredient used to build our application: dependencies.

Spring Boot starters

No application is complete without specifying dependencies. A valuable facet of Spring Boot is its virtual packages. These are published packages that don't contain any code, but instead simply list other dependencies. The following list shows all the dependencies we selected on the Spring Initializr site:

dependencies {
    compile('org.springframework.boot:spring-boot-starter-data-jpa')
    compile('org.springframework.boot:spring-boot-starter-thymeleaf')
    compile('org.springframework.boot:spring-boot-starter-web')
    compile('org.springframework.boot:spring-boot-starter-websocket')
    compile('org.projectlombok:lombok')
    runtime('com.h2database:h2')
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

If you'll notice, most of these packages are Spring Boot starters:

spring-boot-starter-data-jpa pulls in Spring Data JPA, Spring JDBC, Spring ORM, and Hibernate
spring-boot-starter-thymeleaf pulls in the Thymeleaf template engine along with Spring Web and the Spring dialect of Thymeleaf
spring-boot-starter-web pulls in Spring MVC, Jackson JSON support, embedded Tomcat, and Hibernate's JSR-303 validators
spring-boot-starter-websocket pulls in Spring WebSocket and Spring Messaging

These starter packages allow us to quickly grab the bits we need to get up and running. Spring Boot starters have gotten so popular that lots of other third-party library developers are crafting their own. In addition to starters, we have three extra libraries:

Project Lombok makes it dead simple to define POJOs without getting bogged down in getters, setters, and other details.
H2 is an embedded database allowing us to write tests, noodle out solutions, and get things moving before getting involved with an external database.
spring-boot-starter-test pulls in Spring Boot Test, JSON Path, JUnit, AssertJ, Mockito, Hamcrest, JSON Assert, and Spring Test, all within test scope.

The value of this last starter, spring-boot-starter-test, cannot be overstated. With a single line, the most powerful test utilities are at our fingertips, allowing us to write unit tests, slice tests, and full-blown our-app-inside-embedded-Tomcat tests. It's why this starter is included in all projects without checking a box on the Spring Initializr site. Now to get things off the ground, we need to shift focus to the tiny bit of code written for us by the Spring Initializr.

Running a Spring Boot application

The fabulous http://start.spring.io website created a tiny class, LearningSpringBootApplication, as shown in the following code:

package com.greglturnquist.learningspringboot;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LearningSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(
            LearningSpringBootApplication.class, args);
    }
}

This tiny class is actually a fully operational web application!
The @SpringBootApplication annotation tells Spring Boot, when launched, to scan recursively for Spring components inside this package and register them. It also tells Spring Boot to enable autoconfiguration, a process where beans are automatically created based on classpath settings, property settings, and other factors. Finally, it indicates that this class itself can be a source for Spring bean definitions. It holds a public static void main(), a simple method to run the application. There is no need to drop this code into an application server or servlet container. We can just run it straight up, inside our IDE. The amount of time saved by this feature, over the long haul, adds up fast. SpringApplication.run() points Spring Boot at the leap-off point. In this case, this very class. But it's possible to run other classes. This little class is runnable. Right now! In fact, let's give it a shot.

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::  (v1.4.1.BUILD-SNAPSHOT)

2016-09-18 19:52:44.214: Starting LearningSpringBootApplication on ret...
2016-09-18 19:52:44.217: No active profile set, falling back to defaul...
2016-09-18 19:52:44.513: Refreshing org.springframework.boot.context.e...
2016-09-18 19:52:45.785: Bean 'org.springframework.transaction.annotat...
2016-09-18 19:52:46.188: Tomcat initialized with port(s): 8080 (http)
2016-09-18 19:52:46.201: Starting service Tomcat
2016-09-18 19:52:46.202: Starting Servlet Engine: Apache Tomcat/8.5.5
2016-09-18 19:52:46.323: Initializing Spring embedded WebApplicationCo...
2016-09-18 19:52:46.324: Root WebApplicationContext: initialization co...
2016-09-18 19:52:46.466: Mapping servlet: 'dispatcherServlet' to [/]
2016-09-18 19:52:46.469: Mapping filter: 'characterEncodingFilter' to:...
2016-09-18 19:52:46.470: Mapping filter: 'hiddenHttpMethodFilter' to: ...
2016-09-18 19:52:46.470: Mapping filter: 'httpPutFormContentFilter' to...
2016-09-18 19:52:46.470: Mapping filter: 'requestContextFilter' to: [/*]
2016-09-18 19:52:46.794: Building JPA container EntityManagerFactory f...
2016-09-18 19:52:46.809: HHH000204: Processing PersistenceUnitInfo [ name: default ...]
2016-09-18 19:52:46.882: HHH000412: Hibernate Core {5.0.9.Final}
2016-09-18 19:52:46.883: HHH000206: hibernate.properties not found
2016-09-18 19:52:46.884: javassist
2016-09-18 19:52:46.976: HCANN000001: Hibernate Commons Annotations {5...
2016-09-18 19:52:47.169: Using dialect: org.hibernate.dialect.H2Dialect
2016-09-18 19:52:47.358: HHH000227: Running hbm2ddl schema export
2016-09-18 19:52:47.359: HHH000230: Schema export complete
2016-09-18 19:52:47.390: Initialized JPA EntityManagerFactory for pers...
2016-09-18 19:52:47.628: Looking for @ControllerAdvice: org.springfram...
2016-09-18 19:52:47.702: Mapped "{[/error]}" onto public org.springfra...
2016-09-18 19:52:47.703: Mapped "{[/error],produces=[text/html]}" onto...
2016-09-18 19:52:47.724: Mapped URL path [/webjars/**] onto handler of...
2016-09-18 19:52:47.724: Mapped URL path [/**] onto handler of type [c...
2016-09-18 19:52:47.752: Mapped URL path [/**/favicon.ico] onto handle...
2016-09-18 19:52:47.778: Cannot find template location: classpath:/tem...
2016-09-18 19:52:48.229: Registering beans for JMX exposure on startup
2016-09-18 19:52:48.278: Tomcat started on port(s): 8080 (http)
2016-09-18 19:52:48.282: Started LearningSpringBootApplication in 4.57...
Scrolling through the output, we can see several things:

The banner at the top gives us a readout of the version of Spring Boot. (BTW, you can create your own ASCII art banner by putting either banner.txt or banner.png into src/main/resources/)
Embedded Tomcat is initialized on port 8080, indicating it's ready for web requests
Hibernate is online with the H2 dialect enabled
A few Spring MVC routes are registered, such as /error, /webjars, a favicon.ico, and a static resource handler
And the wonderful Started LearningSpringBootApplication in 4.571 seconds message

Spring Boot uses embedded Tomcat, so there's no need to install a container on our target machine. Non-web apps don't even require Apache Tomcat. The JAR itself is the new container that allows us to stop thinking in terms of old-fashioned servlet containers. Instead, we can think in terms of apps. All these factors add up to maximum flexibility in application deployment.

How does Spring Boot use embedded Tomcat, among other things? As mentioned earlier, it has autoconfiguration, meaning it has Spring beans that are created based on different conditions. When Spring Boot sees Apache Tomcat on the classpath, it creates an embedded Tomcat instance along with several beans to support that. When it spots Spring MVC on the classpath, it creates view resolution engines, handler mappers, and a whole host of other beans needed to support that, letting us focus on coding custom routes. With H2 on the classpath, it spins up an in-memory, embedded SQL data store. Spring Data JPA will cause Spring Boot to craft an EntityManager along with everything else needed to start speaking JPA, letting us focus on defining repositories.

At this stage, we have a running web application, albeit an empty one. There are no custom routes and no means to handle data. But we can add some real fast. Let's draft a simple REST controller:

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HomeController {

    @GetMapping
    public String greeting(@RequestParam(required = false, defaultValue = "") String name) {
        return name.equals("") ? "Hey!" : "Hey, " + name + "!";
    }
}

Let's examine this tiny REST controller in detail:

The @RestController annotation indicates that we don't want to render views, but instead write the results straight into the response body.
@GetMapping is Spring's shorthand annotation for @RequestMapping(method = RequestMethod.GET, ...). In this case, it defaults the route to "/".
Our greeting() method has one argument: @RequestParam(required = false, defaultValue = "") String name. It indicates that this value can be requested via an HTTP query (?name=Greg), the query isn't required, and in case it's missing, supply an empty string.
Finally, we are returning one of two messages depending on whether or not name is empty, using Java's classic ternary operator.

If we re-launch the LearningSpringBootApplication in our IDE, we'll see a new entry in the console:

2016-09-18 20:13:08.149: Mapped "{[],methods=[GET]}" onto public java....

We can then ping our new route in the browser at http://localhost:8080 and http://localhost:8080?name=Greg. Try it out! That's nice, but since we picked Spring Data JPA, how hard would it be to load some sample data and retrieve it from another route? (Spoiler alert: not hard at all.)
We can start out by defining a simple Chapter entity to capture book details, as shown in the following code:

package com.greglturnquist.learningspringboot;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import lombok.Data;

@Data
@Entity
public class Chapter {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    private Chapter() {
        // No one but JPA uses this.
    }

    public Chapter(String name) {
        this.name = name;
    }
}

This little POJO lets us capture the details about the chapter of a book as follows:

The @Data annotation from Lombok will generate getters, setters, a toString() method, a constructor for all required fields (those marked final), an equals() method, and a hashCode() method.
The @Entity annotation flags this class as suitable for storing in a JPA data store.
The id field is marked with JPA's @Id and @GeneratedValue annotations, indicating this is the primary key, and that writing new rows into the corresponding table will create a PK automatically.
Spring Data JPA will by default create a table named CHAPTER with two columns, ID and NAME.
The key field is name, which is populated by the publicly visible constructor.
JPA requires a no-arg constructor, so we have included one, but marked it private so no one but JPA may access it.

To interact with this entity and its corresponding table in H2, we could dig in and start using the autoconfigured EntityManager supplied by Spring Boot. But why do that, when we can declare a repository-based solution? To do so, we'll create an interface defining the operations we need. Check out this simple interface:

package com.greglturnquist.learningspringboot;

import org.springframework.data.repository.CrudRepository;

public interface ChapterRepository extends CrudRepository<Chapter, Long> {
}

This declarative interface creates a Spring Data repository as follows:

CrudRepository extends Repository, a Spring Data Commons marker interface that signals Spring Data to create a concrete implementation while also capturing domain information.
CrudRepository, also from Spring Data Commons, has some pre-defined CRUD operations (save, delete, deleteAll, findOne, findAll).
It specifies the entity type (Chapter) and the type of the primary key (Long).
Spring Data JPA will automatically wire up a concrete implementation of our interface.

Spring Data doesn't engage in code generation. Code generation has a sordid history of being out of date at some of the worst times. Instead, Spring Data uses proxies and other mechanisms to support all these operations. Never forget: the code you don't write has no bugs.

With Chapter and ChapterRepository defined, we can now pre-load the database, as shown in the following code:

package com.greglturnquist.learningspringboot;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LoadDatabase {

    @Bean
    CommandLineRunner init(ChapterRepository repository) {
        return args -> {
            repository.save(
                new Chapter("Quick start with Java"));
            repository.save(
                new Chapter("Reactive Web with Spring Boot"));
            repository.save(
                new Chapter("...and more!"));
        };
    }
}

This class will be automatically scanned in by Spring Boot and run in the following way:

@Configuration marks this class as a source of beans.
@Bean indicates that the return value of init() is a Spring Bean. In this case, a CommandLineRunner.
Spring Boot runs all CommandLineRunner beans after the entire application is up and running.
This bean definition is requesting a copy of the ChapterRepository.
Using Java 8's ability to coerce the args -> {} lambda function into a CommandLineRunner, we are able to write several save() operations, pre-loading our data.

With this in place, all that's left is to write a REST controller to serve up the data!

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChapterController {

    private final ChapterRepository repository;

    public ChapterController(ChapterRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/chapters")
    public Iterable<Chapter> listing() {
        return repository.findAll();
    }
}

This controller is able to serve up our data as follows:

@RestController indicates this is another REST controller.
Constructor injection is used to automatically load it with a copy of the ChapterRepository. With Spring, if there is only one constructor, there is no need to include an @Autowired annotation.
@GetMapping tells Spring that this is the place to route /chapters calls. In this case, it returns the results of the findAll() call found in CrudRepository.

If we re-launch our application and visit http://localhost:8080/chapters, we can see our pre-loaded data served up as a nicely formatted JSON document. It's not very elaborate, but this small collection of classes has helped us quickly define a slice of functionality. And if you'll notice, we spent zero effort configuring JSON converters, route handlers, embedded settings, or any other infrastructure. Spring Boot is designed to let us focus on functional needs, not low-level plumbing.

Summary

So, in this article, we introduced the Spring Boot concept in brief and rapidly crafted a Spring MVC application using the Spring stack on top of Apache Tomcat, with little configuration from our end.

Resources for Article:

Further resources on this subject:

Writing Custom Spring Boot Starters [article]
Modernizing our Spring Boot app [article]
Introduction to Spring Framework [article]
Getting Started with Aurelia

Packt
03 Jan 2017
28 min read
In this article by Manuel Guilbault, the author of the book Learning Aurelia, we will see what makes Aurelia such a modern framework. The brainchild of Rob Eisenberg, father of Durandal, it is based on cutting-edge web standards and is built on modern software architecture concepts and ideas, to offer a powerful toolset and an awesome developer experience.

(For more resources related to this topic, see here.)

This article will teach you how Aurelia works, and how you can use it to build real-world applications from A to Z. In fact, while reading the article and following the examples, that's exactly what you will do. You will start by setting up your development environment and creating the project, then I will walk you through concepts such as routing, templating, data-binding, automated testing, internationalization, and bundling. We will discuss application design, communication between components, and integration of third parties. We will cover every topic most modern, real-world single-page applications require.

In this first article, we will start by defining some terms that will be used throughout the article. We will quickly cover some core Aurelia concepts. Then we will take a look at the core Aurelia libraries and see how they interact with each other to form a complete, full-featured framework. We will also see what tools are needed to develop an Aurelia application and how to install them. Finally, we will start creating our application and explore its global structure.

Terminology

As this article is about a JavaScript framework, JavaScript plays a central role in it. If you are not completely up to date with the terminology, which has changed a lot in the last few years, let me clear things up.

JavaScript (or JS) is a dialect, or implementation, of the ECMAScript (ES) standard. It is not the only implementation, but it definitely is the most popular. In this article, I will use the JS acronym to talk about actual JavaScript code or code files and the ES acronym when talking about an actual version of the ECMAScript standard.

Like everything in computer programming, the ECMAScript standard evolves over time. At the moment of writing, the latest version is ES2016 and was published in June 2016. It was originally called ES7, but TC39, the committee drafting the specification, decided to change their approval and naming model, hence the new name.

The previous version, named ES2015 (ES6) before the naming model changed, was published in June 2015 and was a big step forward as compared to the version before it. This older version, named ES5, was published in 2009 and was the most recent version for 6 years, so it is now widely supported by all modern browsers. If you have been writing JavaScript in the last five years, you should be familiar with ES5.

When they decided to change the ES naming model, the TC39 committee also chose to change the specification's approval model. This decision was made in an effort to publish new versions of the language at a quicker pace. As such, new features are being drafted and discussed by the community, and must pass through an approval process. Each year, a new version of the specification will be released, comprising the features that were approved during the year.

Those upcoming features are often referred to as ESNext. This term encompasses language features that are approved or at least pretty close to approval but not yet published. It can be reasonable to expect that most or at least some of those features will be published in the next language version.

As ES2015 and ES2016 are still recent things, they are not fully supported by most browsers. Moreover, ESNext features typically have no browser support at all.

Those multiple names can be pretty confusing. To make things simpler, I will stick with the official names: ES5 for the previous version, ES2016 for the current version, and ESNext for the next version.

Before going any further, you should make yourself familiar with the features introduced by ES2016 and with ESNext decorators, if you are not already. We will use these features throughout the article. If you don't know where to start with ES2015 and ES2016, you can find a great overview of the new features on Babel's website:

https://babeljs.io/docs/learn-es2015/

As for ESNext decorators, Addy Osmani, a Google engineer, explained them pretty well:

https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841
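If you just want a quick taste before diving into those resources, here is a minimal sketch of my own (not taken from the book), showing what a method decorator looks like under the Babel-era proposal, where a decorator is simply a function that receives and may rewrite a property descriptor:

// Hypothetical example: a decorator that logs every call to the method it wraps.
function log(target, name, descriptor) {
  const original = descriptor.value;
  descriptor.value = function(...args) {
    console.log(`Calling ${name} with`, args);
    return original.apply(this, args);
  };
  return descriptor;
}

class Greeter {
  @log
  greet(name) {
    return `Hello, ${name}!`;
  }
}

new Greeter().greet('Aurelia'); // logs: Calling greet with ['Aurelia']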
For further reading, you can take a look at the feature proposals (decorators, class property declarations, async functions, and so on) for future ES versions:

https://github.com/tc39/proposals

Core concepts

Before we start getting our hands dirty, there are a couple of core concepts that need to be explained.

Conventions

First, Aurelia relies a lot on conventions. Most of those conventions are configurable and can be changed if they don't suit your needs. Each time we encounter a convention throughout the article, we will see how to change it whenever possible.

Components

Components are a first-class citizen of Aurelia. What is an Aurelia component? It is a pair made of an HTML template, called the view, and a JavaScript class, called the view-model. The view is responsible for displaying the component, while the view-model controls its data and behavior. Typically, the view sits in an .html file and the view-model in a .js file. By convention, those two files are bound through a naming rule: they must be in the same directory and have the same name (except for their extension, of course).

Here's an example of an empty component with no data, no behavior, and a static template:

component.js
export class MyComponent {}

component.html
<template>
  <p>My component</p>
</template>

A component must comply with two constraints: a view's root HTML element must be the template element, and the view-model class must be exported from the .js file.

As a rule of thumb, the view-model class should be the only thing exported by a component's JS file. If multiple classes or functions are exported, Aurelia will iterate over the file's exported functions and classes and will use the first it finds as the view-model. However, since the enumeration order of an object's keys is not deterministic as per the ES specification, nothing guarantees that the exports will be iterated in the same order they were declared, so Aurelia may pick the wrong class as the component's view-model.

The only exception to that rule is some view resources. In addition to its view-model class, a component's JS file can export things like value converters, binding behaviors, and custom attributes: basically, any view resource that can't have a view, which excludes custom elements.

Components are the main building blocks of an Aurelia application. Components can use other components; they can be composed to form bigger or more complex components. Thanks to the slot mechanism, you can design a component's template so parts of it can be replaced or customized.
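Before moving on, here is a slightly richer component of my own (a hedged sketch, not an official sample) with a data property and a method, using Aurelia's value.bind and click.delegate binding syntax to wire the view to the view-model:

greeter.js
export class Greeter {
  constructor() {
    // Data the view can bind to.
    this.name = 'World';
  }

  // Behavior the view can trigger.
  greet() {
    alert(`Hello, ${this.name}!`);
  }
}

greeter.html
<template>
  <input type="text" value.bind="name">
  <button click.delegate="greet()">Greet</button>
</template>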
Architecture

Aurelia is not your average monolithic framework. It is a set of loosely coupled libraries with well-defined abstractions. Each of its core libraries solves a specific and well-defined problem common to single-page applications. Aurelia leverages dependency injection and a plugin architecture, so you can discard parts of the framework and replace them with third-party or even your own implementations. Or you can just throw away features you don't need so your application is lighter and faster to load. The core Aurelia libraries can be divided into multiple categories. Let's have a quick glance.

Core features

The following libraries are mostly independent and can be used by themselves if needed. They each provide a focused set of features and are at the core of Aurelia:

aurelia-dependency-injection: A lightweight yet powerful dependency injection container. It supports multiple lifetime management strategies and child containers.
aurelia-logging: A simple logger, supporting log levels and pluggable consumers.
aurelia-event-aggregator: A lightweight message bus, used for decoupled communication.
aurelia-router: A client-side router, supporting static, parameterized or wildcard routes, and child routers.
aurelia-binding: An adaptive and pluggable data-binding library.
aurelia-templating: An extensible HTML templating engine.

Abstraction layers

The following libraries mostly define interfaces and abstractions in order to decouple concerns and enable extensibility and pluggable behaviors. This does not mean that some of the libraries in the previous section do not expose their own abstractions besides their features. Some of them do. But the libraries described in the current section have almost no other purpose than defining abstractions:

aurelia-loader: An abstraction defining an interface for loading JS modules, views, and other resources.
aurelia-history: An abstraction defining an interface for history management used by routing.
aurelia-pal: An abstraction for platform-specific capabilities. It is used to abstract away the platform on which the code is running, such as a browser or Node.js. Indeed, this means that some Aurelia libraries can be used on the server side.

Default implementations

The following libraries are the default implementations of abstractions exposed by libraries from the two previous sections:

aurelia-loader-default: An implementation of the aurelia-loader abstraction for SystemJS and require-based loaders.
aurelia-history-browser: An implementation of the aurelia-history abstraction based on standard browser hash change and push state mechanisms.
aurelia-pal-browser: An implementation of the aurelia-pal abstraction for the browser.
aurelia-logging-console: An implementation of the aurelia-logging abstraction for the browser console.

Integration layers

The following libraries' purpose is to integrate some of the core libraries together. They provide interface implementations and adapters, along with default configuration or behaviors:

aurelia-templating-router: An integration layer between the aurelia-router and the aurelia-templating libraries.
aurelia-templating-binding: An integration layer between the aurelia-templating and the aurelia-binding libraries.
aurelia-framework: An integration layer that brings together all of the core Aurelia libraries into a full-featured framework.
aurelia-bootstrapper: An integration layer that brings default configuration for aurelia-framework and handles application starting.
Additional tools and plugins

If you take a look at Aurelia's organization page on GitHub at https://github.com/aurelia, you will see many more repositories. The libraries listed in the previous sections are just the core of Aurelia, the tip of the iceberg, if I may. Many other libraries exposing additional features or integrating third-party libraries are available on GitHub, some of them developed and maintained by the Aurelia team, many others by the community. I strongly suggest that you explore the Aurelia ecosystem by yourself after reading this article, as it is rapidly growing, and the Aurelia community is doing some very exciting things.

Tooling

In the following section, we will go over the tools needed to develop our Aurelia application.

Node.js and NPM

Aurelia being a JavaScript framework, it just makes sense that its development tools are also in JavaScript. This means that the first thing you need to do when getting started with Aurelia is to install Node.js and NPM on your development environment.

Node.js is a server-side runtime environment based on Google's V8 JavaScript engine. It can be used to build complete websites or web APIs, but it is also used by a lot of front-end projects to perform development and build tasks, such as transpiling, linting, and minimizing.

NPM is the de facto package manager for Node.js. It uses http://www.npmjs.com as its main repository, where all available packages are stored. It is bundled with Node.js, so if you install Node.js on your computer, NPM will also be installed.

To install Node.js and NPM on your development environment, you simply need to go to https://nodejs.org/ and download the proper installer for your environment.

If Node.js and NPM are already installed, I strongly recommend that you make sure you are using at least version 3 of NPM, as older versions may have issues collaborating with some of the other tools we'll use. If you are not sure which version you have, you can check it by running the following command in a console:

> npm -v

If Node.js and NPM are already installed but you need to upgrade NPM, you can do so by running the following command:

> npm install npm -g

The Aurelia CLI

Even though an Aurelia application can be built using any package manager, build system, or bundler you want, the preferred tool to manage an Aurelia project is the command line interface, a.k.a. the CLI.

At the moment of writing, the CLI only supports NPM as its package manager and requirejs as its module loader and bundler, probably because they both are the most mature and stable. It also uses Gulp 4 behind the scenes as its build system.

CLI-based applications are always bundled when running, even in development environments. This means that the performance of an application during development will be very close to what it should be like in production. This also means that bundling is a recurring concern, as new external libraries must be added to some bundle in order to be available at runtime.

In this article, we'll stick with the preferred solution and use the CLI. There are, however, two appendices at the end of the article covering alternatives: the first for Webpack, and the second for SystemJS with JSPM.

Installing the CLI

The CLI being a command line tool, it should be installed globally by opening a console and executing the following command:

> npm install -g aurelia-cli

You may have to run this command with administrator privileges, depending on your environment.
If you already have it installed, make sure you have the latest version by running the following command:

> au -v

You can then compare the version this command outputs with the latest version number tagged on GitHub, at https://github.com/aurelia/cli/releases/latest. If you don't have the latest version, you can simply update it by running the following command:

> npm install -g aurelia-cli

If for some reason the command to update the CLI fails, simply uninstall then reinstall it:

> npm uninstall aurelia-cli -g
> npm install aurelia-cli -g

This should reinstall the latest version.

The project skeletons

As an alternative to the CLI, project skeletons are available at https://github.com/aurelia/skeleton-navigation. This repository contains multiple sample projects, sitting on different technologies such as SystemJS with JSPM, Webpack, ASP.NET Core, or TypeScript.

Prepping up a skeleton is easy. You simply need to download and unzip the archive from GitHub or clone the repository locally. Each directory contains a distinct skeleton. Depending on which one you chose, you'll need to install different tools and run setup commands. Generally, the instructions in the skeleton's README.md file are pretty clear.

Our application

Creating an Aurelia application using the CLI is extremely simple. You just need to open a console in the directory where you want to create your project and run the following command:

> au new

The CLI's project creation process will start, and you should see something like this:

The first thing the CLI will ask for is the name you want to give to your project. This name will be used both to create the directory in which the project will live and to set some values, such as the name property in the package.json file it will create. Let's name our application learning-aurelia:

Next, the CLI asks what technologies we want to use to develop our application. Here, you can select a custom transpiler such as TypeScript and a CSS preprocessor such as LESS or SASS.

Transpiler: The little cousin of the compiler, it translates one programming language into another. In our case, it will be used to transform ESNext code, which may not be supported by all browsers, into ES5, which is understood by all modern browsers.

The default choice is to use ESNext and plain CSS, and this is what we will choose:

The following steps simply recap the choices we made and ask for confirmation to create the project, then ask if we want to install our project's dependencies, which it does by default.

At this point, the CLI will create the project and run an npm install behind the scenes. Once it completes, our application is ready to roll:

At this point, the directory you ran au new in will contain a new directory named learning-aurelia. This sub-directory will contain the Aurelia project. We'll explore it a bit in the following section.

The CLI is likely to change and offer more options in the future, as there are plans to support additional tools and technologies. Don't be surprised if you see different or new options when you run it.

The path we followed to create our project uses Visual Studio Code as the default code editor. If you want to use another editor such as Atom, Sublime, or WebStorm, which are the other supported options at the moment of writing, you simply need to select option #3, custom transpilers, CSS pre-processors and more, at the beginning of the creation process, then select the default answer for each question until asked to select your default code editor.
The rest of the creation process should stay pretty much the same. Note that if you select a different code editor, your own experience may differ from the examples and screenshots you'll find in this article, as Visual Studio Code is the editor that was used during writing.

If you are a TypeScript developer, you may want to create a TypeScript project. I, however, recommend that you stick with plain ESNext, as every example and code sample in this article has been written in JS. Trying to follow along with TypeScript may prove cumbersome, although you can try if you like the challenge.

The Structure of a CLI-Based Project

If you open the newly created project in a code editor, you should see the following file structure:

node_modules: The standard NPM directory containing the project's dependencies;
src: The directory containing the application's source code;
test: The directory containing the application's automated test suites;
.babelrc: The configuration file for Babel, which is used by the CLI to transpile our application's ESNext code into ES5 so most browsers can run it;
index.html: The HTML page that loads and launches the application;
karma.conf.js: The configuration file for Karma, which is used by the CLI to run unit tests;
package.json: The standard Node.js project file.

The directory contains other files, such as .editorconfig, .eslintrc.json, and .gitignore, that are of little interest to learning Aurelia, so we won't cover them.

In addition to all of this, you should see a directory named aurelia_project. This directory contains things related to the building and bundling of the application using the CLI. Let's see what it's made of.

The aurelia.json file

The first thing of importance in this directory is a file named aurelia.json. This file contains the configuration used by the CLI to test, build, and bundle the application. This file can change drastically depending on the choices you make during the project creation process.

There are very few scenarios where this file needs to be modified by hand. Adding an external library to the application is such a scenario. Apart from this, this file should mostly never be updated manually.

The first interesting section in this file is the platform:

"platform": {
  "id": "web",
  "displayName": "Web",
  "output": "scripts",
  "index": "index.html"
},

This section tells the CLI that the output directory where the bundles are written is named scripts. It also tells that the HTML index page, which will load and launch the application, is the index.html file.

The next interesting part is the transpiler section:

"transpiler": {
  "id": "babel",
  "displayName": "Babel",
  "fileExtension": ".js",
  "options": {
    "plugins": [
      "transform-es2015-modules-amd"
    ]
  },
  "source": "src/**/*.js"
},

This section tells the CLI to transpile the application's source code using Babel. It also defines additional plugins, beyond those already configured in .babelrc, to be used when transpiling the source code. In this case, it adds a plugin that will output transpiled files as AMD-compliant modules, for requirejs compatibility.

Tasks

The aurelia_project directory contains a subdirectory named tasks. This subdirectory contains various Gulp tasks to build, run, and test the application. These tasks can be executed using the CLI.

The first thing you can try is to run au without any argument:

> au

This will list all available commands, along with their available arguments.
This list includes built-in commands such as new, which we used already, or generate, which we'll see in the next section, along with the Gulp tasks declared in the tasks directory. To run one of those tasks, simply execute au with the name of the task as its first argument:

> au build

This command will run the build task, which is defined in aurelia_project/tasks/build.js. This task transpiles the application code using Babel, executes the CSS and markup preprocessors if any, and bundles the code in the scripts directory.

After running it, you should see two new files in scripts: app-bundle.js and vendor-bundle.js. Those are the actual files that will be loaded by index.html when the application is launched. The former contains all application code, both JS files and templates, while the latter contains all external libraries used by the application, including Aurelia libraries.

You may have noticed a command named run in the list of available commands. This task is defined in aurelia_project/tasks/run.js, and executes the build task internally before spawning a local HTTP server to serve the application:

> au run

By default, the HTTP server will listen for requests on port 9000, so you can open your favorite browser and go to http://localhost:9000/ to see the default, demo application in action.

If you ever need to change the port number on which the development HTTP server runs, you just need to open aurelia_project/tasks/run.js and locate the call to the browserSync function. The object passed to this function contains a property named port. You can change its value accordingly.

The run task can accept a --watch switch:

> au run --watch

If this switch is present, the task will keep monitoring the source code and, when any code file changes, will rebuild the application and automatically refresh the browser. This can be pretty useful during development.

Generators

The CLI also offers a way to generate code, using classes defined in the aurelia_project/generators directory. At the moment of writing, there are generators to create custom attributes, custom elements, binding behaviors, value converters, and even tasks and generators (yes, there is a generator to generate generators).

If you are not familiar with Aurelia at all, most of those concepts (value converters, binding behaviors, and custom attributes and elements) probably mean nothing to you. Don't worry.

A generator can be executed using the built-in generate command:

> au generate attribute

This command will run the custom attribute generator. It will ask for the name of the attribute to generate, then create it in the src/resources/attributes directory.

If you take a look at this generator, which is found in aurelia_project/generators/attribute.js, you'll see that the file exports a single class named AttributeGenerator. This class uses the @inject decorator to declare various classes from the aurelia-cli library as dependencies and have instances of them injected in its constructor. It also defines an execute method, which is called by the CLI when running the generator. This method leverages the services provided by aurelia-cli to interact with the user and generate code files.

The exact generator names available by default are attribute, element, binding-behavior, value-converter, task, and generator.

Environments

CLI-based applications support environment-specific configuration values. By default, the CLI supports three environments: development, staging, and production. The configuration object for each of these environments can be found in the corresponding files dev.js, stage.js, and prod.js located in the aurelia_project/environments directory.

A typical environment file looks like this:

aurelia_project/environments/dev.js
export default {
  debug: true,
  testing: true
};

By default, the environment files are used to enable debug logging and test-only templating features in the Aurelia framework, depending on the environment; we'll see this in a later section. The environment objects can, however, be enhanced with whatever properties you may need. Typically, they could be used to configure different URLs for a backend, depending on the environment.
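For instance (a hypothetical addition of mine, not part of the default project), the development environment could carry a backend URL like this:

aurelia_project/environments/dev.js
export default {
  debug: true,
  testing: true,
  // Hypothetical property: the base URL of a backend API for this environment.
  apiUrl: 'http://localhost:8000/api/'
};

A prod.js file would then define the same apiUrl property pointing at the production backend, and any code importing src/environment.js would transparently get the right value for the current build.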
Adding a new environment is simply a matter of adding a file for it in the aurelia_project/environments directory. For example, you can add a local environment by creating a local.js file in the directory.

Many tasks (basically build and all other tasks using it, such as run and test) expect an environment to be specified using the env argument:

> au build --env prod

Here, the application will be built using the prod.js environment file. If no env argument is provided, dev will be used by default.

When executed, the build task just copies the proper environment file to src/environment.js before running the transpiler and bundling the output. This means that src/environment.js should never be modified by hand, as it will be automatically overwritten by the build task.

The Structure of an Aurelia application

The previous section described the files and folders that are specific to a CLI-based project. However, some parts of the project are pretty much the same whatever the build system and package manager are. These are the more global topics we will see in this section.

The hosting page

The first entry point of an Aurelia application is the HTML page loading and hosting it. By default, this page is named index.html and is located at the root of the project.

The default hosting page looks like this:

index.html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Aurelia</title>
  </head>
  <body aurelia-app="main">
    <script src="scripts/vendor-bundle.js"
            data-main="aurelia-bootstrapper"></script>
  </body>
</html>

When this page loads, the script element inside the body element loads the scripts/vendor-bundle.js file, which contains requirejs itself along with definitions for all external libraries and references to app-bundle.js. When loading, requirejs checks the data-main attribute and uses its value as the entry point module. Here, aurelia-bootstrapper kicks in.

The bootstrapper first looks in the DOM for elements with the aurelia-app attribute; we can find such an attribute on the body element in the default index.html file. This attribute identifies elements acting as application viewports. The bootstrapper uses the attribute's value as the name of the application's main module, then locates the module, loads it, and renders the resulting DOM inside the element, overwriting any previous content. The application is now running.

Even though the default application doesn't illustrate this scenario, it is possible for an HTML file to host multiple Aurelia applications. It just needs to contain multiple elements with an aurelia-app attribute, each element referring to its own main module.

The main module

By convention, the main module referred to by the aurelia-app attribute is named main, and as such is located under src/main.js.
This file is expected to export a configure function, which will be called by the Aurelia bootstrapping process and will be passed a configuration object used to configure and boot the framework.

By default, the main configure function looks like this:

src/main.js
import environment from './environment';

export function configure(aurelia) {
  aurelia.use
    .standardConfiguration()
    .feature('resources');

  if (environment.debug) {
    aurelia.use.developmentLogging();
  }

  if (environment.testing) {
    aurelia.use.plugin('aurelia-testing');
  }

  aurelia.start().then(() => aurelia.setRoot());
}

The configure function starts by telling Aurelia to use its default configuration and to load the resources feature. It also conditionally loads the development logging plugin based on the environment's debug property, and the testing plugin based on the environment's testing property. This means that, by default, both plugins will be loaded in development, while neither will be loaded in production.

Lastly, the function starts the framework, then attaches the root component to the DOM. The start method returns a Promise, whose resolution triggers the call to setRoot. If you are not familiar with Promises in JavaScript, I strongly suggest that you look them up before going any further, as they are a core concept in Aurelia.

The root component

At the root of any Aurelia application is a single component, which contains everything within the application. By convention, this root component is named app. It is composed of two files: app.html, which contains the template to render the component, and app.js, which contains its view-model class.

In the default application, the template is extremely simple:

src/app.html
<template>
  <h1>${message}</h1>
</template>

This template is made of a single h1 element, which will contain the value of the view-model's message property as text, thanks to string interpolation.

The app view-model looks like this:

src/app.js
export class App {
  constructor() {
    this.message = 'Hello World!';
  }
}

This file simply exports a class having a message property containing the string "Hello World!".

This component will be rendered when the application starts. If you run the application and navigate to it in your favorite browser, you'll see an h1 element containing "Hello World!".

You may notice that there is no reference to Aurelia in this component's code. In fact, the view-model is just plain ESNext, and it can be used by Aurelia as is. Of course, we're going to leverage many Aurelia features in many of our view-models later on, so most of our view-models will in fact have dependencies on Aurelia libraries. But the key point here is that you don't have to use any Aurelia library in your view-models if you don't want to, because Aurelia is designed to be as unintrusive as possible.

Conventional bootstrapping

It is possible to leave the aurelia-app attribute empty in the hosting page:

<body aurelia-app>

In such a case, the bootstrapping process is much simpler. Instead of loading a main module containing a configure function, the bootstrapper will simply use the framework's default configuration and load the app component as the application root.

This can be a simpler way to get started for a very simple application, as it negates the need for the src/main.js file (you can simply delete it). However, it means that you are stuck with the default framework configuration. You cannot load features or plugins.
For most real-life applications, you'll need to keep the main module, which means specifying it as the aurelia-app attribute's value.

Customizing Aurelia configuration

The configure function of the main module receives a configuration object, which is used to configure the framework:

src/main.js
//Omitted snippet…
aurelia.use
  .standardConfiguration()
  .feature('resources');

if (environment.debug) {
  aurelia.use.developmentLogging();
}

if (environment.testing) {
  aurelia.use.plugin('aurelia-testing');
}
//Omitted snippet…

Here, the standardConfiguration() method is a simple helper that encapsulates the following:

aurelia.use
  .defaultBindingLanguage()
  .defaultResources()
  .history()
  .router()
  .eventAggregator();

This is the default Aurelia configuration. It loads the default binding language, the default templating resources, the browser history plugin, the router plugin, and the event aggregator. This is the default set of features that a typical Aurelia application uses. All those plugins will be covered at one point or another throughout this article. All those plugins are optional except the binding language, which is needed by the templating engine. If you don't need one, just don't load it.

In addition to the standard configuration, some plugins are loaded depending on the environment's settings. When the environment's debug property is true, Aurelia's console logger is loaded using the developmentLogging() method, so traces and errors can be seen in the browser console. When the environment's testing property is true, the aurelia-testing plugin is loaded using the plugin method. This plugin registers some resources that are useful when debugging components.

The last line in the configure function starts the application and displays its root component, which is named app by convention. You may, however, bypass the convention and pass the name of your root component as the first argument to setRoot, if you named it otherwise:

aurelia.start().then(() => aurelia.setRoot('root'));

Here, the root component is expected to sit in the src/root.html and src/root.js files.

Summary

Getting started with Aurelia is very easy, thanks to the CLI. Installing the tooling and creating an empty project is simply a matter of running a couple of commands, and it typically takes more time waiting for the initial npm install to complete than doing the actual setup. We'll go over dependency injection and logging, and we'll start building our application by adding components and configuring routes to navigate between them.

Resources for Article:

Further resources on this subject:

Introduction to JavaScript [article]
Breaking into Microservices Architecture [article]
Create Your First React Element [article]
React Native Tools and Resources

Packt
03 Jan 2017
14 min read
In this article written by Eric Masiello and Jacob Friedmann, authors of the book Mastering React Native, we will cover:

Tools that improve upon the React Native development experience
Ways to build React Native apps for platforms other than iOS and Android
Great online resources for React Native development

(For more resources related to this topic, see here.)

Evaluating React Native Editors, Plugins, and IDEs

I'm hard pressed to think of another topic that developers are more passionate about than their preferred code editor. Of the many options, two popular editors today are GitHub's Atom and Microsoft's Visual Studio Code (not to be confused with Visual Studio 2015). Both are cross-platform editors for Windows, macOS, and Linux that are easily extended with additional features. In this section, I'll detail my personal experience with these tools and where I have found they complement the React Native development experience.

Atom and Nuclide

Facebook has created a package for Atom known as Nuclide that provides a first-class development environment for React Native. It features a built-in debugger similar to Chrome's DevTools, a React Native Inspector (think the Elements tab in Chrome DevTools), and support for the static type checker Flow.

Download Atom from https://atom.io/ and Nuclide from https://nuclide.io/.

To install the Nuclide package, click on the Atom menu and then on Preferences..., and then select Packages. Search for Nuclide and click on Install. Once installed, you can actually start and stop the React Native Packager directly from Atom (though you need to launch the simulator/editor separately) and set breakpoints in Atom itself rather than using Chrome's DevTools. Take a look at the following screenshot:

If you plan to use Flow, Nuclide will identify errors and display them inline. Take the following example, where I've annotated the function timesTen such that it expects a number as a parameter and should return a number. However, you can see that there are some errors in the usage. Refer to the following code snippet:

/* @flow */
function timesTen(x: number): number {
  var result = x * 10;
  return 'I am not a number';
}

timesTen("Hello, world!");

Thankfully, the Flow integration will call out these errors in Atom for you. Refer to the following screenshot:

The Flow integration in Nuclide exposes two other useful features. You'll see annotated auto-completion as you type. And, if you hold the Command key and click on a variable or function name, Nuclide will jump straight to the source definition, even if it's defined in a separate file. Refer to the following screenshot:

Visual Studio Code

Visual Studio Code is a first-class editor for JavaScript authors. Out of the box, it's packaged with a built-in debugger that can be used to debug Node applications. Additionally, VS Code comes with an integrated terminal and a git tool that nicely shows visual diffs.

Download Visual Studio Code from https://code.visualstudio.com/.

The React Native Tools extension for VS Code adds some useful capabilities to the editor. For starters, you'll be able to execute the React Native: Run-iOS and React Native: Run Android commands directly from VS Code without needing to reach for the terminal, as shown in the following screenshot:

And, while a bit more involved than Atom to configure, you can use VS Code as a React Native debugger.
Take a look at the following screenshot:

The React Native Tools extension also provides IntelliSense for much of the React Native API, as shown in the following screenshot:

When reading through the VS Code documentation, I found it (unsurprisingly) more catered toward Windows users. So, if Windows is your thing, you may feel more at home with VS Code. As a macOS user, I slightly prefer Atom/Nuclide over VS Code. VS Code comes with more useful features out of the box, but that can easily be addressed by installing a few Atom packages. Plus, I found the Flow support with Nuclide really useful. But don't let me dissuade you from VS Code. Both are solid editors with great React Native support. And they're both free, so there's no harm in trying both.

Before totally switching gears, there is one more editor worth mentioning. Deco is an Integrated Development Environment (IDE) built specifically for React Native development. Standing up a new React Native project is super quick, since Deco keeps a local copy of everything you'd get when running react-native init. Deco also makes creating new stateful and stateless components super easy.

Download Deco from https://www.decosoftware.com/.

Once you create a new component using Deco, it gives you a nicely prefilled template, including a place to add propTypes and defaultProps (something I often forget to do). Refer to the following screenshot:

From there, you can drag and drop components from the sidebar directly into your code. Deco will auto-populate many of the props for you as well as add the necessary import statements. Take a look at the following code snippet:

<Image
  style={{
    width: 300,
    height: 200,
  }}
  resizeMode={"contain"}
  source={{uri:'https://unsplash.it/600/400/?random'}}/>

The other nice feature Deco adds is the ability to easily launch your app from the toolbar in any installed iOS simulator or Android AVD. You don't even need to first manually open the AVD; Deco will do it all for you. Refer to the following screenshot:

Currently, creating a new project with Deco starts you off with an outdated version of React Native (version 0.27.2 as of this writing). If you're not concerned with using the latest version, Deco is a great way to get a React Native app up quickly. However, if you require more advanced tooling, I suggest you look at Atom with Nuclide or Visual Studio Code with the React Native Tools extension.

Taking React Native beyond iOS and Android

The development experience is one of the most highly touted features by React Native proponents. But as we well know by now, React Native is more than just a great development experience. It's also about building cross-platform applications with a common language and, often times, reusable code and components. Out of the box, the Facebook team has provided tremendous support for iOS and Android. And thanks to the community, React Native has expanded to other promising platforms. In this section, I'll take you through a few of these React Native projects. I won't go into great technical depth, but I'll provide a high-level overview and explain how to get each running.

Introducing React Native Web

React Native Web is an interesting one. It treats many of the React Native components you've learned about, such as View, Text, and TextInput, as higher-level abstractions that map to HTML elements, such as div, span, and input, thus allowing you to build a web app that runs in a browser from your React Native code. Now if you're like me, your initial reaction might be: But why? We already have React for the web.
It's called... React! However, where React Native Web shines over React is in its ability to share components between your mobile app and the web, because you're still working with the same basic React Native APIs.

Learn more about React Native Web at https://github.com/necolas/react-native-web.

Configuring React Native Web

React Native Web can be installed into your existing React Native project like any other npm dependency:

npm install --save react react-native-web

Depending on the versions of React Native and React Native Web you've installed, you may encounter conflicting peer dependencies of React. This may require manually adjusting which version of React Native or React Native Web is installed. Sometimes, just deleting the node_modules folder and rerunning npm install does the trick.

From there, you'll need some additional tools to build the web bundle. In this example, we'll use webpack and some related tooling:

npm install webpack babel-loader babel-preset-react babel-preset-es2015 babel-preset-stage-1 webpack-validator webpack-merge --save
npm install webpack-dev-server --save-dev

Next, create a webpack.config.js in the root of the project:

const webpack = require('webpack');
const validator = require('webpack-validator');
const merge = require('webpack-merge');

const target = process.env.npm_lifecycle_event;
let config = {};

const commonConfig = {
  entry: {
    main: './index.web.js'
  },
  output: {
    filename: 'app.js'
  },
  resolve: {
    alias: {
      'react-native': 'react-native-web'
    }
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel',
        query: {
          presets: ['react', 'es2015', 'stage-1']
        }
      }
    ]
  }
};

switch(target) {
  case 'web:prod':
    config = merge(commonConfig, {
      devtool: 'source-map',
      plugins: [
        new webpack.DefinePlugin({
          'process.env.NODE_ENV': JSON.stringify('production')
        })
      ]
    });
    break;
  default:
    config = merge(commonConfig, {
      devtool: 'eval-source-map'
    });
    break;
}

module.exports = validator(config);

Add the following two entries to the scripts section of package.json:

"web:dev": "webpack-dev-server --inline --hot",
"web:prod": "webpack -p"

Next, create an index.html file in the root of the project:

<!DOCTYPE html>
<html>
  <head>
    <title>RNNYT</title>
    <meta charset="utf-8" />
    <meta content="initial-scale=1,width=device-width" name="viewport" />
  </head>
  <body>
    <div id="app"></div>
    <script type="text/javascript" src="/app.js"></script>
  </body>
</html>

And, finally, add an index.web.js file to the root of the project:

import React, { Component } from 'react';
import {
  View,
  Text,
  StyleSheet,
  AppRegistry
} from 'react-native';

class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.text}>Hello World!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#efefef',
    alignItems: 'center',
    justifyContent: 'center'
  },
  text: {
    fontSize: 18
  }
});

AppRegistry.registerComponent('RNNYT', () => App);
AppRegistry.runApplication('RNNYT', {
  rootTag: document.getElementById('app')
});

To run the development build, we'll run the webpack dev server by executing the following command:

npm run web:dev

web:prod can be substituted to create a production-ready build.

While developing, you can add React Native Web-specific code, much as you can with iOS and Android, by using Platform.OS === 'web' or by creating custom *.web.js components.
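As a quick illustration (a hedged sketch of mine, not from the book), a component can branch on Platform.OS to render web-only tweaks:

import React from 'react';
import { Platform, Text } from 'react-native';

// Platform.OS reports 'web' when the app runs under React Native Web.
const Greeting = () => (
  <Text>
    {Platform.OS === 'web' ? 'Hello from the browser!' : 'Hello from the app!'}
  </Text>
);

export default Greeting;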
React Native Web still feels pretty early days. Not every component and API is supported, and the HTML that's generated looks a bit rough for my tastes. While developing with React Native Web, I think it helps to keep the right mindset. That is, think of this as "I'm building a React Native mobile app", not a website. Otherwise, you may find yourself reaching for web-specific solutions that aren't appropriate for the technology.

React Native plugin for Universal Windows Platform

Announced at the Facebook F8 conference in April 2016, the React Native plugin for Universal Windows Platform (UWP) lets you author React Native apps for Windows 10 desktop, Windows 10 Mobile, and Xbox One.

Learn more about the React Native plugin for UWP at https://github.com/ReactWindows/react-native-windows.

You'll need to be running Windows 10 in order to build UWP apps. You'll also need to follow the React Native documentation for configuring your Windows environment for building React Native apps. If you're not concerned with building Android on Windows, you can skip installing Android Studio.

The plugin itself also has a few additional requirements. You'll need to be running at least npm 3.x and to install Visual Studio 2015 Community (not to be confused with Visual Studio Code). Thankfully, the Community version is free to use. The UWP plugin docs also tell you to install the Windows 10 SDK Build 10586. However, I found it's easier to do that from within Visual Studio once we've created the app, so we can save that part for later.

Configuring the React Native plugin for UWP

I won't walk you through every step of the installation. The UWP plugin docs detail the process well enough. Once you've satisfied the requirements, start by creating a new React Native project as normal:

react-native init RNWindows
cd RNWindows

Next, install and initialize the UWP plugin:

npm install --save-dev rnpm-plugin-windows
react-native windows

Running react-native windows will actually create a windows directory inside your project containing a Visual Studio solution file. If this is your first time installing the plugin, I recommend opening the solution (.sln) file with Visual Studio 2015. Visual Studio will then ask you to download several dependencies, including the latest Windows 10 SDK. Once Visual Studio has installed all the dependencies, you can run the app either from within Visual Studio or by running the following command:

react-native run-windows

Take a look at the following screenshot:

React Native macOS

Much as the name implies, React Native macOS allows you to create macOS desktop applications using React Native. This project works a little differently than React Native Web and the React Native plugin for UWP. As best I can tell, since React Native macOS requires its own custom CLI for creating and packaging applications, you are not able to build a macOS and mobile app from the same project.

Learn more about React Native macOS at https://github.com/ptmt/react-native-macos.

Configuring React Native macOS

Much like you did with the React Native CLI, begin by installing the custom CLI globally using the following command:

npm install react-native-macos-cli -g

Then, use it to create a new React Native macOS app by running the following command:

react-native-macos init RNDesktopApp
cd RNDesktopApp

This will set you up with all required dependencies along with an entry point file, index.macos.js. There is no CLI command to spin up the app, so you'll need to open the Xcode project and manually run it.
Run the following command:

open macos/RNDesktopApp.xcodeproj

The documentation is pretty limited, but there is a nice UIExplorer app that can be downloaded and run to give you a good feel for what's available. While on some level it's unfortunate that your macOS app cannot live alongside your iOS and Android code, I cannot think of a use case that would call for such a thing. That said, I was delighted with how easy it was to get this project up and running.

Summary

I think it's fair to say that React Native is moving quickly. With a new version released roughly every two weeks, I've lost count of how many versions have passed by in the course of writing this book. I'm willing to bet React Native has probably bumped a version or two from the time you started reading this book until now. So, as much as I'd love to wrap up by saying you now know everything possible about React Native, sadly that isn't the case.

References

Let me leave you with a few valuable resources to continue your journey of learning and building apps with React Native:

React Native Apple TV is a fork of React Native for building apps for Apple's tvOS. For more information, refer to https://github.com/douglowder/react-native-appletv. (Note that preliminary tvOS support has appeared in early versions of React Native 0.36.)
React Native Ubuntu is another fork of React Native for developing React Native apps on Ubuntu for Desktop Ubuntu and Ubuntu Touch. For more information, refer to https://github.com/CanonicalLtd/react-native
JS.Coach is a collection of community favorite components and plugins for all things React, React Native, Webpack, and related tools. For more information, refer to https://js.coach/react-native
Exponent is described as Rails for React Native. It supports additional system functionality and UI components beyond what's provided by React Native. It will also let you build your apps without needing to touch Xcode or Android Studio. For more information, refer to https://getexponent.com/
React Native Elements is a cross-platform UI toolkit for React Native. You can think of it as Bootstrap for React Native. For more information, refer to https://github.com/react-native-community/react-native-elements
The Use React Native site is how I keep up with React Native releases and news in the React Native space. For more information, refer to http://www.reactnative.com/
React Native Radio is a fantastic podcast hosted by Nader Dabit and a panel of hosts who interview other developers contributing to the React Native community. For more information, refer to https://devchat.tv/react-native-radio
React Native Newsletter is an occasional newsletter curated by a team of React Native enthusiasts. For more information, refer to http://reactnative.cc/
And, finally, Dotan J. Nahum maintains an amazing resource titled Awesome React Native that includes articles, tutorials, videos, and well-tested components you can use in your next project. For more information, refer to https://github.com/jondot/awesome-react-native

Resources for Article:

Further resources on this subject:

Getting Started [article]
Getting Started with React [article]
Understanding React Native Fundamentals [article]
Enterprise Architecture Concepts

Packt
02 Jan 2017
8 min read
In this article by Habib Ahmed Qureshi, Ganesan Senthilvel, and Ovais Mehboob Ahmed Khan, authors of the book Enterprise Application Architecture with .NET Core, you will learn how to architect and design highly scalable, robust, clean, and highly performant applications in .NET Core 1.0.

(For more resources related to this topic, see here.)

In this article, we will cover the following topics:

Why we need Enterprise Architecture
Knowing the role of an architect

Why we need Enterprise Architecture?

We will need to define, or at least provide, some basic fixed points to identify enterprise architecture specifically.

Sketch

Before playing an enterprise architect role, I used to get confused by the many architectural roles and terms, such as architect, solution architect, enterprise architect, data architect, blueprint, system diagram, and so on.

In general, the industry perception is that the IT architect's role is to draw a few boxes and make a few suggestions; the rest is left to the development community. Some feel that the architect's role is quite easy: just draw the diagram and do nothing else. Like I said, this is entirely the perception of a few associates in the industry, and I used to fall into this category earlier:

However, my enterprise architect job cleared up this misperception and taught me the true value of an enterprise architect.

Definition of Enterprise Architecture

In simple terms, an enterprise is nothing but a human endeavor: people collaborating for a particular purpose, supported by a platform. Let me explain with the example of an online e-commerce company. The employees of that company are the people who work together to produce the firm's profit using various platforms, such as infrastructure, software, equipment, buildings, and so on. The enterprise is the structure and arrangement of all these pieces/components that build the complete organization. This is exactly where enterprise architecture plays its key role.

Every enterprise has an enterprise architecture. EA is a process of architecting that applies discipline to produce the prescribed output components. This process needs experience, skill, discipline, and descriptions. Consider the following image, where EA anticipates the system in two key states:

Every enterprise needs an enterprise architect; it is not optional. Let me give a simple example. When you need a car for business activities, you have two choices: either drive yourself or hire a driver. Either way, the capability to operate the car is still required. EA is pretty similar.

As depicted in the preceding diagram, EA anticipates the system in two key states, which are as follows:

How it currently is
How it will be in the future

Basically, enterprise architects work on options/alternatives to move an enterprise system from the current state to the future state. In this process, Enterprise Architecture does the following:

Creates the frameworks to manage the architecture
Details the descriptions of the architecture
Lays out roadmaps for the best way to change/improve the architecture
Defines constraints and opportunities
Anticipates the costs and benefits
Evaluates the risks and values

In this process of architecting, discipline is applied to produce the prescribed output components.

Stakeholders of Enterprise Architecture

Enterprise Architecture is special because of its holistic view of the management and evolution of an enterprise.
It has a unique combination of specialist technology, such as architecture frameworks, and design pattern practices. Such a special EA has the following key stakeholders/users in its ecosystem:

S.No.  Stakeholders                    Organizational actions
1      Strategic planner               Capability planning; Set strategic direction; Impact analysis
2      Decision makers                 Investment; Divestment; Approvals for the project; Alignment with strategic direction
3      Analyst                         Quality assurance; Compliance; Alignment with business goals
4      Architects, project managers    Solution development; Investigate the opportunities; Analysis of the existing options

Business benefits

Though many organizations have survived without EAs, every firm strongly believes that it is better to architect before creating any system: a system designed proactively and integrated in a coherent fashion, instead of one assembled in a random, ad hoc, and inconsistent mode. In terms of business benefits, cost is the key factor, in the sense of Return on Investment (RoI). That is how the industry's business is driven in this highly competitive IT world. EA has the opportunity to prove its value to its own stakeholders with three major benefits, ranging from tactical to strategic positions. They are as follows:

Cost reduction by technology standardization
Business Process Improvement (BPI)
Strategic differentiation

Gartner's research paper on TCO: The First Justification for Enterprise IT Architecture by Colleen Young is one of the good references to justify the business benefits of an Enterprise Architecture. Check out https://www.gartner.com/doc/388268/enterprise-architecture-benefits-justification for more information.

In the grand scheme of a cost-saving strategy, technology standardization adds a lot of efficiency and indirect benefits. Let me share my experience in this space. In one of my earlier organizations, it was noticed that a wide variety of technologies and products had been built to serve the business, owing to historical acquisitions and mergers. The best solution was platform standardization.

All businesses have processes; a few real-life examples are credit card processing, employee onboarding, student enrollment, and so on. In these methodologies, people are involved, with a series of steps for the particular system to get things done. As the business grows, the processes become chaotic, which leads to duplicated effort across departments. Here, cross-learning from mistakes and corrections is lost. BPI is an industry approach designed to support the enterprise in realigning its existing operational business processes into significantly improved processes. It helps the enterprise identify and adapt in a better way using industry tools and techniques. BPI was originally designed to induce a drastic, game-changing effect on enterprise performance, rather than bringing about change in incremental steps.

In the current highly competitive market, strategic differentiation efforts help the firm create the perception in customers' minds of receiving something of greater value than what the competition offers. An effective differentiation strategy is the best tool to highlight a business's unique features and make it stand out from the crowd. As the outcome of strategic differentiation, the business should realize the benefits of its Enterprise Architecture investment. It also pushes the business to institute new ways of thinking to add new customer segments along with major new competitive strategies.
Knowing the role of an architect

When I planned to switch my career to the architecture track, I had too many questions in mind. People were referring to so many titles in the industry, such as architect, solution architect, enterprise architect, data architect, infra architect, and so on, that I didn't know exactly where I needed to start and end. The industry offered too many confusing options. To understand it better, let me give my own work experiences as use cases.

In the IT industry, the two higher-level architect roles are named as follows:

Solution architect (SA)
Enterprise architect (EA)

In my view, Enterprise Architecture is a much broader discipline than Solution Architecture, being the sum of Business Architecture, Application Architecture, Data Architecture, and Technology Architecture. It will be covered in detail in the subsequent section:

An SA is focused on a specific solution and addresses the technological details that comply with the standards, roadmaps, and strategic objectives of the business. Compared with the SA, the EA operates at a more senior level. In general, the EA takes a strategic, inclusive, and long-term view of the goals, opportunities, and challenges facing the company, whereas the SA is assigned to a particular project/program in an enterprise to ensure the technical integrity and consistency of the solution at every stage of its life cycle.

Role comparison between EA and SA

Let me explain from my working experience of the two different roles, EA and SA.

When I played the SA role for an Internet-based telephony system, my job was to build tools, such as code generation and automation, around the existing telephony system. It required skills in Microsoft platform technology and the telephony domain to understand the existing system better and then provide better solutions to improve the productivity and performance of the existing ecosystem. I was not really involved in the enterprise-level decision-making process. Basically, I worked much like an individual contributor, building effective and efficient solutions to improve the current system.

As a second example, let me share my experience in the EA role for a leading financial company. The job was to build an enterprise data hub using emerging big data technology.

Degrees of comparison

If we plot EA versus SA graphically, EA requires a higher degree of strategic focus and technology breadth, as depicted in the following image:

In terms of roles and responsibilities, EA and SA differ in their scope. Basically, the SA's scope is limited to a project team, and the expected deliverable is a quality solution for the business. At the same time, the EA's scope goes beyond the SA's, identifying or envisioning the future state of the organization.

Summary

In this article, you understood the fundamental concepts of enterprise architecture, and its related business needs and benefits.

Resources for Article:

Further resources on this subject:

Getting Started with ASP.NET Core and Bootstrap 4 [Article]
Setting Up the Environment for ASP.NET MVC 6 [Article]
How to Set Up CoreOS Environment [Article]