
How-To Tutorials - Application Development

357 Articles

Entity Framework DB First – Inheritance Relationships between Entities

Packt
02 Mar 2015
19 min read
This article is written by Rahul Rajat Singh, the author of Mastering Entity Framework. So far, we have seen how we can use various approaches of Entity Framework, how we can manage database table relationships, and how to perform model validations using Entity Framework. In this article, we will see how we can implement inheritance relationships between entities. We will see how we can change the generated conceptual model to implement an inheritance relationship, and how this lets us use the entities in an object-oriented manner while the data is persisted in the database tables in a relational manner. (For more resources related to this topic, see here.)

Domain modeling using inheritance in Entity Framework

One of the major challenges in using a relational database is managing the domain logic in an object-oriented manner when the database itself is implemented in a relational manner. ORMs like Entity Framework provide strongly typed objects, that is, entities, for the relational tables. However, the entities generated for the database tables may be logically related to each other, and they can be better modeled using inheritance relationships rather than as independent entities. Entity Framework lets us create inheritance relationships between the entities, so that we can work with the entities in an object-oriented manner while, internally, the data gets persisted in the respective tables.

Entity Framework provides us with three ways of object-relational domain modeling using inheritance relationships:

The Table per Type (TPT) inheritance
The Table per Class Hierarchy (TPH) inheritance
The Table per Concrete Class (TPC) inheritance

Let's now take a look at scenarios where the generated entities are not logically related, and see how we can use these inheritance relationships to create a better domain model by implementing inheritance between entities using the Entity Framework Database First approach.

The Table per Type inheritance

The Table per Type (TPT) inheritance is useful when our database has tables that are related to each other by a one-to-one relationship. This relationship is maintained in the database by a shared primary key. To illustrate this, let's take a look at an example scenario.

Let's assume an organization maintains a database of all the people who work in a department. Some of them are employees getting a fixed salary, and some of them are vendors who are hired at an hourly rate. This is modeled in the database by keeping all the common data in a table called People, with separate tables for the data that is specific to employees and vendors. Let's visualize this scenario by looking at the database schema:

Figure: The database schema showing the TPT inheritance

The ID column for the People table can be an auto-increment identity column, but it should not be an auto-increment identity column for the Employee and Vendors tables.

In the preceding figure, the People table contains all the data common to both types of workers, the Employee table contains the data specific to employees, and the Vendors table contains the data specific to vendors. These tables have a shared primary key and thus, there is a one-to-one relationship between them.

To implement the TPT inheritance, we need to perform the following steps in our application:

1. Generate the default Entity Data Model.
2. Delete the default relationships.
3. Add the inheritance relationship between the entities.
4. Use the entities via the DbContext object.

Generating the default Entity Data Model

Let's add a new ADO.NET Entity Data Model to our application, and generate the conceptual Entity Model for these tables. The default generated Entity Model will look like this:

Figure: The generated Entity Data Model where the TPT inheritance could be used

Looking at the preceding conceptual model, we can see that Entity Framework is able to figure out the one-to-one relationship between the tables and creates the entities with the same relationship. However, if we look at the generated entities from our application domain perspective, it is fairly evident that these entities can be better managed if they have an inheritance relationship between them. So, let's see how we can modify the generated conceptual model to implement the inheritance relationship; Entity Framework will take care of updating the data in the respective tables.

Deleting default relationships

The first thing we need to do to create the inheritance relationship is to delete the existing relationship from the Entity Model. This can be done by right-clicking on the relationship and selecting Delete from Model:

Figure: Deleting an existing relationship from the Entity Model

Adding inheritance relationships between entities

Once the relationships are deleted, we can add the new inheritance relationships in our Entity Model:

Figure: Adding inheritance relationships in the Entity Model

When we add an inheritance relationship, the Visual Entity Designer will ask for the base class and the derived class:

Figure: Selecting the base class and derived class participating in the inheritance relationship

Once the inheritance relationship is created, the Entity Model will look like this:

Figure: The inheritance relationship in the Entity Model

After creating the inheritance relationship, we will get a compile error telling us that the ID property is defined in all the entities. To resolve this problem, we need to delete the ID property from the derived classes; the column mapping of the derived classes will remain intact. So, from the application perspective, the ID property is defined only in the base class, but from the mapping perspective, it is mapped in both the base class and the derived classes, so that the data gets inserted into the tables mapped to both the base and the derived entities. With this inheritance relationship in place, the entities can be used in an object-oriented manner, and Entity Framework will take care of updating the respective tables for each entity.

Using the entities via the DbContext object

As we know, DbContext is the primary class that should be used to perform various operations on entities. Let's use our SampleDbEntities context to create an Employee and a Vendor using this Entity Model, and see how the data gets updated in the database:

using (SampleDbEntities db = new SampleDbEntities())
{
    Employee employee = new Employee();
    employee.FirstName = "Employee 1";
    employee.LastName = "Employee 1";
    employee.PhoneNumber = "1234567";
    employee.Salary = 50000;
    employee.EmailID = "employee1@test.com";

    Vendor vendor = new Vendor();
    vendor.FirstName = "vendor 1";
    vendor.LastName = "vendor 1";
    vendor.PhoneNumber = "1234567";
    vendor.HourlyRate = 100;
    vendor.EmailID = "vendor1@test.com";

    db.Workers.Add(employee);
    db.Workers.Add(vendor);
    db.SaveChanges();
}

In the preceding code, we created one object each of the Employee and Vendor types, and then added them to the Workers entity set using the DbContext object.
Internally, Entity Framework will look at the mappings of the base entity and the derived entities, and then push the respective data into the respective tables. So, if we take a look at the data inserted in the database, it will look like the following:

Figure: A database snapshot of the inserted data

It is clearly visible from the preceding database snapshot that Entity Framework looks at our inheritance relationship and pushes the data into the People, Employee, and Vendors tables.

The Table per Class Hierarchy inheritance

The Table per Class Hierarchy (TPH) inheritance is modeled by having a single database table for all the entity classes in the inheritance hierarchy. It is useful in cases where all the information about the related entities is stored in a single table. For example, using the earlier scenario, let's model the database in such a way that it contains only a single table, called Workers, to store the Employee and Vendor details. Let's visualize this table:

Figure: The database schema showing the TPH inheritance

What happens in this case is that the common fields are populated whenever we create any type of worker. Salary will only contain a value if the worker is of type Employee; the HourlyRate field will be null in that case. If the worker is of type Vendor, then the HourlyRate field will have a value, and Salary will be null. This pattern is not very elegant from a database perspective: since we are keeping unrelated data in a single table, our table is not normalized, and there will always be some redundant columns containing null values. We should avoid this pattern unless it is absolutely needed.

To implement the TPH inheritance relationship using the preceding table structure, we need to perform the following activities:

1. Generate the default Entity Data Model.
2. Add concrete classes to the Entity Data Model.
3. Map the concrete class properties to the respective tables and columns.
4. Make the base class entity abstract.
5. Use the entities via the DbContext object.

Let's discuss these in detail.

Generating the default Entity Data Model

Let's now generate the Entity Data Model for this table. Entity Framework will create a single entity, Worker, for it:

Figure: The generated model for the table created for implementing the TPH inheritance

Adding concrete classes to the Entity Data Model

From the application perspective, it would be a much better solution if we had classes such as Employee and Vendor derived from the Worker entity. The Worker class would contain all the common properties, while Employee and Vendor would contain their respective specific properties. So, let's add a new entity for Employee; while creating the entity, we can specify the base class entity as Worker:

Figure: Adding a new entity in the Entity Data Model using a base class type

Similarly, we will add the Vendor entity to our Entity Data Model, and specify the Worker entity as its base class entity. Once the entities are generated, our conceptual model will look like this:

Figure: The Entity Data Model after adding the derived entities

Next, we have to remove the Salary and HourlyRate properties from the Worker entity, and put them in the Employee and Vendor entities respectively.
Once the properties are put into the respective entities, our final Entity Data Model will look like this:

Figure: The Entity Data Model after moving the respective properties into the derived entities

Mapping the concrete class properties to the respective tables and columns

After this, we have to define the column mappings in the derived classes to let them know which table and column should be used to store their data. We also need to specify the mapping condition. The Employee entity should save the Salary property's value in the Salary column of the Workers table when Salary is Not Null and HourlyRate is Null:

Figure: Table mapping and conditions to map the Employee entity to the respective tables

Once this mapping is done, we have to mark the Salary property as Nullable=false in the entity's property window. This lets Entity Framework know that if someone creates an object of the Employee type, the Salary field is mandatory:

Figure: Setting the Employee entity's Salary property as non-nullable

Similarly, the Vendor entity should save the HourlyRate property's value in the HourlyRate column of the Workers table when Salary is Null and HourlyRate is Not Null:

Figure: Table mapping and conditions to map the Vendor entity to the respective tables

As with the Employee entity, we also have to mark the HourlyRate property as Nullable=false in the entity's property window, so that Entity Framework knows that if someone creates an object of the Vendor type, the HourlyRate field is mandatory:

Figure: Setting the Vendor entity's HourlyRate property as non-nullable

Making the base class entity abstract

There is one last change needed before we can use these models: we need to mark the base class as abstract, so that Entity Framework is able to resolve objects of the Employee and Vendor types to the Workers table.

Figure: Marking the base class Worker as abstract

This is also a better model from the application perspective, because the Worker entity itself has no meaning from the application domain perspective.

Using the entities via the DbContext object

Now we have our Entity Data Model configured to use the TPH inheritance. Let's create an Employee object and a Vendor object, and add them to the database using the TPH inheritance hierarchy:

using (SampleDbEntities db = new SampleDbEntities())
{
    Employee employee = new Employee();
    employee.FirstName = "Employee 1";
    employee.LastName = "Employee 1";
    employee.PhoneNumber = "1234567";
    employee.Salary = 50000;
    employee.EmailID = "employee1@test.com";

    Vendor vendor = new Vendor();
    vendor.FirstName = "vendor 1";
    vendor.LastName = "vendor 1";
    vendor.PhoneNumber = "1234567";
    vendor.HourlyRate = 100;
    vendor.EmailID = "vendor1@test.com";

    db.Workers.Add(employee);
    db.Workers.Add(vendor);
    db.SaveChanges();
}

In the preceding code, we created objects of the Employee and Vendor types, and then added them to the Workers collection using the DbContext object. Entity Framework will look at the mappings of the base entity and the derived entities, check the mapping conditions against the actual values of the properties, and then push the data to the respective tables. So, let's take a look at the data inserted in the Workers table:

Figure: A database snapshot after inserting the data using the Employee and Vendor entities

So, we can see that for our Employee and Vendor models, the actual data is kept in a single table by means of Entity Framework's TPH inheritance.
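A quick aside that is not part of the original article: whichever strategy is in place, the hierarchy can be queried back polymorphically with LINQ's standard OfType<T>() operator. The following is a minimal sketch, assuming the same SampleDbEntities context and Workers entity set used above:

// Minimal sketch (not from the article); requires using System; using System.Linq;
using (SampleDbEntities db = new SampleDbEntities())
{
    // Fetch only the workers that materialize as Employee entities.
    var employees = db.Workers.OfType<Employee>().ToList();
    Console.WriteLine(employees.Count + " employees found");

    // Iterate over the whole hierarchy and branch on the runtime type.
    foreach (var worker in db.Workers)
    {
        Vendor vendor = worker as Vendor;
        if (vendor != null)
        {
            Console.WriteLine("Vendor rate: " + vendor.HourlyRate);
        }
    }
}

Entity Framework translates the OfType<T>() filter into the appropriate SQL for the chosen inheritance strategy, so the calling code stays the same whether TPT or TPH is used.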
The Table per Concrete Class inheritance

The Table per Concrete Class (TPC) inheritance can be used when the database contains separate tables for all the logical entities, and these tables have some common fields. In our existing example, if there were two separate tables for Employee and Vendor, the database schema would look like the following:

Figure: The database schema showing the TPC inheritance

One of the major problems in such a database design is the duplication of columns across the tables, which is not recommended from the database normalization perspective.

To implement the TPC inheritance, we need to perform the following tasks:

1. Generate the default Entity Data Model.
2. Create the abstract class.
3. Modify the CSDL to cater to the change.
4. Specify the mapping to implement the TPC inheritance.
5. Use the entities via the DbContext object.

Generating the default Entity Data Model

Let's now take a look at the generated entities for this database schema:

Figure: The default generated entities for the TPC inheritance database schema

Entity Framework has given us separate entities for these two tables. From our application domain perspective, we can use these entities in a better way if all the common properties are moved to a common abstract class. The Employee and Vendor entities will then contain only the properties specific to them, and inherit from this abstract class to use all the common properties.

Creating the abstract class

Let's add a new entity called Worker to our conceptual model and move the common properties into this entity:

Figure: Adding a base class for all the common properties

Next, we have to mark this class as abstract from the properties window:

Figure: Marking the base class as abstract

Modifying the CSDL to cater to the change

Next, we have to specify the mapping for these tables. Unfortunately, the Visual Entity Designer has no support for this type of mapping, so we need to perform it ourselves in the EDMX XML file. The conceptual schema definition language (CSDL) part of the EDMX file is all set, since we have already moved the common properties into the abstract class, so we should be able to use these properties through an abstract class handle. The problem lies in the store schema definition language (SSDL) and the mapping specification language (MSL).

The first thing we need to do is change the SSDL to let Entity Framework know that the abstract class Worker is capable of saving the data in two tables. This can be done by setting the EntitySet names in the EntityContainer tags as follows:

<EntityContainer Name="todoDbModelStoreContainer">
  <EntitySet Name="Employee" EntityType="Self.Employee" Schema="dbo" store:Type="Tables" />
  <EntitySet Name="Vendor" EntityType="Self.Vendor" Schema="dbo" store:Type="Tables" />
</EntityContainer>

Specifying the mapping to implement the TPC inheritance

Next, we need to change the MSL to properly map the properties to the respective tables based on the actual type of the object. For this, we have to specify EntitySetMapping.
The EntitySetMapping should look like the following:

<EntityContainerMapping StorageEntityContainer="todoDbModelStoreContainer" CdmEntityContainer="SampleDbEntities">
  <EntitySetMapping Name="Workers">
    <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Vendor)">
      <MappingFragment StoreEntitySet="Vendor">
        <ScalarProperty Name="HourlyRate" ColumnName="HourlyRate" />
        <ScalarProperty Name="EMailId" ColumnName="EMailId" />
        <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />
        <ScalarProperty Name="LastName" ColumnName="LastName" />
        <ScalarProperty Name="FirstName" ColumnName="FirstName" />
        <ScalarProperty Name="ID" ColumnName="ID" />
      </MappingFragment>
    </EntityTypeMapping>
    <EntityTypeMapping TypeName="IsTypeOf(SampleDbModel.Employee)">
      <MappingFragment StoreEntitySet="Employee">
        <ScalarProperty Name="ID" ColumnName="ID" />
        <ScalarProperty Name="Salary" ColumnName="Salary" />
        <ScalarProperty Name="EMailId" ColumnName="EMailId" />
        <ScalarProperty Name="PhoneNumber" ColumnName="PhoneNumber" />
        <ScalarProperty Name="LastName" ColumnName="LastName" />
        <ScalarProperty Name="FirstName" ColumnName="FirstName" />
      </MappingFragment>
    </EntityTypeMapping>
  </EntitySetMapping>
</EntityContainerMapping>

In the preceding code, we specified that if the actual type of the object is Vendor, the properties should map to the columns of the Vendor table, and if the actual type is Employee, the properties should map to the Employee table, as shown in the following screenshot:

Figure: After the EDMX modifications, the mappings are visible in Visual Entity Designer

If we now open the EDMX file again, we can see the properties being mapped to the respective tables in the respective entities. Unfortunately, doing this mapping from Visual Entity Designer is not possible.

Using the entities via the DbContext object

Let's use these entities from our code:

using (SampleDbEntities db = new SampleDbEntities())
{
    Employee employee = new Employee();
    employee.FirstName = "Employee 1";
    employee.LastName = "Employee 1";
    employee.PhoneNumber = "1234567";
    employee.Salary = 50000;
    employee.EMailId = "employee1@test.com";

    Vendor vendor = new Vendor();
    vendor.FirstName = "vendor 1";
    vendor.LastName = "vendor 1";
    vendor.PhoneNumber = "1234567";
    vendor.HourlyRate = 100;
    vendor.EMailId = "vendor1@test.com";

    db.Workers.Add(employee);
    db.Workers.Add(vendor);
    db.SaveChanges();
}

In the preceding code, we created objects of the Employee and Vendor types, and saved them using the Workers entity set, whose element type is actually an abstract class. If we take a look at the database, we will see the following:

Figure: Database snapshot of the inserted data using TPC inheritance

From the preceding screenshot, it is clear that the data is pushed to the respective tables. The insert operation we just performed succeeds, but there will be an exception in the application. This exception occurs because when Entity Framework tries to access the values through the abstract class, it finds two records with the same ID, and since the ID column is specified as a primary key, two records with the same value are a problem in this scenario. This exception clearly shows that store/database-generated identity columns will not work with the TPC inheritance.
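As an illustration that goes beyond the original text, here is a minimal sketch of the first workaround, assuming the ID property has been remodeled as a Guid (a uniqueidentifier column) so that the application can generate keys that stay unique across both tables:

// Hedged sketch: assumes ID is a Guid, not a database identity column.
using (SampleDbEntities db = new SampleDbEntities())
{
    Employee employee = new Employee();
    employee.ID = Guid.NewGuid();   // client-generated key
    employee.FirstName = "Employee 1";

    Vendor vendor = new Vendor();
    vendor.ID = Guid.NewGuid();     // cannot collide with employee.ID
    vendor.FirstName = "vendor 1";

    db.Workers.Add(employee);
    db.Workers.Add(vendor);
    db.SaveChanges();               // no duplicate-key problem on read-back
}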
In general, if we want to use the TPC inheritance, we either need to use GUID-based IDs as sketched above, pass the ID from the application, or use some database mechanism that can maintain the uniqueness of auto-generated columns across multiple tables.

Choosing the inheritance strategy

Now that we know all the inheritance strategies supported by Entity Framework, let's analyze these approaches. The most important point is that no single strategy works for all scenarios, especially if we have a legacy database. The best option is to analyze the application requirements and then look at the existing table structure to see which approach suits best.

The Table per Class Hierarchy inheritance tends to give us denormalized tables with redundant columns. We should only use it when the number of properties in the derived classes is very small, so that the number of redundant columns is also small and the denormalized structure will not create problems over time.

Contrary to TPH, if we have many properties specific to the derived classes and only a few common properties, we can use the Table per Concrete Class inheritance. However, in this approach, we end up with some properties being repeated in all the tables. This approach also imposes limitations, such as not being able to use auto-increment identity columns in the database.

If we have many common properties that could go into a base class and many properties specific to the derived classes, then Table per Type is probably the best option to go with.

In any case, complex inheritance relationships that become unmanageable in the long run should be avoided. One alternative is to keep separate domain models that implement the application logic in an object-oriented manner, and then use mappers to map these domain models to Entity Framework's generated entity models.

Summary

In this article, we looked at the various types of inheritance relationships in Entity Framework. We saw how these inheritance relationships can be implemented, along with some guidelines on which should be used in which scenario.

Resources for Article:

Further resources on this subject:
Working with Zend Framework 2.0 [article]
Hosting the service in IIS using the TCP protocol [article]
Applying LINQ to Entities to a WCF Service [article]


How to Add Frameworks to iOS Applications with Carthage

Fabrizio Brancati
27 Sep 2016
5 min read
With the advent of iOS 8, Apple allowed the option of creating dynamic frameworks. In this post, you will learn how to create a dynamic framework from the ground up, and you will use Carthage to add frameworks to your apps. Let's get started!

Creating the Xcode project

Open Xcode and create a new project. Select Framework & Library under the iOS section of the templates, and then Cocoa Touch Framework. Type a name for your framework and select Swift as the language. We will create a framework that helps to store data using NSUserDefaults. We can name it DataStore, which is a generic name, in case we want to expand it in the future to allow for the use of other data stores such as Core Data.

The project is now empty, and you have to add your first class, so add a new Swift file and name it DataStore, like the framework name. You need to create the class:

public enum DataStoreType {
    case UserDefaults
}

public class DataStore {
    private init() {}

    public static func save(data: AnyObject, forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().setObject(data, forKey: key)
        }
    }

    public static func read(forKey key: String, in store: DataStoreType) -> AnyObject? {
        switch store {
        case .UserDefaults:
            return NSUserDefaults.standardUserDefaults().objectForKey(key)
        }
    }

    public static func delete(forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().removeObjectForKey(key)
        }
    }
}

Here we have created a DataStoreType enum to allow for future expansion, and the DataStore class with functions to save, read, and delete. That's it! You have just created the framework!

How to use the framework

To use the created framework, build it with CMD + B, right-click on the framework in the Products folder of the Xcode project, and click on Show in Finder. To use it, you must drag and drop this file into your project. In this case, we will create an example project to show you how to do it. Add the framework to your app project by adding it to the Embedded Binaries section on the General page of the Xcode project. Note that if you see it duplicated in the Linked Frameworks and Libraries section, you can remove the first one.

You have just included your framework in the app. Now we have to use it, so import it (I will import it in the ViewController class for test purposes, but you can import it wherever you want), and let's use the DataStore framework by saving and reading a String from NSUserDefaults. This is the code:

import UIKit
import DataStore

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        DataStore.save("Test", forKey: "Test", in: .UserDefaults)
        print(DataStore.read(forKey: "Test", in: .UserDefaults)!)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

Build the app and see the framework do its work! You should see this in the Xcode console:

Test

Now you have created a framework in Swift and used it with an app! Note that the framework created for the iOS Simulator is different from the one created for a device, because it is built for a different architecture. To build a universal framework, you can use Carthage, which is shown in the next section.
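Before moving on, here is a compact usage sketch of my own (the key and value below are made up) that exercises the full DataStore API defined above, including the delete function, which the post does not otherwise call:

import DataStore

// Illustrative usage of the framework defined above.
DataStore.save("Jane", forKey: "username", in: .UserDefaults)

if let name = DataStore.read(forKey: "username", in: .UserDefaults) {
    print(name) // prints "Jane"
}

DataStore.delete(forKey: "username", in: .UserDefaults)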
Using Carthage

Carthage is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. To install it, you can download the Carthage.pkg file from GitHub, or use Homebrew:

brew update
brew install carthage

Because Carthage is only able to build a framework from Git, we will use Alamofire, a popular HTTP networking library available on GitHub. Open the project folder and create a file named Cartfile. This is where we tell Carthage what it has to build and in which version:

github "Alamofire/Alamofire"

We don't specify a version here because this is only a test, but pinning a version is good practice. Now open the Terminal app, go into the project folder, and type:

carthage update

You should see Carthage do some work; when it has finished, use Finder to go to the project folder, then to Carthage/Build/iOS, and that is where the framework is. To add it to the app, we have to do a bit more work than before. Drag and drop the framework from the Carthage/Build/iOS folder into the Linked Frameworks and Libraries section on the General settings tab of the Xcode project. On the Build Phases tab, click on the + icon and choose New Run Script Phase with the following script:

/usr/local/bin/carthage copy-frameworks

Now you can add the paths of the frameworks under Input Files, which in this case is:

$(SRCROOT)/FrameworkTest/Carthage/Build/iOS/Alamofire.framework

This script works around an App Store submission bug triggered by universal binaries, and ensures that the necessary bitcode-related files and dSYMs are copied when archiving. Now you only have to import the frameworks in your Swift file and use them like we did earlier in this post!

Summary

In this post, you learned how to create a custom framework for sharing code between your apps, along with the creation of a GitHub repository to share your open source framework with the community of developers. You also learned how to use Carthage with your GitHub repository, or with a popular framework like Alamofire, and how to import it into your apps.

About the Author

Fabrizio Brancati is a mobile app developer and web developer currently working and living in Milan, Italy, with a passion for innovation and discovering new things. He has developed with Objective-C since the days of iOS 3 and the iPod touch. When Swift came out, he learned it and was so excited that he rewrote an Objective-C framework available on GitHub in Swift (BFKit / BFKit-Swift). Software development is his driving passion, and he loves it when others make use of his software.


Push your data to the Web

Packt
22 Feb 2016
27 min read
This article covers the following topics:

An introduction to the Shiny app framework
Creating your first Shiny app
The connection between the server file and the user interface
The concept of reactive programming
Different types of interface layouts, widgets, and Shiny tags
How to create a dynamic user interface
Ways to share your Shiny applications with others
How to deploy Shiny apps to the web

(For more resources related to this topic, see here.)

Introducing Shiny – the app framework

The Shiny package delivers a powerful framework to build fully featured interactive web applications with just R and RStudio. Basic Shiny applications typically consist of two components:

~/shinyapp
|-- ui.R
|-- server.R

While the ui.R file defines the appearance of the user interface, the server.R file contains all the code for the execution of the app. The look of the user interface is based on the famous Twitter Bootstrap framework, which makes the look and layout highly customizable and fully responsive. In fact, you only need to know R and how to use the shiny package to build a pretty web application; a little knowledge of HTML, CSS, and JavaScript may also help.

If you want to get an idea of what is possible with the Shiny package, it is advisable to take a look at the built-in examples. Just load the library and enter the example name:

library(shiny)
runExample("01_hello")

As you can see, running the first example opens the Shiny app in a new window. This app creates a simple histogram plot where you can interactively change the number of bins. Further, this example allows you to inspect the corresponding ui.R and server.R code files. There are currently eleven built-in example apps:

01_hello
02_text
03_reactivity
04_mpg
05_sliders
06_tabsets
07_widgets
08_html
09_upload
10_download
11_timer

These examples focus mainly on the user interface possibilities and elements that you can create with Shiny.

Creating a new Shiny web app with RStudio

RStudio offers a fast and easy way to create the skeleton of every new Shiny app. Just click on New Project and select the New Directory option in the newly opened window. After that, click on the Shiny Web Application field, give your new app a name in the next step, and click on Create Project. RStudio will then open a ready-to-use Shiny app with prefilled ui.R and server.R files. You can click on the now visible Run App button in the right corner of the file pane to display the prefilled example application.

Creating your first Shiny application

In your effort to create your first Shiny application, you should first create or consider rough sketches for your app. Questions that you might ask in this context are: What do I want to show? How do I want to show it? And so on.

Let's say we want to create an application that allows users to explore some of the variables of the mtcars dataset. The data was extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models).

Sketching the final app

We want the user of the app to be able to select one out of three variables of the dataset, which then gets displayed in a histogram. Furthermore, we want users to get a summary of the data under the main plot. So, the following figure could be a rough project sketch:

Figure: A rough sketch of the final app

Constructing the user interface for your app

We will reuse the already opened ui.R file from the RStudio example, and adapt it to our needs.
The layout of the ui.R file for your first app is controlled by nested Shiny functions, and looks like the following lines:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "Variable:",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
  mainPanel(
    plotOutput("carsPlot"),
    verbatimTextOutput("carsSummary")
  )
))

Creating the server file

The server file holds all the code for the execution of the application:

library(shiny)
library(datasets)

shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
  output$carsSummary <- renderPrint({
    summary(mtcars[, input$variable])
  })
})

The final application

After changing the ui.R and server.R files according to our needs, just hit the Run App button, and the final app opens in a new window. As planned in the app sketch, the app offers the user a drop-down menu on the left side to choose the desired variable, and shows a histogram and data summary of the selected variable on the right side.

Deconstructing the final app into its components

For a better understanding of the Shiny application logic and the interplay of the two main files, ui.R and server.R, we will disassemble your first app into its individual parts.

The components of the user interface

We have divided the user interface into three parts. After loading the Shiny library, the complete look of the app gets defined by the shinyUI() function. In our app sketch, we chose a sidebar look; therefore, the shinyUI() function holds the argument pageWithSidebar():

library(shiny)
shinyUI(pageWithSidebar(
...

The headerPanel() argument is certainly the simplest component, since usually only the title of the app is stored in it. In our ui.R file, it is just a single line of code:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
...

The sidebarPanel() function defines the look of the sidebar and, most importantly, handles the input of the variables of the chosen mtcars dataset:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "Variable:",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
...

Finally, the mainPanel() function ensures that the output is displayed. In our case, this is the histogram and the data summary for the selected variables:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("My First Shiny App"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "Variable:",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
  mainPanel(
    plotOutput("carsPlot"),
    verbatimTextOutput("carsSummary")
  )
))

The server file in detail

While the ui.R file defines the look of the app, the server.R file holds the instructions for the execution of the R code. Again, we use our first app to deconstruct the related server.R file into its most important parts. After loading the needed libraries, datasets, and further scripts, the function shinyServer(function(input, output) {}) defines the server logic:

library(shiny)
library(datasets)
shinyServer(function(input, output) {

The lines of code that follow translate the inputs of the ui.R file into matching outputs.
In our case, the server-side output$ object is assigned to carsPlot, which in turn is called in the mainPanel() function of the ui.R file by plotOutput(). Moreover, the render* function, renderPlot() in our example, reflects the type of output; here, of course, it is the histogram plot. Within the renderPlot() function, you can recognize the input$ object assigned to the variable that was defined in the user interface file:

library(shiny)
library(datasets)
shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
...

In the following lines, you see another type of render function, renderPrint(), and within the curly braces, the actual R function, summary(), with the defined input variable:

library(shiny)
library(datasets)
shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    hist(mtcars[, input$variable],
         main = "Histogram of mtcars variables",
         xlab = input$variable)
  })
  output$carsSummary <- renderPrint({
    summary(mtcars[, input$variable])
  })
})

There are plenty of different render functions. The most frequently used are as follows:

renderPlot: This creates normal plots
renderPrint: This gives printed output types
renderUI: This gives HTML or Shiny tag objects
renderTable: This gives tables, data frames, and matrices
renderText: This creates character strings

Every piece of code outside the shinyServer() function runs only once, on the first launch of the app, while all the code in between the brackets and before the output functions runs as often as a user visits or refreshes the application. The code within the output functions runs every time a user changes the widget that the corresponding output depends on.

The connection between the server and the ui file

As already inspected in our decomposed Shiny app, the input functions of the ui.R file are linked with the output functions of the server file. The following figure illustrates this again:

Figure: The connection between the server and the ui file

The concept of reactivity

Shiny uses a reactive programming model, and this is a big deal. By applying reactive programming, the framework is able to be fast, efficient, and robust. Briefly: when you change the input in the user interface, Shiny rebuilds the related output. Shiny uses three reactive objects:

Reactive source
Reactive conductor
Reactive endpoint

For simplicity, we use the formal terms of the RStudio documentation: the implementation of a reactive source is the reactive value; that of a reactive conductor is a reactive expression; and the reactive endpoint is also called the observer.

The source and endpoint structure

As taught in the previous section, the defined input of the ui.R file links to the output of the server.R file. For simplicity, we use the code from our first Shiny app again:
...
output$carsPlot <- renderPlot({
  hist(mtcars[, input$variable],
       main = "Histogram of mtcars variables",
       xlab = input$variable)
})
...

The input variable, in our app the choice between Horsepower, Miles per Gallon, and Number of Carburetors, represents the reactive source. The histogram called carsPlot stands for a reactive endpoint. In fact, it is possible to link one reactive source to numerous reactive endpoints, and vice versa. In our Shiny app, we also connected the input variable to our second output, carsSummary:

...
output$carsPlot <- renderPlot({
  hist(mtcars[, input$variable],
       main = "Histogram of mtcars variables",
       xlab = input$variable)
})

output$carsSummary <- renderPrint({
  summary(mtcars[, input$variable])
})
...

To sum it up, this structure ensures that every time a user changes the input, the output refreshes automatically and accordingly.

The purpose of the reactive conductor

The reactive conductor differs from the reactive source and the endpoint in that it can both be dependent and have dependents. Therefore, it can be placed between the source, which can only have dependents, and the endpoint, which in turn can only be dependent. The primary purpose of a reactive conductor is the encapsulation of heavy and difficult computations; in fact, reactive expressions cache the results of these computations. The following figure displays a possible connection of the three reactive types:

Figure: A possible connection of the three reactive types

In general, reactivity gives the impression of a logically directional system: after input, the output occurs, and you get the feeling that an input pushes information to an output. But this isn't the case; in reality, it works the other way around. The output pulls the information from the input. This all works thanks to sophisticated server logic: the input sends a callback to the server, which in turn informs the output, which pulls the needed value from the input and shows the result to the user. For a user, of course, this all feels like an instant update on any input change, and overall, like responsive app behavior. We have just touched upon the main aspects of reactivity, but now you know what's really going on under the hood of Shiny.

Discovering the scope of the Shiny user interface

Now that you know how to build a simple Shiny application, and how reactivity works, let's take a look at the next step: the various resources for creating a custom user interface. There are nearly endless possibilities to shape the look and feel of the layout. As already mentioned, the entire HTML, CSS, and JavaScript logic and functions of the layout options are based on the highly flexible Bootstrap framework. And, of course, everything is responsive by default, which makes it possible for the final application layout to adapt to the screen of any device.

Exploring the Shiny interface layouts

Currently, there are four common shinyUI() page layouts:

pageWithSidebar()
fluidPage()
navbarPage()
fixedPage()

These page layouts can, in turn, be structured with different functions for a custom inner arrangement of the page. In the following sections, we introduce the most useful inner layout functions, using our first Shiny application as an example.

The sidebar layout

In the sidebar layout, the sidebarPanel() function is used as the input area and the mainPanel() function as the output area, just like in our first Shiny app. The sidebar layout uses the pageWithSidebar() function:

library(shiny)
shinyUI(pageWithSidebar(
  headerPanel("The Sidebar Layout"),
  sidebarPanel(
    selectInput(inputId = "variable",
                label = "This is the sidebarPanel",
                choices = c("Horsepower" = "hp",
                            "Miles per Gallon" = "mpg",
                            "Number of Carburetors" = "carb"),
                selected = "hp")
  ),
  mainPanel(
    tags$h2("This is the mainPanel"),
    plotOutput("carsPlot"),
    verbatimTextOutput("carsSummary")
  )
))

When you change only the first three functions, you can create exactly the same look with the fluidPage() layout.
This is the sidebar layout with the fluidPage() function:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Sidebar Layout"),
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "variable",
                  label = "This is the sidebarPanel",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      tags$h2("This is the mainPanel"),
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  )
))

The grid layout

In the grid layout, rows are created with the fluidRow() function, and the input and output areas are placed in freely customizable columns. Naturally, the maximum of 12 columns of the Bootstrap grid system must be respected. This is the grid layout with the fluidPage() function and a 4-8 grid:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Grid Layout"),
  fluidRow(
    column(4,
      selectInput(inputId = "variable",
                  label = "Four-column input area",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    column(8,
      tags$h3("Eight-column output area"),
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  )
))

As you can see from inspecting the preceding ui.R file, the width of the columns is defined within the fluidRow() function, and the sum of these two columns adds up to 12. Since the allocation of the columns is completely flexible, you can also create something like the grid layout with the fluidPage() function and a 4-4-4 grid:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Grid Layout"),
  fluidRow(
    column(4,
      selectInput(inputId = "variable",
                  label = "Four-column input area",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    column(4,
      tags$h5("Four-column output area"),
      plotOutput("carsPlot")
    ),
    column(4,
      tags$h5("Another four-column output area"),
      verbatimTextOutput("carsSummary")
    )
  )
))

The tabset panel layout

The tabsetPanel() function can be built into the mainPanel() function of the aforementioned sidebar layout page. By applying this function, you can integrate several tabbed outputs into one view. This is the tabset layout with the fluidPage() function and three tab panels:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Tabset Layout"),
  sidebarLayout(
    sidebarPanel(
      selectInput(inputId = "variable",
                  label = "Select a variable",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      tabsetPanel(
        tabPanel("Plot", plotOutput("carsPlot")),
        tabPanel("Summary", verbatimTextOutput("carsSummary")),
        tabPanel("Raw Data", dataTableOutput("tableData"))
      )
    )
  )
))

After changing the code to include the tabsetPanel() function, the three tabs created with the tabPanel() function display their respective outputs. With the help of this layout, you are no longer limited to stacking several outputs beneath each other. Instead, you can display each output in its own tab, while the sidebar does not change. The position of the tabs is flexible and can be set to above, below, right, or left. For example, in the following code detail, the position of the tabsetPanel() function is assigned as follows:

...
mainPanel(
  tabsetPanel(position = "below",
    tabPanel("Plot", plotOutput("carsPlot")),
    tabPanel("Summary", verbatimTextOutput("carsSummary")),
    tabPanel("Raw Data", tableOutput("tableData"))
  )
)
...
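One detail worth noting: the tabset example above references a tableData output that the article's server.R never defines. A minimal sketch of a matching server-side assignment, my assumption rather than the article's code, could look like this:

# Assumed server-side counterpart for the "Raw Data" tab
output$tableData <- renderDataTable({
  mtcars
})

Here, renderDataTable() is the rendering function that pairs with the dataTableOutput() element used in the ui.R file.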
The navlist panel layout

The navlistPanel() function is similar to the tabsetPanel() function, and can be seen as an alternative if you need to integrate a large number of tabs. The navlistPanel() function also uses the tabPanel() function to include outputs:

library(shiny)
shinyUI(fluidPage(
  titlePanel("The Navlist Layout"),
  navlistPanel(
    "Discovering The Dataset",
    tabPanel("Plot", plotOutput("carsPlot")),
    tabPanel("Summary", verbatimTextOutput("carsSummary")),
    tabPanel("Another Plot", plotOutput("barPlot")),
    tabPanel("Even A Third Plot", plotOutput("thirdPlot")),
    "More Information",
    tabPanel("Raw Data", tableOutput("tableData")),
    tabPanel("More Datatables", tableOutput("moreData"))
  )
))

The navbar page as the page layout

In the previous examples, we used the page layouts fluidPage() and pageWithSidebar() in the first line. However, especially when you want to create an application with a variety of tabs, sidebars, and various input and output areas, it is recommended that you use the navbarPage() layout. This function makes use of the standard top navigation of the Bootstrap framework:

library(shiny)
shinyUI(navbarPage("The Navbar Page Layout",
  tabPanel("Data Analysis",
    sidebarPanel(
      selectInput(inputId = "variable",
                  label = "Select a variable",
                  choices = c("Horsepower" = "hp",
                              "Miles per Gallon" = "mpg",
                              "Number of Carburetors" = "carb"),
                  selected = "hp")
    ),
    mainPanel(
      plotOutput("carsPlot"),
      verbatimTextOutput("carsSummary")
    )
  ),
  tabPanel("Calculations"
    ...
  ),
  tabPanel("Some Notes"
    ...
  )
))

Adding widgets to your application

After inspecting the most important page layouts in detail, we now look at the different interface input and output elements. By adding these widgets, panels, and other interface elements to an application, we can further customize each page layout.

Shiny input elements

Back in our first Shiny application, we already met a typical Shiny input element: the selection box widget. But, of course, there are many more widgets for different types of use. All widgets can take several arguments; the minimum setup is to assign an inputId, which instructs the input slot how to communicate with the server file, and a label, the text shown with the widget. Each widget can also have its own specific arguments.

Figure: Two versions of the slider input widget

As an example, we look at the code of the second slider shown above, a slider range:

sliderInput(inputId = "sliderExample",
            label = "Slider range",
            min = 0, max = 100,
            value = c(25, 75))

Besides the mandatory arguments inputId and label, three more values have been added to the slider widget. The min and max arguments specify the minimum and maximum values that can be selected; in our example, these are 0 and 100. A numeric vector was assigned to the value argument, which creates a double-ended range slider. This vector must logically lie within the set minimum and maximum values. Currently, there are more than twenty different input widgets, which are all individually configurable through their own sets of arguments.

A brief overview of the output elements

As we have seen, the output elements in the ui.R file are connected to the rendering functions in the server file. The mainly used output elements are:

htmlOutput
imageOutput
plotOutput
tableOutput
textOutput
verbatimTextOutput
downloadButton

Due to their unambiguous naming, the purpose of these elements should be clear.
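To make the pairing of output elements and render functions explicit, here is a minimal illustration of my own (the IDs name and greeting are invented for this sketch):

# ui.R (fragment): an input widget and an output element
textInput(inputId = "name", label = "Your name:")
textOutput("greeting")

# server.R (fragment): the matching render function
output$greeting <- renderText({
  paste("Hello,", input$name)
})

The same pattern holds for every pair: plotOutput() is fed by renderPlot(), tableOutput() by renderTable(), and so on.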
Individualizing your app even further with Shiny tags

Although you don't need to know HTML to create stunning Shiny applications, you have the option to create highly customized apps with the use of raw HTML or so-called Shiny tags. To add raw HTML, you can use the HTML() function; here, we will focus on Shiny tags. Currently, there are over a hundred different Shiny tag objects, which can be used to add text styling, colors, different headers, visual and audio elements, lists, and many more things. You can use these tags by writing tags$tagname. The following is a brief list of useful tags:

tags$h1: This is a first-level header; of course, you can also use the known h1-h6 levels
tags$hr: This makes a horizontal line, also known as a thematic break
tags$br: This makes a line break, a popular way to add some space
tags$strong: This makes the text bold
tags$div: This makes a division of text with a uniform style
tags$a: This links to a webpage
tags$iframe: This makes an inline frame for embedding possibilities

The following ui.R file and the corresponding screenshot show the usage of Shiny tags by example:

shinyUI(fluidPage(
  fluidRow(
    column(6,
      tags$h3("Customize your app with Shiny tags!"),
      tags$hr(),
      tags$a(href = "http://www.rstudio.com", "Click me"),
      tags$hr()
    ),
    column(6,
      tags$br(),
      tags$em("Look - the R project logo"),
      tags$br(),
      tags$img(src = "http://www.r-project.org/Rlogo.png")
    )
  ),
  fluidRow(
    column(6,
      tags$strong("We can even add a video"),
      tags$video(src = "video.mp4", type = "video/mp4",
                 autoplay = NA, controls = NA)
    ),
    column(6,
      tags$br(),
      tags$ol(
        tags$li("One"),
        tags$li("Two"),
        tags$li("Three"))
    )
  )
))

Creating dynamic user interface elements

We now know how to build completely custom user interfaces with all the bells and whistles. But all the interface elements introduced so far are fixed and static. If you need to create dynamic interface elements, Shiny offers three ways to achieve this:

The conditionalPanel() function
The renderUI() function
The use of directly injected JavaScript code

In the following sections, we only show how to use the first two, because firstly, they are built into the Shiny package, and secondly, the JavaScript method is indicated as experimental.

Using conditionalPanel

The conditionalPanel() function allows you to show or hide interface elements dynamically, and is set in the ui.R file. The dynamic behavior of this function is achieved by JavaScript expressions but, as usual in the Shiny package, all you need to know is R programming. The following example ui.R file shows how this function works:

library(shiny)
shinyUI(fluidPage(
  titlePanel("Dynamic Interface With Conditional Panels"),
  column(4, wellPanel(
    sliderInput(inputId = "n",
                label = "Number of points:",
                min = 10, max = 200, value = 50, step = 10)
  )),
  column(5,
    "The plot below will not be displayed when the slider value",
    "is less than 50.",
    conditionalPanel("input.n >= 50",
      plotOutput("scatterPlot", height = 300)
    )
  )
))

And this is the related server.R file:

library(shiny)
shinyServer(function(input, output) {
  output$scatterPlot <- renderPlot({
    x <- rnorm(input$n)
    y <- rnorm(input$n)
    plot(x, y)
  })
})

The code for this example application was taken from the Shiny gallery of RStudio (http://shiny.rstudio.com/gallery/conditionalpanel-demo.html). As you can read in both code files, the defined condition, input.n, is the linchpin for the dynamic behavior of the example app.
In the conditionalPanel() function, the condition states that input.n must have a value of 50 or higher for the plot to be shown, while the input and output of the plot work as already defined.

Taking advantage of the renderUI function

The renderUI() function, contrary to the previous method, is hooked into the server file to create a dynamic user interface. We have already introduced the different render output functions in this article. The following partial example shows the basic functionality, first in the ui.R file:

# Partial example taken from the Shiny documentation
numericInput("lat", "Latitude"),
numericInput("long", "Longitude"),
uiOutput("cityControls")

And then in the related server.R file:

# Partial example
output$cityControls <- renderUI({
  cities <- getNearestCities(input$lat, input$long)
  checkboxGroupInput("cities", "Choose Cities", cities)
})

As described, the dynamic part of this method is defined in the renderUI() process as an output, which then gets displayed through the uiOutput() function in the ui.R file.

Sharing your Shiny application with others

Typically, you create a Shiny application not only for yourself, but also for other users. There are two main ways to distribute your app: either you let users download your application, or you deploy it on the web.

Offering a download of your Shiny app

By offering the option to download your final Shiny application, other users can run your app locally. There are four ways to deliver your app this way. No matter which way you choose, it is important that the user has installed R and the Shiny package on his or her computer.

Gist

Gist is a public code-sharing pasteboard from GitHub. To share your app this way, it is important that both the ui.R file and the server.R file are in the same Gist and have been named correctly. There are two options to run apps via Gist: first, just enter runGist("Gist_URL") in the console of RStudio; or second, use the Gist ID and place it in the shiny::runGist("Gist_ID") function. Gist is a very easy way to share your application, but you need to keep in mind that your code is published on a third-party server.

GitHub

The next way to enable users to download your app is through a GitHub repository. To run an application from GitHub, you need to enter the command shiny::runGitHub("Repository_Name", "GitHub_Account_Name") in the console.

Zip file

There are two ways to share a Shiny application as a zip file: you can either let the user download the zip file over the web, or you can share it via email, USB stick, memory card, or any other such medium. To download and run a zip file via the web, type runUrl("Zip_File_URL") in the console.

Package

A much more labor-intensive, but also publicly effective, way is to create a complete R package for your Shiny application. This especially makes sense if you have built an extensive application that may help many other users. Another advantage is the fact that you can publish your application on CRAN. Later in the book, we will show you how to create an R package.

Deploying your app to the web

After showing you the ways users can download your app and run it on their local machines, we will now check the options for deploying Shiny apps to the web.

Shinyapps.io

http://www.shinyapps.io/ is a Shiny app-hosting service by RStudio.
Sharing your Shiny application with others

Typically, you create a Shiny application not only for yourself, but also for other users. There are two main ways to distribute your app; either you let users download your application, or you deploy it on the web.

Offering a download of your Shiny app

By offering the option to download your final Shiny application, other users can run your app locally. Actually, there are four ways to deliver your app this way. No matter which way you choose, it is important that the user has installed R and the Shiny package on his/her computer.

Gist

Gist is a public code-sharing pasteboard from GitHub. To share your app this way, it is important that both the ui.R file and the server.R file are in the same Gist and have been named correctly. Take a look at the following screenshot:

There are two options to run apps via Gist. First, just enter runGist("Gist_URL") in the console of RStudio; or second, just use the Gist ID and place it in the shiny::runGist("Gist_ID") function. Gist is a very easy way to share your application, but you need to keep in mind that your code is published on a third-party server.

GitHub

The next way to enable users to download your app is through a GitHub repository. To run an application from GitHub, you need to enter the command shiny::runGitHub("Repository_Name", "GitHub_Account_Name") in the console:

Zip file

There are two ways to share a Shiny application by zip file. You can either let the user download the zip file over the web, or you can share it via email, USB stick, memory card, or any other such device. To download a zip file via the web, you need to type runUrl("Zip_File_URL") in the console:

Package

Certainly, a much more labor-intensive but also publicly effective way is to create a complete R package for your Shiny application. This especially makes sense if you have built an extensive application that may help many other users. Another advantage is the fact that you can also publish your application on CRAN. Later in the book, we will show you how to create an R package.

Deploying your app to the web

After showing you the ways users can download your app and run it on their local machines, we will now check the options to deploy Shiny apps to the web.

Shinyapps.io

http://www.shinyapps.io/ is a Shiny app-hosting service by RStudio. There is a free-to-use account package, but it is limited to a maximum of five applications, 25 so-called active hours, and the apps are branded with the RStudio logo. Nevertheless, this service is a great way to publish one's own applications quickly and easily to the web. To use http://www.shinyapps.io/ with RStudio, a few R packages and some additional operating system software need to be installed:

RTools (if you use Windows)
GCC (if you use Linux)
XCode Command Line Tools (if you use Mac OS X)
The devtools R package
The shinyapps package

Since the shinyapps package is not on CRAN, you need to install it via GitHub by using the devtools package:

if (!require("devtools")) install.packages("devtools")
devtools::install_github("rstudio/shinyapps")
library(shinyapps)

When everything that is needed is installed, you are ready to publish your Shiny apps directly from the RStudio IDE. Just click on the Publish icon, and in the new window you will need to log in to your http://www.shinyapps.io/ account once, if you are using it for the first time. All other times, you can directly create a new Shiny app or update an existing app:

After clicking on Publish, a new tab called Deploy opens in the console pane, showing you the progress of the deployment process. If something is set incorrectly, you can use the deployment log to find the error:

When the deployment is successful, your app will be publicly reachable with its own web address on http://www.shinyapps.io/.

Setting up a self-hosted Shiny server

There are two editions of the Shiny Server software: an open source edition and a professional edition. The open source edition can be downloaded for free and you can use it on your own server. The professional edition offers a lot more features and support by RStudio, but is also priced accordingly.

Diving into the Shiny ecosystem

Since the Shiny framework is such an awesome and powerful tool, a lot of people, and of course the creators of RStudio and Shiny, have built several packages around it that enormously extend the existing functionalities of Shiny. These almost infinite possibilities of technical and visual individualization, which become apparent when deeply exploring the Shiny ecosystem, would certainly go beyond the scope of this article. Therefore, we are presenting only a few important directions to give a first impression.

Creating apps with more files

In this article, you have learned how to build a Shiny app consisting of two files: the server.R and the ui.R. To cover every aspect, we first want to point out that it is also possible to create a single-file Shiny app. To do so, create a file called app.R. In this file, you can include both the server.R and the ui.R file. Furthermore, you can include global variables, data, and more. If you build larger Shiny apps with multiple functions, datasets, options, and more, it could be very confusing if you do all of it in just one file. Therefore, single-file Shiny apps are a good idea for simple and small exhibition apps with a minimal setup. Especially for large Shiny apps, it is recommended that you outsource extensive custom functions, datasets, images, and more into their own files, but put them into the same directory as the app. An example file setup could look like this:

~/shinyapp
|-- ui.R
|-- server.R
|-- helper.R
|-- data
|-- www
|-- js
|-- etc

To access the helper file, you just need to add source("helper.R") to the code of your server.R file. The same logic applies to any other R files. If you want to read in some data from your data folder, you store it in a variable that also sits at the top of your server.R file, like this:

myData <- readRDS("data/myDataset.rds")
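Putting these pieces together, the head of a server.R file in such a multi-file setup might look like the following sketch; the helper function name and the dataset are, of course, just placeholders:

library(shiny)

# load outsourced functions and data once, when the app starts
source("helper.R")                      # e.g., defines a hypothetical summariseData()
myData <- readRDS("data/myDataset.rds") # hypothetical dataset

shinyServer(function(input, output) {
  output$summary <- renderPrint({
    summariseData(myData)  # helper function defined in helper.R
  })
})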
Expanding the Shiny package

As said earlier, you can expand the functionalities of Shiny with several add-on packages. There are currently ten packages available on CRAN with different inbuilt functions to add some extra magic to your Shiny app.

shinyAce: This package makes Ace editor bindings available to enable a rich text-editing environment within Shiny.
shinybootstrap2: The latest Shiny package uses Bootstrap 3; so, if you built your app with Bootstrap 2 features, you need to use this package.
shinyBS: This package adds the additional features of the original Twitter Bootstrap theme, such as tooltips, modals, and others, to Shiny.
shinydashboard: This package comes from the folks at RStudio and enables the user to create stunning and multifunctional dashboards on top of Shiny.
shinyFiles: This provides functionality for client-side navigation of the server-side file system in Shiny apps.
shinyjs: By using this package, you can perform common JavaScript operations in Shiny applications without having to know any JavaScript.
shinyRGL: This package provides Shiny wrappers for the RGL package. This package exposes RGL's ability to export WebGL visualizations in a Shiny-friendly format.
shinystan: This package is, in fact, not a real add-on. Shinystan is a fantastic full-blown Shiny application that gives users a graphical interface for Markov chain Monte Carlo simulations.
shinythemes: This package gives you the option of changing the whole look and feel of your application by using different inbuilt Bootstrap themes.
shinyTree: This exposes bindings to jsTree, a JavaScript library that supports interactive trees, to enable rich, editable trees in Shiny.

Of course, you can find a bunch of other packages with similar or even more functionalities, extensions, and also comprehensive Shiny apps on GitHub.

Summary

To learn more about Shiny, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning Shiny (https://www.packtpub.com/application-development/learning-shiny)
Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r)
Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r)
Queues and topics

Packt
10 Jul 2017
8 min read
In this article by Luca Stancapiano, the author of the book Mastering Java EE Development with WildFly, we will see how to implement the Java Message Service (JMS) in a queue channel using the WildFly console. (For more resources related to this topic, see here.)

JMS works with channels of messages that manage the messages asynchronously. These channels contain messages that will be collected or removed according to the configuration and the type of channel. The channels are of two types: queues and topics. Both are highly configurable through the WildFly console, and as with all components in WildFly, they can be installed through the console, the command line, or directly with the Maven plugins of the project. In the next two paragraphs, we will show what they mean and all the possible configurations.

Queues

Queues collect the sent messages that are waiting to be read. The messages are delivered in the order they are sent and are removed from the queue once they are read.

Create the queue from the web console

See now the steps to create a new queue through the web console. Connect to http://localhost:9990/, go to Configuration | Subsystems/Messaging - ActiveMQ/default, and click on Queues/Topics. Now select the Queues menu and click on the Add button. You will see this screen:

The parameters to insert are as follows:

Name: The name of the queue.
JNDI Names: The JNDI names the queue will be bound to.
Durable?: Whether the queue is durable or not.
Selector: The queue selector.

As with all enterprise components, JMS components are callable through the Java Naming and Directory Interface (JNDI). Durable queues keep messages around persistently for any suitable consumer to consume them. Durable queues do not need to concern themselves with which consumer is going to consume the messages at some point in the future. There is just one copy of a message that any consumer in the future can consume.

Message selectors allow you to filter the messages that a message consumer will receive. The filter is a relatively complex language similar to the syntax of an SQL WHERE clause. The selector can use all the message headers and properties for filtering operations, but cannot use the message content. Selectors are mostly useful for channels that broadcast a very large number of messages to their subscribers. On queues, only messages that match the selector will be returned. Others stay in the queue (and thus can be read by a MessageConsumer with a different selector). The following SQL elements are allowed in our filters, and we can put them in the Selector field of the form:

AND, OR, NOT: Logical operators. Example: (releaseYear < 1986) AND NOT (title = 'Bad')
String literals: String literals in single quotes; duplicate the quote to escape it. Example: title = 'Tom''s'
Number literals: Numbers in Java syntax; they can be double or integer. Example: releaseYear = 1982
Properties: Message properties that follow Java identifier naming. Example: releaseYear = 1983
Boolean literals: TRUE and FALSE. Example: isAvailable = FALSE
( ): Round brackets. Example: (releaseYear < 1981) OR (releaseYear > 1990)
BETWEEN: Checks whether a number is in a range (both numbers inclusive). Example: releaseYear BETWEEN 1980 AND 1989
Header fields: Any headers except JMSDestination, JMSExpiration, and JMSReplyTo. Example: JMSPriority = 10
=, <>, <, <=, >, >=: Comparison operators. Example: (releaseYear < 1986) AND (title <> 'Bad')
LIKE: String comparison with the wildcards '_' and '%'. Example: title LIKE 'Mirror%'
IN: Finds a value in a set of strings. Example: title IN ('Piece of mind', 'Somewhere in time', 'Powerslave')
IS NULL, IS NOT NULL: Checks whether a value is null or not null. Example: releaseYear IS NULL
*, +, -, /: Arithmetic operators. Example: releaseYear * 2 > 2000 - 18
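To illustrate where such a selector string ends up in code, here is a minimal, hypothetical sketch of a JMS 2.0 consumer that only receives matching messages; the bean and the selector are assumptions for illustration, reusing the queue JNDI name and a selector from the table above:

@Stateless
public class FilteredMessageReader {

    @Inject
    private JMSContext context;

    @Resource(mappedName = "java:/jms/queue/GPS")
    private Queue queue;

    public String readMatchingMessage() {
        // the second argument is the selector: only matching messages are returned
        JMSConsumer consumer = context.createConsumer(queue, "releaseYear BETWEEN 1980 AND 1989");
        return consumer.receiveBody(String.class, 1000); // wait up to one second
    }
}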
Fill the form now. In this article, we will implement a messaging service to send the coordinates of buses. The queue is created and shown in the queues list:

Create the queue using the CLI and the Maven WildFly plugin

The same thing can be done with the Command Line Interface (CLI). So start a WildFly instance, go to the bin directory of WildFly, and execute the following script:

bash-3.2$ ./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9990 /] /subsystem=messaging-activemq/server=default/jms-queue=gps_coordinates:add(entries=["java:/jms/queue/GPS"])
{"outcome" => "success"}

The same thing can be done through Maven. Simply add this snippet to your pom.xml:

<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-maven-plugin</artifactId>
  <version>1.0.2.Final</version>
  <executions>
    <execution>
      <id>add-resources</id>
      <phase>install</phase>
      <goals>
        <goal>add-resource</goal>
      </goals>
      <configuration>
        <resources>
          <resource>
            <address>subsystem=messaging-activemq,server=default,jms-queue=gps_coordinates</address>
            <properties>
              <durable>true</durable>
              <entries>!!["gps_coordinates", "java:/jms/queue/GPS"]</entries>
            </properties>
          </resource>
        </resources>
      </configuration>
    </execution>
    <execution>
      <id>del-resources</id>
      <phase>clean</phase>
      <goals>
        <goal>undeploy</goal>
      </goals>
      <configuration>
        <afterDeployment>
          <commands>
            <command>/subsystem=messaging-activemq/server=default/jms-queue=gps_coordinates:remove</command>
          </commands>
        </afterDeployment>
      </configuration>
    </execution>
  </executions>
</plugin>

The Maven WildFly plugin lets you do admin operations in WildFly using the same custom protocol used by the command line. Two executions are configured:

add-resources: It hooks the install Maven phase and adds the queue, passing the name, JNDI, and durable parameters seen in the previous paragraph.
del-resources: It hooks the clean Maven phase and removes the chosen queue by name.

Create the queue through an Arquillian test case

Or we can add and remove the queue through an Arquillian test case:

@RunWith(Arquillian.class)
@ServerSetup(MessagingResourcesSetupTask.class)
public class MessageTestCase {
    ...
    private static final String QUEUE_NAME = "gps_coordinates";
    private static final String QUEUE_LOOKUP = "java:/jms/queue/GPS";

    static class MessagingResourcesSetupTask implements ServerSetupTask {
        @Override
        public void setup(ManagementClient managementClient, String containerId) throws Exception {
            getInstance(managementClient.getControllerClient()).createJmsQueue(QUEUE_NAME, QUEUE_LOOKUP);
        }

        @Override
        public void tearDown(ManagementClient managementClient, String containerId) throws Exception {
            getInstance(managementClient.getControllerClient()).removeJmsQueue(QUEUE_NAME);
        }
    }
    ...
}

The Arquillian org.jboss.as.arquillian.api.ServerSetup annotation lets you use an external setup manager to install or remove new components inside WildFly. In this case, we are installing the queue declared with the two variables QUEUE_NAME and QUEUE_LOOKUP. When the test ends, the tearDown method will automatically be started and will remove the installed queue. To use Arquillian, it's important to add the WildFly testsuite dependency to your pom.xml project:

...
<dependencies>
  <dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-testsuite-shared</artifactId>
    <version>10.1.0.Final</version>
    <scope>test</scope>
  </dependency>
</dependencies>
...
Going into standalone-full.xml, we will find the created queue as:

<subsystem >
  <server name="default">
    ...
    <jms-queue name="gps_coordinates" entries="java:/jms/queue/GPS"/>
    ...
  </server>
</subsystem>

JMS is available in the standalone-full configuration. By default, WildFly supports four standalone configurations. They can be found in the standalone/configuration directory:

standalone.xml: It supports all components except messaging and CORBA/IIOP
standalone-full.xml: It supports all components
standalone-ha.xml: It supports all components except messaging and CORBA/IIOP, with clustering enabled
standalone-full-ha.xml: It supports all components, with clustering enabled

To start WildFly with the chosen configuration, simply add a -c flag with the configuration to the standalone.sh script. Here is a sample that starts the standalone full configuration:

./standalone.sh -c standalone-full.xml

Create the Java client for the queue

See now how to create a client to send a message to the queue. JMS 2.0 greatly simplifies the creation of clients. Here is a sample of a client inside a stateless Enterprise Java Bean (EJB):

@Stateless
public class MessageQueueSender {

    @Inject
    private JMSContext context;

    @Resource(mappedName = "java:/jms/queue/GPS")
    private Queue queue;

    public void sendMessage(String message) {
        context.createProducer().send(queue, message);
    }
}

The javax.jms.JMSContext is injectable from any EE component. We will see the JMS context in detail in the next paragraph, The JMS Context. The queue is represented in JMS by the javax.jms.Queue class. It can be injected as a JNDI resource through the @Resource annotation. The JMS context, through the createProducer method, creates a producer represented by the javax.jms.JMSProducer class, which is used to send the messages. We can now create a client injecting the stateless bean and sending the string message hello!:

...
@EJB
private MessageQueueSender messageQueueSender;
...
messageQueueSender.sendMessage("hello!");
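The article only shows the sending side. For completeness, here is a hedged sketch of what a consumer for the same queue could look like as a message-driven bean (MDB); this bean is not part of the original example, and the class name is an assumption:

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "java:/jms/queue/GPS"),
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class GpsCoordinatesListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // the producer sent a plain string, so read the body as String
            String coordinates = message.getBody(String.class);
            System.out.println("Received coordinates: " + coordinates);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}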
Summary

In this article, we have seen how to implement the Java Message Service in a queue channel using the web console, the Command Line Interface, the Maven WildFly plugin, and Arquillian test cases, and how to create Java clients for the queue.

Resources for Article:

Further resources on this subject:

WildFly – the Basics [article]
WebSockets in Wildfly [article]
Creating Java EE Applications [article]
Rapid Application Development with Django, the Openduty story

Bálint Csergő
01 Aug 2016
5 min read
Openduty is an open source incident escalation tool, which is something like Pagerduty but free and much simpler. It was born during a hackathon at Ustream back in 2014. The project received a lot of attention in the devops community, and was also featured in Devops Weekly and Pycoders Weekly. It is listed at Full Stack Python as an example Django project. This article is going to include some design decisions we made during the hackathon, and detail some of the main components of the Openduty system.

Design

When we started the project, we already knew what we wanted to end up with:

We had to work quickly; it was a hackathon after all
An API similar to Pagerduty
Ability to send notifications asynchronously
A nice calendar to organize on-call schedules (can't hurt anyone, right?)
Tokens for authorizing notifiers

So we chose the corresponding components to reach our goal.

Get the job done quickly

If you have to develop apps rapidly in Python, Django is the framework you choose. It's a bit heavyweight, but hey, it gives you everything you need and sometimes even more. Don't get me wrong; I'm a big fan of Flask also, but it can be a bit fiddly to assemble everything by hand at the start. Flask may pay off later, and you may win on a lower amount of dependencies, but we only had 24 hours, so we went with Django.

An API

When it comes to Django and REST APIs, one of the go-to solutions is the Django REST Framework. It has all the nuts and bolts you'll need when you're assembling an API, like serializers, authentication, and permissions. It can even give you the possibility to make all your API calls self-describing. Let me show you how serializers work in the REST Framework.

class OnCallSerializer(serializers.Serializer):
    person = serializers.CharField()
    email = serializers.EmailField()
    start = serializers.DateTimeField()
    end = serializers.DateTimeField()

The code above represents a person who is on-call on the API. As you can see, it is pretty simple; you just have to define the fields. It even does the validation for you, since you have to give a type to every field. But believe me, it's capable of more good things, like generating a serializer from your Django model:

class SchedulePolicySerializer(serializers.HyperlinkedModelSerializer):
    rules = serializers.RelatedField(many=True, read_only=True)

    class Meta:
        model = SchedulePolicy
        fields = ('name', 'repeat_times', 'rules')

This example shows how you can customize a ModelSerializer, make fields read-only, and only accept given fields from an API call.
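To show where such a serializer gets plugged in, here is a small, hypothetical sketch of using OnCallSerializer inside a Django REST Framework view; the view itself is not part of Openduty's code shown here:

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

class OnCallView(APIView):
    def post(self, request):
        # validate the incoming payload against the declared fields
        serializer = OnCallSerializer(data=request.data)
        if serializer.is_valid():
            return Response(serializer.validated_data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)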
Async Task Execution

When you have tasks that are long-running, such as generating huge reports, resizing images, or even transcoding some media, it is a common practice to move their actual execution out of your web app into a separate layer. This decreases the load on the web servers, helps avoid long or even timed-out requests, and just makes your app more resilient and scalable. In the Python world, the go-to solution for asynchronous task execution is called Celery. In Openduty, we use Celery heavily to send notifications asynchronously and also to delay the execution of any given notification task by the delay defined in the service settings. Defining a task is this simple:

@app.task(ignore_result=True)
def send_notifications(notification_id):
    try:
        notification = ScheduledNotification.objects.get(id=notification_id)
        if notification.notifier == UserNotificationMethod.METHOD_XMPP:
            notifier = XmppNotifier(settings.XMPP_SETTINGS)
        # choosing notifier removed from example code snippet
        notifier.notify(notification)
        # logging task result removed from example snippet
    except Exception:
        raise

And calling an already defined task is almost as simple as calling any regular function:

send_notifications.apply_async((notification.id,), eta=notification.send_at)

This means exactly what you think: send the notification with the ID notification.id at notification.send_at. But how do these things get executed? Under the hood, Celery wraps your decorated functions so that when you call them, they get enqueued instead of being executed directly. When the Celery worker detects that there is a task to be executed, it simply takes it from the queue and executes it asynchronously.

Calendar

We use django-scheduler for the awesome-looking calendar in Openduty. It is a pretty good project generally, supports recurring events, and provides you with a UI for your calendar, so you won't even have to fiddle with that.

Tokens and Auth

Service token implementation is a simple thing. You want them to be unique, and what else would you choose if not a UUID? There is a nice plugin for Django models used to handle UUID fields, called django-uuidfield. It just does what it says: adding UUIDField support to your models. User authentication is a bit more interesting; we currently support plain Django users, and you can use LDAP as your user provider.

Summary

This was just a short summary of the design decisions made when we coded Openduty. I also demonstrated the power of the components through some relevant snippets. If you are on a short deadline, consider using Django and its extensions. There is a good chance that somebody has already done what you need to do, or something similar, which can always be adapted to your needs thanks to the awesome power of the open source community.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.
How to Create a New JSF Project

Packt
20 Jun 2011
17 min read
Java EE 6 Development with NetBeans 7

Develop professional enterprise Java EE applications quickly and easily with this popular IDE

Introduction to JavaServer Faces

Before JSF existed, most Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available often resulted in "analysis paralysis"; that is, developers often spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE specification resulted in having a standard web application framework available in any Java EE compliant application server.

We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all. However, a lot of organizations consider JSF the "safe" choice since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.

Strictly speaking, JSF is not a web application framework per se, but a component framework. In theory, JSF can be used to write applications that are not web-based; however, in practice JSF is almost always used for this purpose. In addition to being the standard Java EE component framework, one benefit of JSF is that it provides good support for tools vendors, allowing tools such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components.

Developing our first JSF application

From an application developer's point of view, a JSF application consists of a series of XHTML pages containing custom JSF tags, one or more JSF managed beans, and an optional configuration file named faces-config.xml. faces-config.xml used to be required in JSF 1.x; however, in JSF 2.0, some conventions were introduced that reduce the need for configuration. Additionally, a lot of JSF configuration can be specified using annotations, reducing, and in some cases eliminating, the need for this XML configuration file.

Creating a new JSF project

To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next>, we need to enter a project name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the server, Java EE version, and context path of our application. In our example we will simply pick the default values. On the next page of the new project wizard, we can select what frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework.

When clicking on Finish, the wizard generates a skeleton JSF project for us, consisting of a single facelet file called index.xhtml and a web.xml configuration file. web.xml is the standard, optional configuration file needed for Java web applications; this file became optional in version 3.0 of the Servlet API, which was introduced with Java EE 6. In many cases, web.xml is not needed anymore, since most of the configuration options can now be specified via annotations.
For JSF applications, however, it is a good idea to add one, since it allows us to specify the JSF project stage.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>Development</param-value>
    </context-param>
    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>/faces/*</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>
            30
        </session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>faces/index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>

As we can see, NetBeans automatically sets the JSF project stage to Development. Setting the project stage to Development configures JSF to provide additional debugging help not present in other stages. For example, one common problem when developing a page is that while a page is being developed, validation for one or more of the fields on the page fails, but the developer has not added an <h:message> or <h:messages> tag to the page. When this happens and the form is submitted, the page seems to do nothing, or page navigation doesn't seem to be working. When setting the project stage to Development, these validation errors will automatically be added to the page, without the developer having to explicitly add one of these tags to the page (we should, of course, add the tags before releasing our code to production, since our users will not see the automatically generated validation errors).

The following are the valid values for the javax.faces.PROJECT_STAGE context parameter for the faces servlet:

Development
Production
SystemTest
UnitTest

The Development project stage adds additional debugging information to ease development. The Production project stage focuses on performance. The other two valid values for the project stage (SystemTest and UnitTest) allow us to implement our own custom behavior for these two phases. The javax.faces.application.Application class has a getProjectStage() method that allows us to obtain the current project stage. Based on the value of this method, we can implement the code that will only be executed in the appropriate stage. The following code snippet illustrates this:

public void someMethod() {
    FacesContext facesContext = FacesContext.getCurrentInstance();
    Application application = facesContext.getApplication();
    ProjectStage projectStage = application.getProjectStage();

    if (projectStage.equals(ProjectStage.Development)) {
        //do development stuff
    } else if (projectStage.equals(ProjectStage.Production)) {
        //do production stuff
    } else if (projectStage.equals(ProjectStage.SystemTest)) {
        //do system test stuff
    } else if (projectStage.equals(ProjectStage.UnitTest)) {
        //do unit test stuff
    }
}

As illustrated in the snippet above, we can implement the code to be executed in any valid project stage, based on the return value of the getProjectStage() method of the Application class. When creating a Java Web project using JSF, a facelet is automatically generated.
The generated facelet file looks like this:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html">
    <h:head>
        <title>Facelet Title</title>
    </h:head>
    <h:body>
        Hello from Facelets
    </h:body>
</html>

As we can see, a facelet is nothing but an XHTML file using some facelets-specific XML namespaces. In the automatically generated page above, the following namespace definition allows us to use the "h" (for HTML) JSF component library:

xmlns:h="http://java.sun.com/jsf/html"

The above namespace declaration allows us to use JSF-specific tags such as <h:head> and <h:body>, which are a drop-in replacement for the standard HTML/XHTML <head> and <body> tags, respectively. The application generated by the new project wizard is a simple but complete JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed, and the default system browser opens, displaying our application's default page.

Modifying our page to capture user data

The generated application, of course, is nothing but a starting point for us to create a new application. We will now modify the generated index.xhtml file to collect some data from the user. The first thing we need to do is add an <h:form> tag to our page. The <h:form> tag is equivalent to the <form> tag in standard HTML pages. After typing the first few characters of the <h:form> tag into the page and hitting Ctrl+Space, we can take advantage of NetBeans' excellent code completion. After adding the <h:form> tag and a number of additional JSF tags, our page now looks like this:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">
    <h:head>
        <title>Registration</title>
        <h:outputStylesheet library="css" name="styles.css"/>
    </h:head>
    <h:body>
        <h3>Registration Page</h3>
        <h:form>
            <h:panelGrid columns="3"
                         columnClasses="rightalign,leftalign,leftalign">
                <h:outputLabel value="Salutation: " for="salutation"/>
                <h:selectOneMenu id="salutation" label="Salutation"
                                 value="#{registrationBean.salutation}">
                    <f:selectItem itemLabel="" itemValue=""/>
                    <f:selectItem itemLabel="Mr." itemValue="MR"/>
                    <f:selectItem itemLabel="Mrs." itemValue="MRS"/>
                    <f:selectItem itemLabel="Miss" itemValue="MISS"/>
                    <f:selectItem itemLabel="Ms" itemValue="MS"/>
                    <f:selectItem itemLabel="Dr." itemValue="DR"/>
                </h:selectOneMenu>
                <h:message for="salutation"/>
                <h:outputLabel value="First Name:" for="firstName"/>
                <h:inputText id="firstName" label="First Name"
                             required="true"
                             value="#{registrationBean.firstName}"/>
                <h:message for="firstName"/>
                <h:outputLabel value="Last Name:" for="lastName"/>
                <h:inputText id="lastName" label="Last Name"
                             required="true"
                             value="#{registrationBean.lastName}"/>
                <h:message for="lastName"/>
                <h:outputLabel for="age" value="Age:"/>
                <h:inputText id="age" label="Age" size="2"
                             value="#{registrationBean.age}"/>
                <h:message for="age"/>
                <h:outputLabel value="Email Address:" for="email"/>
                <h:inputText id="email" label="Email Address"
                             required="true"
                             value="#{registrationBean.email}">
                </h:inputText>
                <h:message for="email"/>
                <h:panelGroup/>
                <h:commandButton id="register" value="Register"
                                 action="confirmation"/>
            </h:panelGrid>
        </h:form>
    </h:body>
</html>

The following screenshot illustrates how our page will be rendered at runtime:

All JSF input fields must be inside an <h:form> tag. The <h:panelGrid> tag helps us to easily lay out JSF tags on our page. It can be thought of as a grid where other JSF tags will be placed. The columns attribute of the <h:panelGrid> tag indicates how many columns the grid will have; each JSF component inside the <h:panelGrid> component will be placed in an individual cell of the grid. When the number of components matching the value of the columns attribute (three in our example) has been placed inside <h:panelGrid>, a new row is automatically started. The following table illustrates how tags will be laid out inside an <h:panelGrid> tag:

Each row in our <h:panelGrid> consists of an <h:outputLabel> tag, an input field, and an <h:message> tag. The columnClasses attribute of <h:panelGrid> allows us to assign CSS styles to each column inside the panel grid; its value must consist of a comma-separated list of CSS styles (defined in a CSS stylesheet). The first style will be applied to the first column, the second style will be applied to the second column, the third style will be applied to the third column, and so on. Had our panel grid had more than three columns, then the fourth column would have been styled using the first style in the columnClasses attribute, the fifth column would have been styled using the second style in the columnClasses attribute, and so on. If we wish to style rows in an <h:panelGrid>, we can do so with its rowClasses attribute, which works for rows the same way that columnClasses works for columns.

Notice the <h:outputStylesheet> tag inside <h:head> near the top of the page; this is a new tag that was introduced in JSF 2.0. One new feature that JSF 2.0 brings to the table is standard resource directories. Resources such as CSS stylesheets, JavaScript files, images, and so on can be placed under a top-level directory named resources, and JSF tags will have access to those resources automatically. In our NetBeans project, we need to place the resources directory under the Web Pages folder. We then need to create a subdirectory to hold our CSS stylesheet (by convention, this directory should be named css), then we place our CSS stylesheet(s) in this subdirectory. The value of the library attribute in <h:outputStylesheet> must match the directory where our CSS file is located, and the value of its name attribute must match the CSS file name. In addition to CSS files, we should place any JavaScript files in a subdirectory called javascript under the resources directory.
The file can then be accessed by the <h:outputScript> tag using "javascript" as the value of its library attribute and the file name as the value of its name attribute. Similarly, images should be placed in a directory called images under the resources directory. These images can then be accessed by the JSF <h:graphicImage> tag, where the value of its library attribute would be "images" and the value of its name attribute would be the corresponding file name.

Now that we have discussed how to lay out elements on the page and how to access resources, let's focus our attention on the input and output elements on the page. The <h:outputLabel> tag generates a label for an input field in the form; the value of its for attribute must match the value of the id attribute of the corresponding input field. <h:message> generates an error message for an input field; the value of its for attribute must match the value of the id attribute of the corresponding input field.

The first row in our grid contains an <h:selectOneMenu> tag. This tag generates an HTML <select> tag on the rendered page. Every JSF tag has an id attribute; the value for this attribute must be a string containing a unique identifier for the tag. If we don't specify a value for this attribute, one will be generated automatically. It is a good idea to explicitly state the ID of every component, since this ID is used in runtime error messages. Affected components are a lot easier to identify if we explicitly set their IDs. When using <h:outputLabel> tags to generate labels for input fields, or when using <h:message> tags to generate validation errors, we need to explicitly set the value of the id attribute, since we need to specify it as the value of the for attribute of the corresponding <h:outputLabel> and <h:message> tags.

Every JSF input tag has a label attribute. This attribute is used to generate validation error messages on the rendered page. If we don't specify a value for the label attribute, then the field will be identified in the error message by its ID. Each JSF input field has a value attribute; in the case of <h:selectOneMenu>, this attribute indicates which of the options in the rendered <select> tag will be selected. The value of this attribute must match the value of the itemValue attribute of one of the nested <f:selectItem> tags. The value of this attribute is usually a value binding expression, which means that the value is read at runtime from a JSF managed bean. In our example, the value binding expression #{registrationBean.salutation} is used. What will happen is that at runtime, JSF will look for a managed bean named registrationBean and look for an attribute named salutation on this bean; the getter method for this attribute will be invoked, and its return value will be used to determine the selected value of the rendered HTML <select> tag.

Nested inside the <h:selectOneMenu> tag there are a number of <f:selectItem> tags. These tags generate HTML <option> tags inside the HTML <select> tag generated by <h:selectOneMenu>. The value of the itemLabel attribute is the value that the user will see, while the value of the itemValue attribute will be the value that will be sent to the server when the form is submitted.

All other rows in our grid contain <h:inputText> tags; this tag generates an HTML input field of type text, which accepts a single line of typed text as input. We explicitly set the id attribute of all of our <h:inputText> fields; this allows us to refer to them from the corresponding <h:outputLabel> and <h:message> fields.
We also set the label attribute for all of our <h:inputText> tags; this results in more user-friendly error messages. Some of our <h:inputText> fields require a value; these fields have their required attribute set to true. Each JSF input field has a required attribute; if we need to require the user to enter a value for the field, then we need to set this attribute to true. This attribute is optional; if we don't explicitly set a value for it, then it defaults to false.

In the last row of our grid, we added an empty <h:panelGroup> tag. The purpose of this tag is to allow adding several tags into a single cell of an <h:panelGrid>. Any tags placed inside this tag are placed inside the same cell of the grid where <h:panelGroup> is placed. In this particular case, all we want to do is to have an "empty" cell in the grid so that the next tag, <h:commandButton>, is aligned with the input fields in the rendered page.

<h:commandButton> is used to submit a form to the server. The value of its value attribute is used to generate the text of the rendered button. The value of its action attribute is used to determine what page to display after the button is pressed. In our example, we are using static navigation. When using JSF static navigation, the value of the action attribute of a command button is hard-coded in the markup. When using static navigation, the value of the action attribute of <h:commandButton> corresponds to the name of the page we want to navigate to, minus its .xhtml extension. In our example, when the user clicks on the button, we want to navigate to a file named confirmation.xhtml; therefore, we used a value of "confirmation" for its action attribute.

An alternative to static navigation is dynamic navigation. When using dynamic navigation, the value of the action attribute of the command button is a value binding expression resolving to a method returning a String in a managed bean. The method may then return different values based on certain conditions. Navigation would then proceed to a different page depending on the value of the method. As long as it returns a String, the managed bean method executed when using dynamic navigation can contain any logic inside it, and is frequently used to save data in a managed bean into a database. When using dynamic navigation, the return value of the method executed when clicking the button must match the name of the page we want to navigate to (again, minus the file extension). In earlier versions of JSF, it was necessary to specify navigation rules in faces-config.xml; with the conventions introduced in the previous paragraphs, this is no longer necessary.
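The page above references a managed bean named registrationBean that is not shown in this excerpt. As a hedged sketch of what such a bean might look like, including an action method that could be used for dynamic navigation, consider the following; the field names mirror the value binding expressions used on the page, but everything else is an assumption:

@ManagedBean(name = "registrationBean")
@RequestScoped
public class RegistrationBean {

    private String salutation;
    private String firstName;
    private String lastName;
    private Integer age;
    private String email;

    // with dynamic navigation, the button would use
    // action="#{registrationBean.register}" instead of action="confirmation"
    public String register() {
        // any logic (for example, saving the data) could go here
        return "confirmation"; // navigate to confirmation.xhtml
    }

    public String getSalutation() { return salutation; }
    public void setSalutation(String salutation) { this.salutation = salutation; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}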
Using Cloud Applications and Containers

Xavier Bruhiere
10 Nov 2015
7 min read
We can find a certain comfort while developing an application on our local computer. We debug logs in real time. We know the exact location of everything, for we probably started it by ourselves.

Make it work, make it right, make it fast - Kent Beck

Premature optimization is the root of all evil - Donald Knuth

So hey, we hack around until interesting results pop up (ok, that's a bit exaggerated). The point is, when hitting the production server, our code will sail a much different sea. And a much more hostile one. So, how do we connect to third-party resources? How do we get a clear picture of what is really happening under the hood? In this post we will try to answer those questions with existing tools. We won't discuss continuous integration or complex orchestration. Instead, we will focus on what it takes to wrap a typical program to make it run as a public service.

A sample application

Before diving into the real problem, we need some code to throw on remote servers. Our sample application below exposes a random key/value store over HTTP.

// app.js

// use redis for data storage
var Redis = require('ioredis');
// and express to expose a RESTful API
var express = require('express');
var app = express();

// connecting to redis server
var redis = new Redis({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: process.env.REDIS_PORT || 6379
});

// store random float at the given path
app.post('/:key', function (req, res) {
  var key = req.params.key;
  var value = Math.random();
  console.log('storing', value, 'at', key);
  res.json({set: redis.set(key, value)});
});

// retrieve the value at the given path
app.get('/:key', function (req, res) {
  console.log('fetching value at', req.params.key);
  redis.get(req.params.key)
    .then(function (result) { res.json({ result: result }); })
    .catch(function (err) { res.json({ result: err.message }); });
});

var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('Example app listening at http://%s:%s', host, port);
});

And we define the following package.json and Dockerfile.

{
  "name": "sample-app",
  "version": "0.1.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.12.4",
    "ioredis": "^1.3.6"
  },
  "devDependencies": {}
}

# Given a correct package.json, those two lines alone will properly install and run our code
FROM node:0.12-onbuild
# application's default port
EXPOSE 3000

A Dockerfile? Yeah, here is a first step toward cloud computation under control. Packing our code and its dependencies into a container will allow us to ship and launch the application with a few reproducible commands.

# download official redis image
docker pull redis
# cd to the root directory of the app and build the container
docker build -t article/sample .
# assuming we are logged in to hub.docker.com, upload the resulting image for future deployment
docker push article/sample

Enough for the preparation, time to actually run the code.

Service Discovery

The server code needs a connection to redis. We can't hardcode it because the host and port are likely to change under different deployments. Fortunately, The Twelve-Factor App provides us with an elegant solution.

The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code.

Indeed, this strategy integrates smoothly with an infrastructure composed of containers.
docker run --detach --name redis redis
# 7c5b7ff0b3f95e412fc7bee4677e1c5a22e9077d68ad19c48444d55d5f683f79

# fetch redis container virtual ip
export REDIS_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis)

# note: we don't specify REDIS_PORT as the redis container listens on the default port (6379)
docker run -it --rm --name sample --env REDIS_HOST=$REDIS_HOST article/sample
# > sample-app@0.1.0 start /usr/src/app
# > node app.js
# Example app listening at http://:::3000

In another terminal, we can check that everything is working as expected.

export SAMPLE_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' sample)
curl -X POST $SAMPLE_HOST:3000/test
# {"set":{"isFulfilled":false,"isRejected":false}}
curl -X GET $SAMPLE_HOST:3000/test
# {"result":"0.5807915225159377"}

We didn't specify any network information, but even so, the containers can communicate. This method is widely used, and projects like etcd or consul let us automate the whole process.
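As a hint of what that automation can look like, here is a hypothetical docker-compose.yml sketch expressing the same two-container setup declaratively, assuming Docker Compose is available; the service names are assumptions of this sketch, not part of the original setup:

# docker-compose.yml
sample:
  image: article/sample
  ports:
    - "3000:3000"
  links:
    - redis
  environment:
    # the link above makes the redis container reachable under this hostname
    REDIS_HOST: redis
redis:
  image: redis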
Monitoring

Performance can be a critical consideration for end-user experience or infrastructure costs. We should be able to identify bottlenecks or abnormal activities, and once again, we will take advantage of containers and open source projects. Without modifying the running server, let's launch three new components to build a generic monitoring infrastructure.

InfluxDB is a fast time series database where we will store container metrics. Since we properly defined the application into two single-purpose containers, it will give us an interesting overview of what's going on.

# default parameters
export INFLUXDB_PORT=8086
export INFLUXDB_USER=root
export INFLUXDB_PASS=root
export INFLUXDB_NAME=cadvisor

# Start database backend
docker run --detach --name influxdb \
  --publish 8083:8083 --publish $INFLUXDB_PORT:8086 \
  --expose 8090 --expose 8099 \
  --env PRE_CREATE_DB=$INFLUXDB_NAME \
  tutum/influxdb

export INFLUXDB_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' influxdb)

cAdvisor analyzes the resource usage and performance characteristics of running containers. The command flags will instruct it how to use the database above to store metrics.

docker run --detach --name cadvisor \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  google/cadvisor:latest \
  --storage_driver=influxdb \
  --storage_driver_user=$INFLUXDB_USER \
  --storage_driver_password=$INFLUXDB_PASS \
  --storage_driver_host=$INFLUXDB_HOST:$INFLUXDB_PORT \
  --log_dir=/

# A live dashboard is available at $CADVISOR_HOST:8080/containers
# We can also point the browser to $INFLUXDB_HOST:8083, with the credentials above, to inspect containers data.
# Query example:
# > list series
# > select time,memory_usage from stats where container_name='cadvisor' limit 1000
# More infos: https://github.com/google/cadvisor/blob/master/storage/influxdb/influxdb.go

Grafana is a feature-rich metrics dashboard and graph editor for Graphite, InfluxDB, and OpenTSDB. From its web interface, we will query the database and graph the metrics cAdvisor collected and stored.

docker run --detach --name grafana \
  -p 8000:80 \
  -e INFLUXDB_HOST=$INFLUXDB_HOST \
  -e INFLUXDB_PORT=$INFLUXDB_PORT \
  -e INFLUXDB_NAME=$INFLUXDB_NAME \
  -e INFLUXDB_USER=$INFLUXDB_USER \
  -e INFLUXDB_PASS=$INFLUXDB_PASS \
  -e INFLUXDB_IS_GRAFANADB=true \
  tutum/grafana

# Get login infos generated
docker logs grafana

Now we can head to localhost:8000 and build a custom dashboard to monitor the server. I won't repeat the comprehensive documentation, but here is a query example:

# note: cadvisor stores metrics in series named 'stats'
select difference(cpu_cumulative_usage) where container_name='cadvisor' group by time 60s

Grafana's autocompletion feature shows us what we can track: CPU, memory, and network usage, among other metrics. We all love screenshots and dashboards, so here is a final reward for our hard work.

Conclusion

Development best practices and a good understanding of powerful tools gave us a rigorous workflow to launch applications with confidence. To sum up:

Containers bundle code and requirements for flexible deployment and execution isolation.
Environment variables store third-party service information, giving developers a predictable and robust solution to read them.
InfluxDB + cAdvisor + Grafana form a complete monitoring solution, independent of the project implementation.

We fulfilled our expectations, but there's room for improvement. As mentioned, service discovery could be automated, but we also omitted how to manage logs. There are many discussions around this complex subject, and we can shortly expect new improvements in our toolbox.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin, and the guitar. You can reach him at @XavierBruhiere.
Building Your Application

Packt
10 Feb 2016
12 min read
"Measuring programming progress by lines of code is like measuring aircraft building progress by weight."                                                                --Bill Gates In this article, by Tarun Arora, the author of the book Microsoft Team Foundation Server 2015 Cookbook, provides you information about: Configuring TFBuild Agent, Pool, and Queues Setting up a TFBuild Agent using an unattended installation (For more resources related to this topic, see here.) As a developer, compiling code and running unit tests gives you an assurance that your code changes haven't had an impact on the existing codebase. Integrating your code changes into the source control repository enables other users to validate their changes with yours. As a best practice, Teams integrate changes into the shared repository several times a day to reduce the risk of introducing breaking changes or worse, overwriting each other's. Continuous integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is verified by an automated build, allowing Teams to detect problems early. The automated build that runs as part of the CI process is often referred to as the CI build. There isn't a clear definition of what the CI build should do, but at the very minimum, it is expected to compile code and run unit tests. Running the CI build on a non-developer remote workspace helps identify the dependencies that may otherwise go unnoticed into the release process. We can talk endlessly about the benefits of CI; the key here is that it enables you to have potentially deployable software at all times. Deployable software is the most tangible asset to customers. Moving from concept to application, in this article, you'll learn how to leverage the build tooling in TFS to set up a quality-focused CI process. But first, let's have a little introduction to the build system in TFS. The following image illustrates the three generations of build systems in TFS: TFS has gone through three generations of build systems. The very first was MSBuild using XML for configuration; the next one was XAML using Windows Workflow Foundation for configuration, and now, there's TFBuild using JSON for configuration. The XAML-based build system will continue to be supported in TFS 2015. No automated migration path is available from XAML build to TFBuild. This is generally because of the difference in the architecture between the two build systems. The new build system in TFS is called Team Foundation Build (TFBuild). It is an extensible task-based execution system with a rich web interface that allows authoring, queuing, and monitoring builds. TFBuild is fully cross platform with the underlying build agents that are capable of running natively on both Windows and non-Windows platforms. TFBuild provides out-of-the-box integration with Centralized Version Control such as TFVC and Distributed Version Controls such as Git and GitHub. TFBuild supports building .NET, Java, Android, and iOS applications. All the recipes in this article are based on TFBuild. TFBuild is a task orchestrator that allows you to run any build engine, such as Ant, CMake, Gradle, Gulp, Grunt, Maven, MSBuild, Visual Studio, Xamarin, XCode, and so on. TFBuild supports work item integration, publishing drops, and publishing test execution results into the TFS that is independent of the build engine that you choose. The build agents are xCopyable and do not require any installation. 
The agents are auto-updating in nature; there's no need to update every agent in your infrastructure:

TFBuild offers a rich web-based interface. It does not require Visual Studio to author or modify a build definition. From simple to complex, all build definitions can easily be created in the web portal. The web interface is accessible from any device and any platform:

The build definition can be authored from the web portal directly.

A build definition is a collection of tasks. A task is simply a build step. A build definition can be composed by dragging and dropping tasks. Each task supports the Enabled, Continue on error, and Always run flags, making it easier to manage build definitions as the task list grows:

The build system supports invoking PowerShell, batch, command-line, and shell scripts. All out-of-the-box tasks are open source. If a task does not satisfy your requirements, you can download the task from GitHub at https://github.com/Microsoft/vso-agent-tasks and customize it. If you can't find a task, you can easily create one. You'll learn more about custom tasks in this article.

Changes to build definitions can be saved as drafts. Build definitions maintain a history of all changes in the History tab. A side-by-side comparison of the changes is also possible. Comments entered when changing the build definition show up in the change history:

Build definitions can be saved as templates. This helps standardize the use of certain tasks across new build definitions:

An existing build definition can be saved as a template.

Multiple triggers can be set for the same build, including CI triggers and multiple scheduled triggers:

Rule-based retention policies support the setting up of multiple rules. Retention can be specified by "days" or "number" of the builds:

The build output logs are displayed in the web portal in real time. The build log can be accessed from the console even after the build has completed:

The build reports have been revamped to offer more visibility into the build execution; among other things, the test results can now be accessed directly from the web interface. The .trx file does not need to be downloaded into Visual Studio to view the test results:

The old build system had restrictions of one Team Project Collection per build controller and one controller per build machine. TFBuild removes this restriction and supports the reuse of queues across multiple Team Project Collections. The following image illustrates the architecture of the new build system:

In the preceding diagram, we observe the following:

Multiple agents can be configured on one machine
Agents from across different machines can be grouped into a pool
Each pool can have only one queue
One queue can be used across multiple Team Project Collections

To demonstrate the capabilities of TFBuild, we'll use the FabrikamTFVC and FabrikamGit Team Projects.

Configuring TFBuild Agent, Pool, and Queues

In this recipe, you'll learn how to configure agents and create pools and queues. You'll also learn how a queue can be used across multiple Team Project Collections.

Getting ready

Scenario: At Fabrikam, the FabrikamTFVC and FabrikamGit Team Projects need their own build queues. The FabrikamTFVC Team's build process can be executed on a Windows server. The FabrikamGit Team's build process needs both Windows and OS X. The Teams want to set up three build agents on a Windows server and one build agent on an OS X machine.
The Teams want to group two Windows agents into a Windows pool for the FabrikamTFVC Team, and group one Windows and one Mac agent into another pool for the FabrikamGit Team:

Permission: To configure a build agent, you should be in the Build Administrators group.

The prerequisites for setting up the build agent on a Windows-based machine are as follows:

The build agent should have a supported version of Windows. The list of supported versions is available at https://msdn.microsoft.com/en-us/Library/vs/alm/TFS/administer/requirements#Operatingsystems.
The build agent should have Visual Studio 2013 or 2015.
The build agent should have PowerShell 3 or a newer version.

A build agent is configured for your TFS as part of the server installation process if you leave the Configure the build service to start automatically option selected:

For the purposes of this recipe, we'll configure the agents from scratch. Delete the default pool or any other pool you have by navigating to the Agent pools option in the TFS Administration Console at http://tfs2015:8080/tfs/_admin/_AgentPool:

How to do it

Log into the Windows machine on which you want to set up the agents.

Navigate to the Agent pools in the TFS Administration Console by browsing to http://tfs2015:8080/tfs/_admin/_AgentPool. Click on New Pool, enter the pool name as Pool 1, and uncheck Auto-Provision Queue in Project Collections:

Click on the Download agent icon. Copy the downloaded folder into E: and unzip it into E:\Win-A1. You can use any drive; however, it is recommended to use a non-operating-system drive:

Run the PowerShell console as an administrator and change the current path in PowerShell to the location of the agent, in this case E:\Win-A1. Call the ConfigureAgent.ps1 script in the PowerShell console and hit Enter. This will launch the Build Agent Configuration utility:

Enter the configuration details as illustrated in the following screenshot:

It is recommended to install the build agent as a service; however, you have the option to run the agent as an interactive process. This is great when you want to debug a build or want to temporarily use a machine as a build agent. The configuration process creates a JSON settings file; it also creates the working and diagnostics folders:

Refresh the Agent pools page in the TFS Administration Console. The newly configured agent shows up under Pool 1:

Repeat steps 2 to 5 to configure Win-A2 in Pool 1. Repeat steps 1 to 5 to configure Win-A3 in Pool 2. It is worth highlighting that each agent runs from its individual folder:

Now, log into the Mac machine and launch the terminal:

Install the agent installer globally by running the commands illustrated here. You will be required to enter the machine password to authorize the installation:

This will download the agent into the user profile, shown as follows:

The summary of actions performed when the agent is downloaded

Run the following command to install the agent installer globally for the user profile:

Running the following command will create a new directory called osx-A1 for the agent; create the agent in that directory:

The agent installer has been copied from the user profile into the agent directory, shown as follows:

Pass the illustrated parameters to configure the agent:

This completes the configuration of the xPlatform agent on the Mac. Refresh the Agent pools page in the TFS Administration Console to see the agent appear in Pool 2:

The build agent has been configured at the Team Foundation Server level.
In order to use the build agent for a Team Project Collection, a mapping between the build agent and the Team Project Collection needs to be established. This is done by creating queues. To configure queues, navigate to the Collection Administration Console by browsing to http://tfs2015:8080/tfs/DefaultCollection/_admin/_BuildQueue. From the Build tab, click on New queue; this dialog allows you to reference the pool as a queue. Map Pool 1 as Queue 1 and Pool 2 as Queue 2 as shown here. The TFBuild agents, pools, and queues are now ready to use. The green bar before the agent name and queue in the administration console indicates that the agent and queues are online.
How it works...
To test the setup, create a new build definition by navigating to the FabrikamTFVC Team Project Build hub by browsing to http://tfs2015:8080/tfs/DefaultCollection/FabrikamTFVC/_build. Click on the Add a new build definition icon. In the General tab, you'll see that the queues show up under the Queue dropdown menu. This confirms that the queues have been correctly configured and are available for selection in the build definition. Pools can be used across multiple Team Project Collections. As illustrated in the following screenshot, in Team Project Collection 2, clicking on New queue... shows that the existing pools are already mapped in the default collection.
Setting up a TFBuild Agent using an unattended installation
The new build framework allows the unattended setup of build agents by injecting a set of parameter values via script. This technique can be used to spin up new agents to be attached to an existing agent pool. In this recipe, you'll learn how to configure and unconfigure a build agent via script.
Getting ready
Scenario: The FabrikamTFVC Team wants the ability to install, configure, and unconfigure a build agent directly via script without having to perform this operation using the Team Portal. Permission: To configure a build agent, you should be in the Build Administrators Group. Download the build agent as discussed in the earlier recipe, Configuring TFBuild Agent, Pool, and Queues. Copy the folder to E:\Agent; the script refers to this Agent folder.
How to do it...
Launch PowerShell in elevated mode and execute the following command:
.\Agent\VsoAgent.exe /Configure /RunningAsService /ServerUrl:"http://tfs2015:8080/tfs" /WindowsServiceLogonAccount:svc_build /WindowsServiceLogonPassword:xxxxx /Name:WinA-10 /PoolName:"Pool 1" /WorkFolder:"E:\Agent\_work" /StartMode:Automatic
Replace the values of the username and password accordingly. Executing the script will result in the following output. The script installs an agent by the name WinA-10 as a Windows service running as svc_build. The agent is added to Pool 1. To unconfigure WinA-10, run the following command in an elevated PowerShell prompt:
.\Agent\VsoAgent.exe /Unconfigure "vsoagent.tfs2015.WinA-10"
To unconfigure, the script needs to be executed from outside the scope of the Agent folder; running the script from within the Agent folder scope will result in an error message.
How it works...
The new build agent natively allows configuration via script. A new capability called Personal Access Token (PAT) is due for release in future updates of TFS 2015. PAT allows you to generate a personal OAuth token for a specific scope; it replaces the need to key in passwords into configuration files.
Summary
In this article, we looked at configuring the TFBuild Agent, pools, and queues, and at setting up a TFBuild agent using an unattended installation.
Resources for Article: Further resources on this subject: Overview of Process Management in Microsoft Visio 2013 [article] Introduction to the Raspberry Pi's Architecture and Setup [article] Implementing Microsoft Dynamics AX [article]


Building Applications with Spring Data Redis

Packt
03 Dec 2012
9 min read
(For more resources related to this topic, see here.)
Designing a Redis data model
The most important rule of designing a Redis data model is this: Redis does not support ad hoc queries, and it does not support relations in the same way as relational databases do. Thus, designing a Redis data model is a totally different ballgame than designing the data model of a relational database. The basic guidelines of Redis data model design are given as follows: Instead of simply modeling the information stored in our data model, we also have to think about how we want to search for information in it. This often leads to a situation where we have to duplicate data in order to fulfill the requirements given to us. Don't be afraid to do this. We should not concentrate on normalizing our data model. Instead, we should combine the data that we need to handle as a unit into an aggregate. Since Redis does not support relations, we have to design and implement these relations by using the supported data structures. This means that we have to maintain these relations manually when they are changed. Because this might require a lot of effort and code, it could be wise to simply duplicate the information instead of using relations. It is always wise to spend a moment to verify that we are using the correct tool for the job. NoSQL Distilled, by Pramod J. Sadalage and Martin Fowler, contains explanations of different NoSQL databases and their use cases, and can be found at http://martinfowler.com/books/nosql.html. Redis supports multiple data structures. However, one question remains unanswered: which data structure should we use for our data? This question is addressed in the following list:
String: A string is a good choice for storing information that is already converted to a textual form. For instance, if we want to store HTML, JSON, or XML, a string should be our weapon of choice.
List: A list is a good choice if we will access it only near its start or end. This means that we should use it for representing queues or stacks.
Set: We should use a set if we need to get the size of a collection or check whether a certain item belongs to it. Also, if we want to represent relations, a set is a good choice (for example, "who are John's friends?").
Sorted set: Sorted sets should be used in the same situations as sets when the ordering of items is important to us.
Hash: A hash is a perfect data structure for representing complex objects.
Key components
Spring Data Redis provides certain components that are the cornerstones of each application that uses it. This section provides a brief introduction to the components that we will later use to implement our example applications.
Atomic counters
Atomic counters are for Redis what sequences are for relational databases. Atomic counters guarantee that the value received by a client is unique. This makes these counters a perfect tool for creating unique IDs for the data that is stored in Redis. At the moment, Spring Data Redis offers two atomic counters: RedisAtomicInteger and RedisAtomicLong. These classes provide atomic counter operations for integers and longs.
RedisTemplate
The RedisTemplate<K,V> class is the central component of Spring Data Redis. It provides methods that we can use to communicate with a Redis instance. This class requires that two type parameters be given during its instantiation: the type of the Redis key and the type of the Redis value.
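Before we look at the template's operations, here is a minimal, illustrative Java sketch of an ID generator built on RedisAtomicLong; the class and field names are our own inventions, and the connection factory is assumed to be configured elsewhere:
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.support.atomic.RedisAtomicLong;

public class ContactIdGenerator {
    private final RedisAtomicLong counter;

    public ContactIdGenerator(RedisConnectionFactory connectionFactory) {
        // "contact" is the Redis key that backs this counter
        this.counter = new RedisAtomicLong("contact", connectionFactory);
    }

    public long nextId() {
        // incrementAndGet() is atomic, so concurrent callers each receive a unique value
        return counter.incrementAndGet();
    }
}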
Operations
The RedisTemplate class provides two kinds of operations that we can use to store, fetch, and remove data from our Redis instance: operations that require that the key and the value be given every time an operation is performed, and operations that are bound to a specific key that is given only once. The first kind is handy when we have to execute a single operation by using a key and a value; we should use the second approach when we have to perform multiple operations by using the same key. The methods that require that a key and value be given every time an operation is performed are described in the following list:
HashOperations<K,HK,HV> opsForHash(): This method returns the operations that are performed on hashes
ListOperations<K,V> opsForList(): This method returns the operations performed on lists
SetOperations<K,V> opsForSet(): This method returns the operations performed on sets
ValueOperations<K,V> opsForValue(): This method returns the operations performed on simple values
ZSetOperations<K,V> opsForZSet(): This method returns the operations performed on sorted sets
The methods of the RedisTemplate class that allow us to execute multiple operations by using the same key are described in the following list:
BoundHashOperations<K,HK,HV> boundHashOps(K key): This method returns hash operations that are bound to the key given as a parameter
BoundListOperations<K,V> boundListOps(K key): This method returns list operations bound to the key given as a parameter
BoundSetOperations<K,V> boundSetOps(K key): This method returns set operations that are bound to the given key
BoundValueOperations<K,V> boundValueOps(K key): This method returns operations performed on simple values that are bound to the given key
BoundZSetOperations<K,V> boundZSetOps(K key): This method returns operations performed on sorted sets that are bound to the key given as a parameter
The differences between these operations will become clear to us when we start building our example applications.
Serializers
Because the data is stored in Redis as bytes, we need a method for converting our data to bytes and vice versa. Spring Data Redis provides an interface called RedisSerializer<T>, which is used in the serialization process. This interface has one type parameter that describes the type of the serialized object. Spring Data Redis provides several implementations of this interface. These implementations are described in the following list:
GenericToStringSerializer<T>: Serializes strings to bytes and vice versa; uses the Spring ConversionService to transform objects to strings and vice versa.
JacksonJsonRedisSerializer<T>: Converts objects to JSON and vice versa.
JdkSerializationRedisSerializer: Provides Java-based serialization for objects.
OxmSerializer: Uses the Object/XML mapping support of Spring Framework 3.
StringRedisSerializer: Converts strings to bytes and vice versa.
We can customize the serialization process of the RedisTemplate class by using the described serializers. The RedisTemplate class provides flexible configuration options that can be used to set the serializers used to serialize keys, values, hash keys, hash values, and string values. The default serializer of the RedisTemplate class is JdkSerializationRedisSerializer. However, the string serializer is an exception to this rule: StringRedisSerializer is the serializer that is used by default to serialize string values.
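As a brief illustration of this customization, the following sketch (our own example, assuming a redisConnectionFactory() bean is available) configures a template that stores both keys and values as plain strings instead of JDK-serialized bytes:
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

RedisTemplate<String, String> template = new RedisTemplate<String, String>();
template.setConnectionFactory(redisConnectionFactory());
// Override the default JdkSerializationRedisSerializer for keys and values
template.setKeySerializer(new StringRedisSerializer());
template.setValueSerializer(new StringRedisSerializer());
// Required when the template is created outside a Spring container
template.afterPropertiesSet();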
Implementing a CRUD application
This section describes two different ways of implementing a CRUD application that is used to manage contact information. First, we will learn how we can implement a CRUD application by using the default serializers of the RedisTemplate class. Second, we will learn how we can use value serializers and implement a CRUD application that stores our data in JSON format. Both of these applications will also share the same domain model. This domain model consists of two classes: Contact and Address. We removed the JPA-specific annotations from them. We use these classes in our web layer as form objects, and they no longer have any methods other than getters and setters. The domain model is not the only thing that is shared by these examples. They also share the interface that declares the service methods for the Contact class. The source code of the ContactService interface is given as follows:
public interface ContactService {
    public Contact add(Contact added);
    public Contact deleteById(Long id) throws NotFoundException;
    public List<Contact> findAll();
    public Contact findById(Long id) throws NotFoundException;
    public Contact update(Contact updated) throws NotFoundException;
}
Both of these applications will communicate with the Redis instance by using the Jedis connector. Regardless of the approach used, we can implement a CRUD application with Spring Data Redis by following these steps: configure the application context, and implement the CRUD functions. Let's get started and find out how we can implement the CRUD functions for contact information.
Using default serializers
This subsection describes how we can implement a CRUD application by using the default serializers of the RedisTemplate class. This means that StringRedisSerializer is used to serialize string values, and JdkSerializationRedisSerializer serializes other objects.
Configuring the application context
We can configure the application context of our application by making the following changes to the ApplicationContext class: configuring the Redis template bean and configuring the Redis atomic long bean.
Configuring the Redis template bean
We can configure the Redis template bean by adding a redisTemplate() method to the ApplicationContext class and annotating this method with the @Bean annotation. We can implement this method by following these steps: create a new RedisTemplate object, set the used connection factory to the created RedisTemplate object, and return the created object. The source code of the redisTemplate() method is given as follows:
@Bean
public RedisTemplate redisTemplate() {
    RedisTemplate<String, String> redis = new RedisTemplate<String, String>();
    redis.setConnectionFactory(redisConnectionFactory());
    return redis;
}
Configuring the Redis atomic long bean
We start the configuration of the Redis atomic long bean by adding a method called redisAtomicLong() to the ApplicationContext class and annotating the method with the @Bean annotation. Our next task is to implement this method by following these steps: create a new RedisAtomicLong object, pass the name of the used Redis counter and the Redis connection factory as constructor parameters, and return the created object. The source code of the redisAtomicLong() method is given as follows:
@Bean
public RedisAtomicLong redisAtomicLong() {
    return new RedisAtomicLong("contact", redisConnectionFactory());
}
If we need to create IDs for instances of different classes, we can use the same Redis counter.
Thus, we have to configure only one Redis atomic long bean.
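To show how these beans come together, here is one possible, purely illustrative implementation of the ContactService.add() method that combines the atomic counter with bound hash operations; the key format "contact:<id>" and the Contact accessors are assumptions, not part of the original example:
public Contact add(Contact added) {
    // Get a unique ID from the Redis counter configured above
    long id = redisAtomicLong.incrementAndGet();
    added.setId(id);

    // Store the contact's fields in a Redis hash keyed by "contact:<id>"
    BoundHashOperations<String, String, String> hash =
            redisTemplate.boundHashOps("contact:" + id);
    hash.put("firstName", added.getFirstName());
    hash.put("lastName", added.getLastName());

    return added;
}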


How to add Unit Tests to a Sails Framework Application

Luis Lobo
26 Sep 2016
8 min read
There are different ways to implement unit tests for a Node.js application. Most of them use Mocha as the test framework and Chai as the assertion library, and some of them include Istanbul for code coverage. We will be using those tools, not going into deep detail on how to use them, but rather on how to successfully configure and implement them for a Sails project.
1) Creating a new application from scratch (if you don't have one already)
First of all, let’s create a Sails application from scratch. The Sails version in use for this article is 0.12.3. If you already have a Sails application, then you can continue to step 2. Issuing the following command creates the new application:
$ sails new sails-test-article
Once we create it, we will have the following file structure:
./sails-test-article
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   └── templates
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views
2) Create a basic test structure
We want a folder structure that contains all our tests. For now we will only add unit tests. In this project we want to test only services and controllers.
Add the necessary modules:
npm install --save-dev mocha chai istanbul supertest
Folder structure
Let's create the test folder structure that supports our tests:
mkdir -p test/fixtures test/helpers test/unit/controllers test/unit/services
After the creation of the folders, we will have this structure:
./sails-test-article
├── api [...]
├── test
│   ├── fixtures
│   ├── helpers
│   └── unit
│       ├── controllers
│       └── services
└── views
We now create a mocha.opts file inside the test folder. It contains mocha options, such as a timeout per test run, that will be passed by default to mocha every time it runs. One option per line, as described in mocha opts:
--require chai
--reporter spec
--recursive
--ui bdd
--globals sails
--timeout 5s
--slow 2000
Up to this point, we have all our tools set up. We can do a very basic test run:
mocha test
It prints out this:
0 passing (2ms)
Normally, Node.js applications define a test script in the package.json file. Edit it so that it now looks like this:
"scripts": {
  "debug": "node debug app.js",
  "start": "node app.js",
  "test": "mocha test"
}
We are ready for the next step.
3) Bootstrap file
The bootstrap.js file is the one that defines the environment that all tests use. Inside it, we define before and after events. In them, we are starting and stopping (or 'lifting' and 'lowering' in Sails language) our Sails application. Since Sails makes models, controllers, and services globally available at runtime, we need to start the application here.
var sails = require('sails');
var _ = require('lodash');
global.chai = require('chai');
global.should = chai.should();

before(function (done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(5000);
  sails.lift({
    log: { level: 'silent' },
    hooks: { grunt: false },
    models: {
      connection: 'unitTestConnection',
      migrate: 'drop'
    },
    connections: {
      unitTestConnection: {
        adapter: 'sails-disk'
      }
    }
  }, function (err, server) {
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function (done) {
  // here you can clear fixtures, etc.
  if (sails && _.isFunction(sails.lower)) {
    sails.lower(done);
  }
});
This file will be required in each of our tests. That way, each test can be run individually if needed, or as part of the whole suite.
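The before callback above is a natural place to load fixtures. As a purely hypothetical example (the file name and model are assumptions), a loader placed in test/fixtures could look like this:
// test/fixtures/posts.js -- a hypothetical fixture loader
module.exports = {
  // Creates a few Post records so that tests have known data to work with
  load: function (done) {
    Post.createEach([
      {title: 'first post', body: 'body one'},
      {title: 'second post', body: 'body two'}
    ]).exec(done);
  },
  // Removes all Post records between test runs
  clear: function (done) {
    Post.destroy({}).exec(done);
  }
};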
4) Services tests
We now add two models and one service to show how to test services. Create a Comment model in /api/models/Comment.js:
/**
 * Comment.js
 */
module.exports = {
  attributes: {
    comment: {type: 'string'},
    timestamp: {type: 'datetime'}
  }
};
Create a Post model in /api/models/Post.js:
/**
 * Post.js
 */
module.exports = {
  attributes: {
    title: {type: 'string'},
    body: {type: 'string'},
    timestamp: {type: 'datetime'},
    comments: {model: 'Comment'}
  }
};
Create a Post service in /api/services/PostService.js:
/**
 * PostService
 *
 * @description :: Service that handles posts
 */
module.exports = {
  getPostsWithComments: function () {
    return Post
      .find()
      .populate('comments');
  }
};
To test the Post service, we need to create a test for it in /test/unit/services/PostService.spec.js. In the case of services, we want to test business logic. So basically, you call your service methods and evaluate the results using an assertion library. In this case, we are using Chai's should.
/* global PostService */
// Here is where we init our 'sails' environment and application
require('../../bootstrap');

// Here we have our tests
describe('The PostService', function () {
  before(function (done) {
    Post.create({})
      .then(Post.create({})
        .then(Post.create({})
          .then(function () {
            done();
          })
        )
      );
  });

  it('should return all posts with their comments', function (done) {
    PostService
      .getPostsWithComments()
      .then(function (posts) {
        posts.should.be.an('array');
        posts.should.have.length(3);
        done();
      })
      .catch(done);
  });
});
We can now test our service by running:
npm test
The result should be similar to this one:
> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostService
    ✓ should return all posts with their comments

  1 passing (979ms)
5) Controllers tests
In the case of controllers, we want to validate that our requests are working, that they are returning the correct error codes, and the correct data. In this case, we make use of the SuperTest module, which provides HTTP assertions.
We now add a Post controller with this content in /api/controllers/PostController.js:
/**
 * PostController
 */
module.exports = {
  getPostsWithComments: function (req, res) {
    PostService.getPostsWithComments()
      .then(function (posts) {
        res.ok(posts);
      })
      .catch(res.negotiate);
  }
};
And now we create a Post controller test in /test/unit/controllers/PostController.spec.js:
// Here is where we init our 'sails' environment and application
var supertest = require('supertest');
require('../../bootstrap');

describe('The PostController', function () {
  var createdPostId = 0;

  it('should create a post', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .post('/post')
      .set('Accept', 'application/json')
      .send({"title": "a post", "body": "some body"})
      .expect('Content-Type', /json/)
      .expect(201)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('object');
          result.body.should.have.property('id');
          result.body.should.have.property('title', 'a post');
          result.body.should.have.property('body', 'some body');
          createdPostId = result.body.id;
          done();
        }
      });
  });

  it('should get posts with comments', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .get('/post/getPostsWithComments')
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('array');
          result.body.should.have.length(1);
          done();
        }
      });
  });

  it('should delete post created', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .delete('/post/' + createdPostId)
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          return done(err);
        } else {
          return done(null, result.text);
        }
      });
  });
});
After running the tests again:
npm test
We can see that now we have 4 tests:
> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)
6) Code coverage
Finally, we want to know if our code is being covered by our unit tests, with the help of Istanbul. To generate a report, we just need to run:
istanbul cover _mocha test
Once we run it, we will have a result similar to this one:
  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)
=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements   : 26.95% ( 45/167 )
Branches     : 3.28% ( 4/122 )
Functions    : 35.29% ( 6/17 )
Lines        : 26.95% ( 45/167 )
================================================================================
In this case, we can see that the percentages are not very nice. We don't have to worry much about these, since most of the "not covered" code is in /api/policies and /api/responses. You can check that result in a file that was created after istanbul ran, in ./coverage/lcov-report/index.html.
If you remove those folders and run it again, you will see the difference:
rm -rf api/policies api/responses
istanbul cover _mocha test
Now the result is much better: 100% coverage!
  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)
=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements   : 100% ( 24/24 )
Branches     : 100% ( 0/0 )
Functions    : 100% ( 4/4 )
Lines        : 100% ( 24/24 )
================================================================================
Now if you check the coverage report again, you will see a different picture. You can get the source code for each of the steps here. I hope you enjoyed the post!
References
Sails documentation on testing your code. This article follows recommendations from Sails author Mike McNeil and adds some extra material based on my own experience developing applications using the Sails Framework.
About the author
Luis Lobo Borobia is the CTO at FictionCity.NET, mentor and advisor, independent software engineer, consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

Exception Handling with Python

Packt
17 Aug 2016
10 min read
In this article by Ninad Sathaye, author of the book Learning Python Application Development, you will learn techniques to make an application more robust by handling exceptions. Specifically, we will cover the following topics: What are exceptions in Python? Controlling the program flow with the try…except clause. Dealing with common problems by handling exceptions. Creating and using custom exception classes. (For more resources related to this topic, see here.)
Exceptions
Before jumping straight into the code and fixing these issues, let's first understand what an exception is and what we mean by handling an exception.
What is an exception?
An exception is an object in Python. It gives us information about an error detected during the program execution. The errors noticed while debugging the application were unhandled exceptions, as we didn't see those coming. Later in the article, you will learn the techniques to handle these exceptions. The ValueError and IndexError exceptions seen in the earlier tracebacks are examples of built-in exception types in Python. In the following section, you will learn about some other built-in exceptions supported in Python.
Most common exceptions
Let's quickly review some of the most frequently encountered exceptions. The easiest way is to try running some buggy code and let it report the problem as an error traceback! Start your Python interpreter and write the following code: Here are a few more exceptions: As you can see, each line of the code throws an error traceback with an exception type (shown highlighted). These are a few of the built-in exceptions in Python. A comprehensive list of built-in exceptions can be found in the following documentation: https://docs.python.org/3/library/exceptions.html#bltin-exceptions
Python provides BaseException as the base class for all built-in exceptions. However, most of the built-in exceptions do not directly inherit BaseException. Instead, these are derived from a class called Exception that in turn inherits from BaseException. The built-in exceptions that deal with program exit (for example, SystemExit) are derived directly from BaseException. You can also create your own exception class as a subclass of Exception. You will learn about that later in this article.
Exception handling
So far, we saw how exceptions occur. Now, it is time to learn how to use the try…except clause to handle these exceptions. The following pseudocode shows a very simple example of the try…except clause: Let's review the preceding code snippet: First, the program tries to execute the code inside the try clause. During this execution, if something goes wrong (if an exception occurs), it jumps out of this try clause. The remaining code in the try block is not executed. It then looks for an appropriate exception handler in the except clause and executes it. The except clause used here is a universal one; it will catch all types of exceptions occurring within the try clause. Instead of having this "catch-all" handler, a better practice is to catch the errors that you anticipate and write exception handling code specific to those errors. For example, the code in the try clause might throw an AssertionError. Instead of using the universal except clause, you can write a specific exception handler, as follows: Here, we have an except clause that exclusively deals with AssertionError. What this also means is that any error other than AssertionError will slip through as an unhandled exception.
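The snippets in the screenshots follow this pattern; as a minimal, self-contained illustration (the failing assertion is our own example, not from the book's code), a specific handler looks like this:
try:
    assert 2 + 2 == 5, "arithmetic is broken"
except AssertionError as error:
    # Only AssertionError is handled here; any other exception
    # raised in the try clause would propagate as unhandled.
    print("Handled:", error)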
To handle those other errors as well, we need to define multiple except clauses with different exception handlers. However, at any point of time, only one exception handler will be called. This can be better explained with an example. Let's take a look at the following code snippet: The try block calls solve_something(). This function accepts a number as user input and makes an assertion that the number is greater than zero. If the assertion fails, it jumps directly to the handler, except AssertionError. In the other scenario, with a > 0, the rest of the code in solve_something() is executed. You will notice that the variable x is not defined, which results in NameError. This exception is handled by the other exception clause, except NameError. Likewise, you can define specific exception handlers for anticipated errors.
Raising and re-raising an exception
The raise keyword in Python is used to force an exception to occur. Put another way, it raises an exception. The syntax is simple; just open the Python interpreter and type:
>>> raise AssertionError("some error message")
This produces the following error traceback:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError: some error message
In some situations, we need to re-raise an exception. To understand this concept better, here is a trivial scenario. Suppose, in the try clause, you have an expression that divides a number by zero. In ordinary arithmetic, this expression has no meaning. It's a bug! This causes the program to raise an exception called ZeroDivisionError. If there is no exception handling code, the program will just print the error message and terminate. What if you wish to write this error to some log file and then terminate the program? Here, you can use an except clause to log the error first. Then, use the raise keyword without any arguments to re-raise the exception. The exception will be propagated upwards in the stack. In this example, it terminates the program. The exception can be re-raised with the raise keyword without any arguments. Here is an example that shows how to re-raise an exception: As can be seen, a division by zero exception is raised while solving the a/b expression. This is because the value of variable b is set to 0. For illustration purposes, we assumed that there is no specific exception handler for this error. So, we will use the general except clause, where the exception is re-raised after logging the error. If you want to try this yourself, just write the code illustrated earlier in a new Python file, and run it from a terminal window. The following screenshot shows the output of the preceding code:
The else block of try…except
There is an optional else block that can be specified in the try…except clause. The else block is executed only if no exception occurs in the try…except clause. The syntax is as follows: The else block is executed before the finally clause, which we will study next.
finally...clean it up!
There is something else to add to the try…except…else story: an optional finally clause. As the name suggests, the code within this clause is executed at the end of the associated try…except block. Whether or not an exception is raised, the finally clause, if specified, will certainly get executed at the end of the try…except clause. Imagine it as an all-weather guarantee given by Python! The following code snippet shows the finally block in action: Running this simple code will produce the following output:
$ python finally_example1.py
Enter a number: -1
Uh oh..Assertion Error.
Do some special cleanup
The last line in the output is the print statement from the finally clause. The code snippets with and without the finally clause are shown in the following screenshot. The code in the finally clause is assured to be executed in the end, even when the except clause instructs the code to return from the function. The finally clause is typically used to perform clean-up tasks before leaving the function. An example use case is to close a database connection or a file. However, note that, for this purpose, you can also use the with statement in Python.
Writing a new exception class
It is trivial to create a new exception class derived from Exception. Open your Python interpreter and create the following class:
>>> class GameUnitError(Exception):
...     pass
...
>>>
That's all! We have a new exception class, GameUnitError, ready to be deployed. How to test this exception? Just raise it. Type the following line of code in your Python interpreter:
>>> raise GameUnitError("ERROR: some problem with game unit")
Raising the newly created exception will print the following traceback:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
__main__.GameUnitError: ERROR: some problem with game unit
Copy the GameUnitError class into its own module, gameuniterror.py, and save it in the same directory as attackoftheorcs_v1_1.py. Next, update the attackoftheorcs_v1_1.py file to include the following changes: First, add the following import statement at the beginning of the file:
from gameuniterror import GameUnitError
The second change is in the AbstractGameUnit.heal method. The updated code is shown in the following code snippet. Observe the highlighted code that raises the custom exception whenever the value of self.health_meter exceeds that of self.max_hp. With these two changes, run heal_exception_example.py created earlier. You will see the new exception being raised, as shown in the following screenshot:
Expanding the exception class
Can we do something more with the GameUnitError class? Certainly! Just like any other class, we can define attributes and use them. Let's expand this class further. In the modified version, it will accept an additional argument and some predefined error codes. The updated GameUnitError class is shown in the following screenshot. Let's take a look at the code in the preceding screenshot: First, it calls the __init__ method of the Exception superclass and then defines some additional instance variables. A new dictionary object, self.error_dict, holds the error integer code and the error information as key-value pairs. The self.error_message stores the information about the current error, depending on the error code provided. The try…except clause ensures that error_dict actually has the key specified by the code argument. If it doesn't, the except clause simply retrieves the value for the default error code, 000. So far, we have made changes to the GameUnitError class and the AbstractGameUnit.heal method. We are not done yet. The last piece of the puzzle is to modify the main program in the heal_exception_example.py file. The code is shown in the following screenshot. Let's review the code: As the heal_by value is too large, the heal method in the try clause raises the GameUnitError exception. The new except clause handles the GameUnitError exception just like any other built-in exception. Within the except clause, we have two print statements.
The first one prints health_meter > max_hp! (recall that when this exception was raised in the heal method, this string was given as the first argument to the GameUnitError instance). The second print statement retrieves and prints the error_message attribute of the GameUnitError instance. We have got all the changes in place. We can run this example from a terminal window as:
$ python heal_exception_example.py
The output of the program is shown in the following screenshot: In this simple example, we have just printed the error information to the console. You can further write verbose error logs to a file and keep track of all the error messages generated while the application is running.
Summary
This article served as an introduction to the basics of exception handling in Python. We saw how exceptions occur, learned about some common built-in exception classes, and wrote simple code to handle these exceptions using the try…except clause. The article also demonstrated techniques such as raising and re-raising exceptions, using the finally clause, and so on. The later part of the article focused on implementing custom exception classes. We defined a new exception class and used it for raising custom exceptions for our application. With exception handling, the code is in a better shape.
Resources for Article:
Further resources on this subject: Mining Twitter with Python – Influence and Engagement [article] Exception Handling in MySQL for Python [article] Python LDAP applications - extra LDAP operations and the LDAP URL library [article]


Running Firefox OS Simulators with WebIDE

Packt
12 Oct 2015
9 min read
In this article by Tanay Pant, the author of the book Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics: Getting to know about WebIDE. Installing Firefox OS simulators. Installing and creating new apps with WebIDE. Using developer tools inside WebIDE. Uninstalling applications in Firefox OS. (For more resources related to this topic, see here.)
Introducing WebIDE
It is now time to have a peek at Firefox OS. You can test your applications in two ways: either by running them on a real device or by running them in a Firefox OS simulator. Let's go ahead with the latter option, since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/. WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications/manifest using the built-in source editor. After you install Firefox, open WebIDE. You can open it by navigating to Tools | Web Developer | WebIDE. Let's now take a look at the following screenshot of WebIDE: You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components. It presents a list of Firefox OS simulators. We will install the latest stable and unstable versions of Firefox OS. We install two versions of Firefox OS because we will need both the latest and the stable versions to test our applications in the future. After you successfully install both the simulators, click on Select Runtime. This will now show both the OS versions listed, as shown in the following screenshot: Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS, and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these:
Installing and creating new apps using WebIDE
To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of hosted apps as websites that are served from a web server and are stored online in the server itself, but that can still use appcache and indexeddb to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format, and they can be thought of as the source code of the website bundled and distributed in a ZIP file. Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter the project name, and click on OK. After completing this, the WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop.
Now, click on the Open button and finally, click again on the OK button. This will prepare your app and show details such as the Title, Icon, Description, Location, and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel. In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions. It also contains the app.js file, which is the engine of the application and will contain the functionality of the application; index.html, which will contain the markup data for the application; and finally, the manifest.webapp file, which contains crucial information and various permissions for the application. If you click on any filename, you will notice that the file opens in an in-browser editor where you can edit the files to make changes to your application and save them from here itself. Let's make some edits in the application, in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and the Firefox Marketplace. The manifest file is in JSON format. I went ahead and edited the developer information in the application as well, to include my name and my website. After saving all the files, you will notice that the information of the app in the WebIDE has changed! It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE; hovering over it shows the prompt Install and Run. Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application!
Using developer tools inside WebIDE
WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you used to get the app installed and running. The icon says Debug App on hovering the cursor over it. Click on this to reveal the developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input in console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox. You may have noticed in the preceding illustration that I sent a command via the console, alert('Hello Firefox');, and it simultaneously executed the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get a hang of the debugging capabilities if you haven't done so already!
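For reference, a minimal manifest.webapp along the lines of the edits described above might look like the following; the exact values here are illustrative assumptions, not the template's literal contents:
{
  "name": "Hello Firefox",
  "description": "A minimal packaged Firefox OS application",
  "launch_path": "/index.html",
  "icons": {
    "128": "/icon/icon-128.png"
  },
  "developer": {
    "name": "Tanay Pant",
    "url": "http://example.com"
  }
}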
One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps, due to Content Security Policy (CSP) restrictions. CSP restrictions include remote scripts, inline scripts, javascript URIs, the function constructor, dynamic code execution, and plugins such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and the eval() operator are not allowed for security reasons; they produce a 400 error and security errors, respectively, upon usage. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted.
Browsing other runtime applications
You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS, or Gaia, to be precise. For example, the following is an illustration that shows how to open them: You can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps, as shown in the preceding illustration. I clicked on the Camera application, and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, then you can refer to the Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application looks as follows in the App Launcher of the operating system:
Uninstalling applications in Firefox OS
You can remove the project from WebIDE by clicking on the Remove Project button on the home page of the application. However, this will not uninstall the application from the Firefox OS Simulator. The uninstallation system of the operating system is quite similar to that of iOS. You just have to double tap in OS X to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This will take you back to the Edit screen, where you can click on Done to get back to the home screen.
Summary
In this article, you learned about WebIDE, how to install the Firefox OS simulator in WebIDE, using Firefox OS and installing applications in it, and creating a skeleton application using WebIDE. You then learned how to use developer tools for applications that run in the simulator, and browsed other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system.
Resources for Article:
Further resources on this subject: Learning Node.js for Mobile Application Development [Article] Introducing Web Application Development in Rails [Article] One-page Application Development [Article]


Introduction to Scala

Packt
01 Nov 2016
8 min read
In this article by Diego Pacheco, the author of the book Building Applications with Scala, we will cover the following topics: Writing a Hello World program for Scala using the REPL. Scala language basics. Scala variables: var and val. Creating immutable variables. (For more resources related to this topic, see here.)
Scala Hello World using the REPL
Let's get started. Go ahead, open your terminal, and type $ scala in order to open the Scala REPL. Once the REPL is open, you can just type "Hello World". By doing this, you are performing two operations: eval and print. The Scala REPL will create a variable called res0, store your string there, and then print the content of the res0 variable.
Scala REPL Hello World program:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> "Hello World"
res0: String = Hello World

scala>
Scala is a hybrid language, which means it is both object-oriented (OO) and functional. You can create classes and objects in Scala. Next, we will create a complete Hello World application using classes.
Scala OO Hello World program:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> object HelloWorld {
     |   def main(args:Array[String]) = println("Hello World")
     | }
defined object HelloWorld

scala> HelloWorld.main(null)
Hello World

scala>
First things first, you need to realize that we use the word object instead of class. The Scala language has different constructs compared with Java. An object is a singleton in Scala; it's the same as coding the Singleton pattern in Java. Next, we see the word def, which is used in Scala to create functions. In this program, we create the main function just as we do in Java, and we call the built-in function println in order to print the string Hello World. Scala imports some Java objects and packages by default. Coding in Scala does not require you to type, for instance, System.out.println("Hello World"), but you can if you want to, as shown in the following:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> System.out.println("Hello World")
Hello World

scala>
We can and we will do better. Scala has some abstractions for a console application. We can write this code with fewer lines of code. To accomplish this goal, we need to extend the Scala class App. When we extend from App, we are performing inheritance, and we don't need to define the main function. We can just put all the code in the body of the class, which is very convenient, and which makes the code clean and simple to read.
Scala HelloWorld App in the Scala REPL:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> object HelloWorld extends App {
     |   println("Hello World")
     | }
defined object HelloWorld

scala> HelloWorld
object HelloWorld

scala> HelloWorld.main(null)
Hello World

scala>
After coding the HelloWorld object in the Scala REPL, we can ask the REPL what HelloWorld is and, as you might realize, the REPL answers that HelloWorld is an object. This is a very convenient Scala way to code console applications, because we can have a Hello World application with just three lines of code. Sadly, the same program in Java requires way more code, as you will see in the next section.
Java is a great language for performance, but it is a verbose language compared with Scala.
Java Hello World application:
package scalabook.javacode.chap1;

public class HelloWorld {
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}
The Java application required six lines of code, while in Scala, we were able to do the same with 50% less code (three lines of code). This is a very simple application; when we are coding complex applications, the difference gets bigger, as a Scala application ends up with far less code than its Java counterpart. Remember that we use an object in Scala in order to have a singleton (a design pattern that makes sure you have just one instance of a class), and if we want to do the same in Java, the code would be something like this:
package scalabook.javacode.chap1;

public class HelloWorldSingleton {

    private HelloWorldSingleton() {}

    private static class SingletonHelper {
        private static final HelloWorldSingleton INSTANCE =
                new HelloWorldSingleton();
    }

    public static HelloWorldSingleton getInstance() {
        return SingletonHelper.INSTANCE;
    }

    public void sayHello() {
        System.out.println("Hello World");
    }

    public static void main(String[] args) {
        getInstance().sayHello();
    }
}
It's not just about the size of the code; it is all about consistency and the language providing more abstractions for you. If you write less code, you will have fewer bugs in your software at the end of the day.
Scala language – the basics
Scala is a statically typed language with a very expressive type system, which enforces abstractions in a safe yet coherent manner. All values in Scala are Java objects (except primitives, which are unboxed at runtime) because, at the end of the day, Scala runs on the Java JVM. Scala enforces immutability as a core functional programming principle. This enforcement happens in multiple aspects of the Scala language; for instance, when you create a variable, you do it in an immutable way, and when you use a collection, you use an immutable collection. Scala also lets you use mutable variables and mutable structures, but it favors immutable ones by design.
Scala variables – var and val
When you are coding in Scala, you create variables using either the var operator or the val operator. The var operator allows you to create mutable state, which is fine as long as you keep it local, stick to the core functional programming principles, and avoid mutable shared state.
Using var in the Scala REPL:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> var x = 10
x: Int = 10

scala> x
res0: Int = 10

scala> x = 11
x: Int = 11

scala> x
res1: Int = 11

scala>
However, Scala has a more interesting construct called val. Using the val operator makes your variables immutable, which means you can't change their values after you set them. If you try to change the value of a val variable in Scala, the compiler will give you an error. As a Scala developer, you should use val as much as possible, because that's a good functional programming mindset, and it will make your programs better and more correct. In Scala, everything is an object; there are no primitives; the var and val rules apply to everything, be it Int, String, or even a class.
Using val in the Scala REPL:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.
scala> val x = 10
x: Int = 10

scala> x
res0: Int = 10

scala> x = 11
<console>:12: error: reassignment to val
       x = 11
         ^

scala> x
res1: Int = 10

scala>
Creating immutable variables
Right. Now let's see how we can define the most common types in Scala, such as Int, Double, Boolean, and String. Remember that you can create these variables using val or var, depending on your requirement.
Scala variable types in the Scala REPL:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x = 10
x: Int = 10

scala> val y = 11.1
y: Double = 11.1

scala> val b = true
b: Boolean = true

scala> val f = false
f: Boolean = false

scala> val s = "A Simple String"
s: String = A Simple String

scala>
For these variables, we did not define the type. The Scala language figures it out for us. However, it is possible to specify the type if you want. In Scala, the type comes after the name of the variable, as shown in the following section.
Scala variables with explicit typing in the Scala REPL:
$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77).
Type in expressions for evaluation. Or try :help.

scala> val x:Int = 10
x: Int = 10

scala> val y:Double = 11.1
y: Double = 11.1

scala> val s:String = "My String "
s: String = "My String "

scala> val b:Boolean = true
b: Boolean = true

scala>
Summary
In this article, we learned some basic constructs and concepts of the Scala language, with functions, collections, and OO in Scala.
Resources for Article:
Further resources on this subject: Making History with Event Sourcing [article] Creating Your First Plug-in [article] Content-based recommendation [article]
Deploying HTML5 Applications with GNOME

Packt
28 May 2013
10 min read
Before we start

Most of the discussions in this article require a moderate knowledge of HTML5, JSON, and common client-side JavaScript programming. One particular exercise uses jQuery and jQuery Mobile to show how a real HTML5 application will be implemented.

Embedding WebKit

What we need to learn first is how to embed a WebKit layout engine inside our GTK+ application. Embedding WebKit means we can use HTML and CSS as our user interface instead of GTK+ or Clutter.

Time for action – embedding WebKit

With WebKitGTK+, this is a very easy task to do; just follow these steps:

1. Create an empty Vala project without GtkBuilder and no license. Name it hello-webkit.

2. Modify configure.ac to include WebKitGTK+ in the project. Find the following line of code in the file:

PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0])

Remove that line and replace it with the following one:

PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0 webkitgtk-3.0])

3. Modify Makefile.am inside the src folder to include WebKitGTK+ in the Vala compilation pipeline. Find the following line of code in the file:

hello_webkit_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following line:

hello_webkit_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4

4. Fill the hello_webkit.vala file inside the src folder with the following lines:

using GLib;
using Gtk;
using WebKit;

public class Main : WebView {

    public Main () {
        load_html_string("<h1>Hello</h1>", "/");
    }

    static int main (string[] args) {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window();
        window.add(webView);
        window.show_all ();
        Gtk.main ();
        return 0;
    }
}

5. Copy the accompanying webkit-1.0.vapi file into the src folder. We need to do this, unfortunately, because the webkit-1.0.vapi file distributed with many distributions still targets GTK+ version 2.

6. Run it; you will see a window with the message Hello, as shown in the following screenshot:

What just happened?

What we need to do first is include WebKit in our namespace, so we can use all the functions and classes from it:

using WebKit;

Our class is derived from the WebView widget. It is an important widget in WebKit, capable of showing a web page. Showing it means not only parsing and displaying the DOM properly, but also running the scripts and handling the styles referred to by the document. The derivation declaration is put in the class declaration, as shown next:

public class Main : WebView

In our constructor, we simply load a string and parse it as an HTML document. The string is Hello, styled as a level 1 heading. After the execution of the following line, WebKit will parse and display the presentation of the HTML5 code inside its body:

public Main () {
    load_html_string("<h1>Hello</h1>", "/");
}

In our main function, we create a window to put our WebView widget into. After adding the widget, we call the show_all() function in order to display both the window and the widget:

static int main (string[] args) {
    Gtk.init (ref args);
    var webView = new Main ();
    var window = new Gtk.Window();
    window.add(webView);

The window content now has a WebView widget as its sole displayed widget. At this point, we no longer use GTK+ to show our UI; it is all written in HTML5.
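If you want the interface to come from a real file rather than an inline string, the same widget can point at a URI instead. The following is a minimal sketch, assuming your webkit-1.0.vapi exposes the load_uri binding (available in WebKitGTK+ since 1.1.1); the file path shown is hypothetical:

public Main () {
    // Hypothetical alternative: load the UI from a local HTML file
    // instead of an inline string. Assumes webkit-1.0.vapi binds
    // webkit_web_view_load_uri as load_uri.
    load_uri ("file:///usr/share/hello-webkit/index.html");
}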
Runtime with JavaScriptCore

An HTML5 application is, most of the time, accompanied by client-side scripts written in JavaScript and a set of styling definitions written in CSS3. WebKit already provides the feature of running client-side JavaScript (running the script inside the web page) with a component called JavaScriptCore, so we don't need to worry about it. But what about the connection with the GNOME platform? How do we make the client-side script access GNOME objects?

One approach is to expose our objects, which are written in Vala, so that they can be used by the client-side JavaScript. This is where we will utilize JavaScriptCore. We can think of this as a frontend and backend architecture pattern. All of the business-process code that touches GNOME resides in the backend. It is written in Vala and run by the main process. On the opposite side, the frontend code is written in JavaScript and HTML5, and is run by WebKit internally. The frontend is what the user sees, while the backend is what is going on behind the scenes.

Consider the following diagram of our application. The backend part is grouped inside a grey bordered box and runs in the main process. The frontend is outside the box and is run and displayed by WebKit. From the diagram, we can see that the frontend creates an object and calls a function in the created object. The object we create is not defined in the client side, but is actually created at the backend. We ask JavaScriptCore to act as a bridge, connecting the object created at the backend so that it is accessible by the frontend code. To do this, we wrap the backend objects with JavaScriptCore class and function definitions. For each object we want to make available to the frontend, we need to create a mapping on the JavaScriptCore side. In the following diagram, we first map the MyClass object, then the helloFromVala function, then the intFromVala, and so on:

Time for action – calling the Vala object from the frontend

Now let's try to create a simple client-side JavaScript code and call an object defined at the backend:

1. Create an empty Vala project, without GtkBuilder and no license. Name it hello-jscore.

2. Modify configure.ac to include WebKitGTK+, exactly like our previous experiment.

3. Modify Makefile.am inside the src folder to include WebKitGTK+ and JSCore in the Vala compilation pipeline. Find the following line of code in the file:

hello_jscore_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following line:

hello_jscore_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4 --pkg javascriptcore
4. Fill the hello_jscore.vala file inside the src folder with the following lines of code:

using GLib;
using Gtk;
using WebKit;
using JSCore;

public class Main : WebView {

    public Main () {
        load_html_string("<h1>Hello</h1>" +
            "<script>alert(HelloJSCore.hello())</script>", "/");
        window_object_cleared.connect ((frame, context) => {
            setup_js_class ((JSCore.GlobalContext) context);
        });
    }

    public static JSCore.Value helloFromVala (Context ctx,
        JSCore.Object function,
        JSCore.Object thisObject,
        JSCore.Value[] arguments,
        out JSCore.Value exception) {

        exception = null;
        var text = new String.with_utf8_c_string ("Hello from JSCore");
        return new JSCore.Value.string (ctx, text);
    }

    static const JSCore.StaticFunction[] js_funcs = {
        { "hello", helloFromVala, PropertyAttribute.ReadOnly },
        { null, null, 0 }
    };

    static const ClassDefinition js_class = {
        0,                   // version
        ClassAttribute.None, // attribute
        "HelloJSCore",       // className
        null,                // parentClass
        null,                // static values
        js_funcs,            // static functions
        null,                // initialize
        null,                // finalize
        null,                // hasProperty
        null,                // getProperty
        null,                // setProperty
        null,                // deleteProperty
        null,                // getPropertyNames
        null,                // callAsFunction
        null,                // callAsConstructor
        null,                // hasInstance
        null                 // convertToType
    };

    void setup_js_class (GlobalContext context) {
        var theClass = new Class (js_class);
        var theObject = new JSCore.Object (context, theClass, context);
        var theGlobal = context.get_global_object ();
        var id = new String.with_utf8_c_string ("HelloJSCore");
        theGlobal.set_property (context, id, theObject, PropertyAttribute.None, null);
    }

    static int main (string[] args) {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window();
        window.add(webView);
        window.show_all ();
        Gtk.main ();
        return 0;
    }
}

5. Copy the accompanying webkit-1.0.vapi and javascriptcore.vapi files into the src folder. The javascriptcore.vapi file is needed because some distributions do not have this .vapi file in their repositories.

6. Run the application. The following output will be displayed:

What just happened?

The first thing we do is include the WebKit and JavaScriptCore namespaces. Note, in the following code snippet, that the JavaScriptCore namespace is abbreviated as JSCore:

using WebKit;
using JSCore;

In the constructor, we load HTML content into the WebView widget. We display a level 1 heading and then call the alert function. The alert function displays a string returned by the hello function inside the HelloJSCore class, as shown in the following code:

public Main () {
    load_html_string("<h1>Hello</h1>" +
        "<script>alert(HelloJSCore.hello())</script>", "/");

In the preceding code snippet, we can see that the client-side JavaScript code is as follows:

alert(HelloJSCore.hello())

We can also see that we call the hello function from the HelloJSCore class as a static function. This means that we don't instantiate a HelloJSCore object before calling the hello function.

In WebView, we initialize the class defined in the Vala code when we get the window_object_cleared signal. This signal is emitted whenever a page is cleared. The initialization is done in setup_js_class, and this is also where we pass in the JSCore global context. The global context is where JSCore keeps the global variables and functions; it is accessible by every piece of code:

window_object_cleared.connect ((frame, context) => {
    setup_js_class ((JSCore.GlobalContext) context);
});

The following snippet of code contains the function that we want to expose to the client-side JavaScript.
The function just returns a "Hello from JSCore" string message:

public static JSCore.Value helloFromVala (Context ctx,
    JSCore.Object function,
    JSCore.Object thisObject,
    JSCore.Value[] arguments,
    out JSCore.Value exception) {

    exception = null;
    var text = new String.with_utf8_c_string ("Hello from JSCore");
    return new JSCore.Value.string (ctx, text);
}

Then we need to put in the boilerplate code that is needed to expose the function and other members of the class. The first part of the code is the static function index. This is the mapping between the exposed function and the name of the function defined in the wrapper. In the following example, we map the hello function, which can be used on the client side, to the helloFromVala function defined in the code. The index is then ended with null to mark the end of the array:

static const JSCore.StaticFunction[] js_funcs = {
    { "hello", helloFromVala, PropertyAttribute.ReadOnly },
    { null, null, 0 }
};

The next part of the code is the class definition. It is a structure that we have to fill so that JSCore knows about the class. All of the fields are filled with null, except for those we want to make use of. In this example, we use the static functions field for the hello function, so we fill it with js_funcs, which we defined in the preceding code snippet:

static const ClassDefinition js_class = {
    0,                   // version
    ClassAttribute.None, // attribute
    "HelloJSCore",       // className
    null,                // parentClass
    null,                // static values
    js_funcs,            // static functions
    null,                // initialize
    null,                // finalize
    null,                // hasProperty
    null,                // getProperty
    null,                // setProperty
    null,                // deleteProperty
    null,                // getPropertyNames
    null,                // callAsFunction
    null,                // callAsConstructor
    null,                // hasInstance
    null                 // convertToType
};

After that, in the setup_js_class function, we make the class available in the JSCore global context. First, we create a JSCore.Class with the class definition structure we filled previously. Then, we create an object of the class in the global context. Last but not least, we assign the object a string identifier, which is HelloJSCore. After executing the following code, we will be able to refer to HelloJSCore on the client side:

void setup_js_class (GlobalContext context) {
    var theClass = new Class (js_class);
    var theObject = new JSCore.Object (context, theClass, context);
    var theGlobal = context.get_global_object ();
    var id = new String.with_utf8_c_string ("HelloJSCore");
    theGlobal.set_property (context, id, theObject, PropertyAttribute.None, null);
}
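Once setup_js_class has run, HelloJSCore behaves like any other global object from the page's point of view, so the frontend is not limited to alert. As a sketch, the inline page could instead write the greeting into the document with standard DOM calls; only the HelloJSCore.hello() call below comes from the example above, the rest is illustrative markup:

<h1>Hello</h1>
<script>
  // HelloJSCore is registered by the Vala backend on window_object_cleared.
  var greeting = HelloJSCore.hello();
  document.body.appendChild(document.createTextNode(greeting));
</script>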
.NET 4.5 Parallel Extensions – Async

Packt
07 Aug 2013
13 min read
Creating an async method

The Task-based Asynchronous Pattern (TAP) is a new pattern for asynchronous programming in .NET Framework 4.5. It is based on a task, but in this case a task doesn't represent work that will be performed on another thread; here, a task is used to represent arbitrary asynchronous operations.

Let's start learning how async and await work by creating a Windows Presentation Foundation (WPF) application that accesses the web using HttpClient. This kind of network access is ideal for seeing the TAP in action. The application will get the contents of a classic book from the web, and will provide a count of the number of words in the book.

How to do it…

Let's go to Visual Studio 2012 and see how to use the async and await keywords to maintain a responsive UI by doing the web communications asynchronously:

1. Start a new project using the WPF Application project template and assign WordCountAsync as the Solution name.

2. Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing a Button and a TextBlock:

<Window x:Class="WordCountAsync.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="WordCountAsync" Height="350" Width="525">
    <Grid>
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left"
                Margin="219,195,0,0" VerticalAlignment="Top" Width="75"
                RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <TextBlock x:Name="TextResults" HorizontalAlignment="Left" Margin="60,28,0,0"
                   TextWrapping="Wrap" VerticalAlignment="Top" Height="139" Width="411"/>
    </Grid>
</Window>

3. Next, open up MainWindow.xaml.cs and add a project reference to System.Net.Http.

4. Add the following using directives to the top of your MainWindow class:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;

5. At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };

6. Add a button click event for the StartButton and add the async modifier to the method signature to indicate that this will be an async method. Please note that async methods that return void are normally only used for event handlers, and should be avoided elsewhere:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
}

7. Next, let's create an async method called GetWordCountAsync that returns Task<int>. This method will create an HttpClient and call its GetStringAsync method to download the book contents as a string. It will then use the Split method to split the string into a wordArray. We can return the count of the wordArray as our return value:

public async Task<int> GetWordCountAsync()
{
    TextResults.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/2009.txt");
    var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
    return wordArray.Count();
}

8. Finally, let's complete the implementation of our button click event. The Click event handler will just call GetWordCountAsync with the await keyword and display the result in the TextBlock:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    var result = await GetWordCountAsync();
    TextResults.Text += String.Format("Origin of Species word count: {0}", result);
}

9. In Visual Studio 2012, press F5 to run the project.
Click on the Start button, and your application should appear as shown in the following screenshot:

How it works…

In the TAP, asynchronous methods are marked with the async modifier. The async modifier on a method does not mean that the method will be scheduled to run asynchronously on a worker thread. It means that the method contains control flow that involves waiting for the result of an asynchronous operation, and will be rewritten by the compiler to ensure that the asynchronous operation can resume the method at the right spot.

Let me put this a little more simply: when you add the async modifier to a method, it indicates that the method will wait on asynchronous code to complete, and this is done with the await keyword. The compiler takes the code that follows the await keyword in an async method and turns it into a continuation that will run after the result of the async operation is available. In the meantime, the method is suspended, and control returns to the method's caller.

If you add the async modifier to a method and then don't await anything, it won't cause an error; the method will simply run synchronously.

An async method can have one of three return types: void, Task, or Task<TResult>. As mentioned before, a task in this context doesn't mean that something will execute on a separate thread. In this case, a task is just a container for the asynchronous work, and in the case of Task<TResult>, it is a promise that a result value of type TResult will show up after the asynchronous operation completes.

In our application, we use the async keyword to mark the button click event handler as asynchronous, and then we wait for the GetWordCountAsync method to complete by using the await keyword:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    StartButton.IsEnabled = false;
    var result = await GetWordCountAsync();
    TextResults.Text += String.Format("Origin of Species word count: {0}", result);
    StartButton.IsEnabled = true;
}

The code that follows the await keyword (in this case, the line that updates the TextBlock) is turned by the compiler into a continuation that will run after the integer result is available. If the Click event is fired again while this asynchronous task is in progress, another asynchronous task is created and awaited. To prevent this, it is common practice to disable the button that was clicked, as shown in the preceding snippet. It is a convention to name an asynchronous method with an Async postfix, as we have done with GetWordCountAsync.
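To make the compiler's rewriting a little more concrete, the following sketch shows roughly what an awaiting handler corresponds to when written by hand with ContinueWith. This is an illustration, not what the compiler literally emits (the generated state machine is more involved), but the shape is the same: the remainder of the method becomes a callback scheduled back on the UI context:

private void StartButton_Click(object sender, RoutedEventArgs e)
{
    // Roughly the continuation that await builds for us: run the rest of
    // the method on the UI thread once the word count is available.
    GetWordCountAsync().ContinueWith(antecedent =>
    {
        TextResults.Text += String.Format("Origin of Species word count: {0}",
            antecedent.Result);
    }, TaskScheduler.FromCurrentSynchronizationContext());
}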
Handling Exceptions in asynchronous code

So how would you add exception handling to code that is executed asynchronously? In previous asynchronous patterns, this was very difficult to achieve. In C# 5.0 it is much more straightforward, because you just have to wrap the asynchronous function call in a standard try/catch block. On the surface this sounds easy, and it is, but there is more going on behind the scenes, which will be explained right after we build our next example application.

For this recipe, we will return to our classic books word count scenario, and we will handle an exception thrown by HttpClient when it tries to get the book contents using an incorrect URL.

How to do it…

Let's build another WPF application and take a look at how to handle exceptions when something goes wrong in one of our asynchronous methods:

1. Start a new project using the WPF Application project template and assign AsyncExceptions as the Solution name.

2. Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing a Button and a TextBlock:

<Window x:Class="AsyncExceptions.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="AsyncExceptions" Height="350" Width="525">
    <Grid>
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left"
                Margin="219,195,0,0" VerticalAlignment="Top" Width="75"
                RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <TextBlock x:Name="ResultsTextBlock" HorizontalAlignment="Left" Margin="60,28,0,0"
                   TextWrapping="Wrap" VerticalAlignment="Top" Height="139" Width="411"/>
    </Grid>
</Window>

3. Next, open up MainWindow.xaml.cs. In the Solution Explorer, right-click on References, click on Framework in the menu on the left side of the Reference Manager, and then add a reference to System.Net.Http.

4. Add the following using directives to the top of your MainWindow class:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;

5. At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };

6. Now let's create our GetWordCountAsync method. This method will be very similar to the one in the last recipe, but it will be trying to access the book at an incorrect URL. The asynchronous code will be wrapped in a try/catch block to handle the exception. We will also use a finally block to dispose of the HttpClient:

public async Task<int> GetWordCountAsync()
{
    ResultsTextBlock.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    try
    {
        var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/No_Book_Here.txt");
        var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
        return wordArray.Count();
    }
    catch (Exception ex)
    {
        ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message);
        return 0;
    }
    finally
    {
        client.Dispose();
    }
}

7. Finally, let's create the Click event handler for our StartButton. This is pretty much the same as the last recipe, just wrapped in a try/catch block. Don't forget to add the async modifier to the method signature:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    try
    {
        var result = await GetWordCountAsync();
        ResultsTextBlock.Text += String.Format("Origin of Species word count: {0}", result);
    }
    catch (Exception ex)
    {
        ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message);
    }
}

8. Now, in Visual Studio 2012, press F5 to run the project. Click on the Start button. Your application should appear as shown in the following screenshot:

How it works…

Wrapping your asynchronous code in a try/catch block is pretty easy. In fact, it hides some of the complex work Visual Studio 2012 is doing for us. To understand this, you need to think about the context in which your code is running. When the TAP is used in Windows Forms or WPF applications, there is already a context that the code is running in, such as the message loop UI thread. When async calls are made in those applications, the awaited code goes off to do its work asynchronously and the async method exits back to its caller. In other words, program execution returns to the message loop UI thread.

Console applications don't have the concept of a context. When the code hits an awaited call inside the try block, it will exit back to its caller, which in this case is Main. If there is no more code after the awaited call, the application ends without the async method ever finishing. To alleviate this issue, Microsoft included an async-compatible context with the TAP that is used for console apps or unit test apps to prevent this inconsistent behavior. This new context is called GeneralThreadAffineContext.
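To see why a console caller has to be careful, consider a minimal sketch of a console Main that calls a static variant of our word count method (hypothetical; in C# 5.0, Main itself cannot be async, so it has to block explicitly until the task completes):

static void Main(string[] args)
{
    // Without this explicit block, Main would return as soon as the await
    // inside GetWordCountAsync is reached, and the process could exit
    // before the download ever finishes.
    int count = GetWordCountAsync().GetAwaiter().GetResult();
    Console.WriteLine("Word count: {0}", count);
}

GetAwaiter().GetResult() also rethrows any exception unwrapped, which keeps the try/catch behavior consistent with the await version.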
Do you really need to understand these context issues to handle async exceptions? No, not really. That's part of the beauty of the Task-based Asynchronous Pattern.

Cancelling an asynchronous operation

In .NET 4.5, asynchronous operations can be cancelled in the same way that parallel tasks can be cancelled: by passing in a CancellationToken and calling the Cancel method on the CancellationTokenSource. In this recipe, we are going to create a WPF application that gets the contents of a classic book over the web and performs a word count. This time, though, we are going to set up a Cancel button that we can use to cancel the async operation if we don't want to wait for it to finish.

How to do it…

Let's create a WPF application to show how we can add cancellation to our asynchronous methods:

1. Start a new project using the WPF Application project template and assign AsyncCancellation as the Solution name.

2. Begin by opening MainWindow.xaml and adding the following XAML to create our user interface. In this case, the UI contains a TextBlock, a StartButton, and a CancelButton:

<Window x:Class="AsyncCancellation.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="AsyncCancellation" Height="400" Width="599">
    <Grid Width="600" Height="400">
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left"
                Margin="142,183,0,0" VerticalAlignment="Top" Width="75"
                RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <Button x:Name="CancelButton" Content="Cancel" HorizontalAlignment="Left"
                Margin="379,185,0,0" VerticalAlignment="Top" Width="75"
                Click="CancelButton_Click"/>
        <TextBlock x:Name="TextResult" HorizontalAlignment="Left" Margin="27,24,0,0"
                   TextWrapping="Wrap" VerticalAlignment="Top" Height="135" Width="540"/>
    </Grid>
</Window>

3. Next, open up MainWindow.xaml.cs and, in the Solution Explorer, add a reference to System.Net.Http.

4. Add the following using directives to the top of your MainWindow class (System.Threading is needed for the cancellation types):

using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Windows;

5. At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array, along with a CancellationTokenSource field that the two button handlers will share:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };
CancellationTokenSource cts;

6. Next, let's create the GetWordCountAsync method. This method is very similar to the one explained before; it needs to be marked as asynchronous with the async modifier and it returns Task<int>. This time, however, the method takes a CancellationToken parameter. We also need to use the GetAsync method of HttpClient instead of the GetStringAsync method, because the former supports cancellation, whereas the latter does not. We will add a small delay in the method so that we have time to cancel the operation before the download completes:
public async Task<int> GetWordCountAsync(CancellationToken ct)
{
    TextResult.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    await Task.Delay(500);
    try
    {
        HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct);
        var words = await response.Content.ReadAsStringAsync();
        var wordArray = words.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
        return wordArray.Count();
    }
    finally
    {
        client.Dispose();
    }
}

7. Now, let's create the Click event handler for our CancelButton. This method just needs to check whether the CancellationTokenSource is null, and if not, it calls the Cancel method:

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
    }
}

8. OK, let's finish up by adding a Click event handler for the StartButton. This method is the same as explained before, except that we create the CancellationTokenSource, pass its token to GetWordCountAsync, and add a catch block that specifically handles OperationCanceledException. Don't forget to mark the method with the async modifier:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    try
    {
        var result = await GetWordCountAsync(cts.Token);
        TextResult.Text += String.Format("Origin of Species word count: {0}", result);
    }
    catch (OperationCanceledException)
    {
        TextResult.Text += "The operation was cancelled.\n";
    }
}

9. In Visual Studio 2012, press F5 to run the project. Click on the Start button, then the Cancel button. Your application should appear as shown in the following screenshot:

How it works…

Cancellation is an aspect of user interaction that you need to consider to build a professional async application. In this example, we implemented cancellation by using a Cancel button, which is one of the most common ways to surface cancellation functionality in a GUI application.

In this recipe, cancellation follows a very common flow:

1. The caller (the Start button click event handler) creates a CancellationTokenSource object:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    ...
}

2. The caller calls a cancelable method and passes in the CancellationToken from the CancellationTokenSource (CancellationTokenSource.Token):

public async Task<int> GetWordCountAsync(CancellationToken ct)
{
    ...
    HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct);
    ...
}

3. The Cancel button click event handler requests cancellation using the CancellationTokenSource object (CancellationTokenSource.Cancel()):

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
    }
}

4. The task acknowledges the cancellation by throwing OperationCanceledException, which we handle in a catch block in the Start button click event handler.
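In this recipe, HttpClient observes the token for us, but the same pattern extends to your own code: a cancelable method is expected to poll the token and throw when cancellation has been requested. The following is a small hypothetical helper showing the idea; ThrowIfCancellationRequested is the standard way to acknowledge a pending cancellation in CPU-bound work (the helper assumes the class's delimiters field from step 5):

public int CountWords(string text, CancellationToken ct)
{
    int count = 0;
    foreach (var word in text.Split(delimiters, StringSplitOptions.RemoveEmptyEntries))
    {
        // Cooperative cancellation: throws OperationCanceledException
        // as soon as Cancel() has been called on the token source.
        ct.ThrowIfCancellationRequested();
        count++;
    }
    return count;
}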