How-To Tutorials - Web Development


Create enterprise-grade Angular forms in TypeScript [Tutorial]

Sugandha Lahoti
04 Jul 2018
11 min read
TypeScript is an open-source programming language that adds optional static typing to JavaScript. To give you a flavor of the benefits of TypeScript, here is a very quick look at some of the things it brings to the table:

- A compilation step
- Strong or static typing
- Type definitions for popular JavaScript libraries
- Encapsulation
- Private and public member variables
- Decorators

In this article, we will learn how to build forms with TypeScript. We will cover as much as it takes to build business applications that collect user information. Here is a breakdown of what you should expect from this article:

- Typed form input and output
- Form controls
- Validation
- Form submission and handling

This article is an excerpt from the book TypeScript 2.x for Angular Developers, written by Chris Nwamba.

Creating types for forms

We want to utilize TypeScript as much as possible, as it simplifies our development process and makes our app behavior more predictable. For this reason, we will create a simple data class to serve as a type for the form values. First, create a new Angular project to follow along with the examples. Then, use the following command to create a new class:

```
ng g class flight
```

The class is generated in the app folder; replace its content with the following data class:

```ts
export class Flight {
  constructor(
    public fullName: string,
    public from: string,
    public to: string,
    public type: string,
    public adults: number,
    public departure: Date,
    public children?: number,
    public infants?: number,
    public arrival?: Date,
  ) {}
}
```

This class represents all the values our form (yet to be created) will have. The properties that are followed by a question mark (?) are optional, which means that TypeScript will throw no errors when the respective values are not supplied. Before jumping into creating forms, let's start with a clean slate. Replace the contents of app.component.html with the following:

```html
<div class="container">
  <h3 class="text-center">Book a Flight</h3>
  <div class="col-md-offset-3 col-md-6">
    <!-- TODO: Form here -->
  </div>
</div>
```

Run the app and leave it running. You should see the page at port 4200 of localhost (remember to include Bootstrap).

The form module

Now that we have a contract that we want the form to follow, let's generate the form's component:

```
ng g component flight-form
```

The command also adds the component as a declaration to our App module:

```ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';
import { FlightFormComponent } from './flight-form/flight-form.component';

@NgModule({
  declarations: [
    AppComponent,
    // Component added after
    // being generated
    FlightFormComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

What makes Angular forms special and easy to use are functionalities provided out of the box, such as the NgForm directive. Such functionalities are not available in the core browser module but in the forms module.
Hence, we need to import that module:

```ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

// Import the forms module
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';
import { FlightFormComponent } from './flight-form/flight-form.component';

@NgModule({
  declarations: [
    AppComponent,
    FlightFormComponent
  ],
  imports: [
    BrowserModule,
    // Add the forms module
    // to the imports array
    FormsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

Simply importing FormsModule and adding it to the imports array is all we need to do.

Two-way binding

Now is the perfect time to start showing some form controls using the form component in the browser. Keeping the state in sync between the data layer (model) and the view can be very challenging, but with Angular it's just a matter of using one directive exposed by FormsModule:

```html
<!-- ./app/flight-form/flight-form.component.html -->
<form>
  <div class="form-group">
    <label for="fullName">Full Name</label>
    <input
      type="text"
      class="form-control"
      [(ngModel)]="flightModel.fullName"
      name="fullName"
    >
  </div>
</form>
```

Angular relies on the name attribute internally to carry out binding. For this reason, the name attribute is required. Pay attention to [(ngModel)]="flightModel.fullName"; it binds a property on the component class to the form. This model will be of the Flight type, which is the class we created earlier:

```ts
// ./app/flight-form/flight-form.component.ts
import { Component, OnInit } from '@angular/core';
import { Flight } from '../flight';

@Component({
  selector: 'app-flight-form',
  templateUrl: './flight-form.component.html',
  styleUrls: ['./flight-form.component.css']
})
export class FlightFormComponent implements OnInit {
  flightModel: Flight;

  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, '', 0, 0, '');
  }

  ngOnInit() {}
}
```

The flightModel property is added to the component as a Flight type and initialized with some default values. Include the component in the app HTML, so it can be displayed in the browser:

```html
<div class="container">
  <h3 class="text-center">Book a Flight</h3>
  <div class="col-md-offset-3 col-md-6">
    <app-flight-form></app-flight-form>
  </div>
</div>
```

To see two-way binding in action, use interpolation to display the value of flightModel.fullName. Then, enter a value and watch the live update:

```html
<form>
  <div class="form-group">
    <label for="fullName">Full Name</label>
    <input
      type="text"
      class="form-control"
      [(ngModel)]="flightModel.fullName"
      name="fullName"
    >
    {{flightModel.fullName}}
  </div>
</form>
```

More form fields

Let's get hands-on and add the remaining form fields. After all, we can't book a flight by just supplying our names. The from and to fields are going to be select boxes with a list of cities we can fly into and out of. This list of cities will be stored right in our component class, and then we can iterate over it in the template and render it as a select box:

```ts
export class FlightFormComponent implements OnInit {
  flightModel: Flight;

  // Array of cities
  cities: Array<string> = [
    'Lagos',
    'Mumbai',
    'New York',
    'London',
    'Nairobi'
  ];

  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, '', 0, 0, '');
  }
}
```

The array stores a few cities from around the world as strings.
Let's now use the ngFor directive to iterate over the cities and display them on the form using a select box:

```html
<div class="row">
  <div class="col-md-6">
    <label for="from">From</label>
    <select type="text" id="from" class="form-control" [(ngModel)]="flightModel.from" name="from">
      <option *ngFor="let city of cities" value="{{city}}">{{city}}</option>
    </select>
  </div>
  <div class="col-md-6">
    <label for="to">To</label>
    <select type="text" id="to" class="form-control" [(ngModel)]="flightModel.to" name="to">
      <option *ngFor="let city of cities" value="{{city}}">{{city}}</option>
    </select>
  </div>
</div>
```

Neat and clean! You can open the browser and see it right there. The select drop-down, when clicked, shows a list of cities, as expected. Next, let's add the trip type field (radio buttons), the departure date field (date control), and the arrival date field (date control):

```html
<div class="row" style="margin-top: 15px">
  <div class="col-md-5">
    <label for="" style="display: block">Trip Type</label>
    <label class="radio-inline">
      <input type="radio" name="type" [(ngModel)]="flightModel.type" value="One Way"> One way
    </label>
    <label class="radio-inline">
      <input type="radio" name="type" [(ngModel)]="flightModel.type" value="Return"> Return
    </label>
  </div>
  <div class="col-md-4">
    <label for="departure">Departure</label>
    <input type="date" id="departure" class="form-control" [(ngModel)]="flightModel.departure" name="departure">
  </div>
  <div class="col-md-3">
    <label for="arrival">Arrival</label>
    <input type="date" id="arrival" class="form-control" [(ngModel)]="flightModel.arrival" name="arrival">
  </div>
</div>
```

How the data is bound to the controls is very similar to the text and select fields that we created previously. The major difference is the type of control (radio buttons and dates). Lastly, add the number of passengers (adults, children, and infants):

```html
<div class="row" style="margin-top: 15px">
  <div class="col-md-4">
    <label for="adults">Adults</label>
    <input type="number" id="adults" class="form-control" [(ngModel)]="flightModel.adults" name="adults">
  </div>
  <div class="col-md-4">
    <label for="children">Children</label>
    <input type="number" id="children" class="form-control" [(ngModel)]="flightModel.children" name="children">
  </div>
  <div class="col-md-4">
    <label for="infants">Infants</label>
    <input type="number" id="infants" class="form-control" [(ngModel)]="flightModel.infants" name="infants">
  </div>
</div>
```

The passenger fields are all of the number type because we only expect the number of passengers coming on board from each category.

Validating the form and form fields

Angular greatly simplifies form validation by using its built-in directives and state properties. You can use the state properties to check whether a form field has been touched. If it's touched but violates a validation rule, you can use the ngIf directive to display the associated errors. Let's see an example of validating the full name field:

```html
<div class="form-group">
  <label for="fullName">Full Name</label>
  <input
    type="text"
    id="fullName"
    class="form-control"
    [(ngModel)]="flightModel.fullName"
    name="fullName"
    #name="ngModel"
    required
    minlength="6">
</div>
```

We just added three extra significant attributes to our form's full name field: #name, required, and minlength. The #name attribute is completely different from the name attribute: the former is a template variable that holds information about this field via the ngModel value, while the latter is the usual form input name attribute.
In Angular, validation rules are passed as attributes, which is why required and minlength are there. Yes, the fields are validated, but there is no feedback to the user about what went wrong. Let's add some error messages to be shown when validation rules are violated:

```html
<div *ngIf="name.invalid && (name.dirty || name.touched)" class="text-danger">
  <div *ngIf="name.errors.required">
    Name is required.
  </div>
  <div *ngIf="name.errors.minlength">
    Name must be at least 6 characters long.
  </div>
</div>
```

The ngIf directive shows these div elements conditionally:

- If the form field has been touched but there is no value in it, the "Name is required" error is shown.
- "Name must be at least 6 characters long" is shown when the field is touched but the content length is less than 6.

Submitting forms

We need to consider a few factors before submitting a form:

- Is the form valid?
- Is there a handler for the form prior to submission?

To make sure that the form is valid, we can disable the Submit button:

```html
<form #flightForm="ngForm">
  <div class="form-group" style="margin-top: 15px">
    <button class="btn btn-primary btn-block" [disabled]="!flightForm.form.valid">
      Submit
    </button>
  </div>
</form>
```

First, we add a template variable called flightForm to the form and then use the variable to check whether the form is valid. If the form is invalid, we disable the button from being clicked. To handle the submission, add an ngSubmit event to the form. This event will be fired when the button is clicked:

```html
<form #flightForm="ngForm" (ngSubmit)="handleSubmit()">
  ...
</form>
```

You can now add a class method, handleSubmit, to handle the form submission. A simple log to the console may be just enough for this example:

```ts
export class FlightFormComponent implements OnInit {
  flightModel: Flight;
  cities: Array<string> = [
    ...
  ];

  constructor() {
    this.flightModel = new Flight('', '', '', '', 0, '', 0, 0, '');
  }

  // Handler for submission
  handleSubmit() {
    console.log(this.flightModel);
  }
}
```

We discussed collecting user input via forms, and covered important form features such as typed inputs, validation, two-way binding, and submission. All of this will prepare you for getting started with building business applications. If you liked this article, you may read the book TypeScript 2.x for Angular Developers to learn about typed DOM events and event handling, among other interesting things you can do with TypeScript.
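To tie the pieces together, here is a minimal sketch of what the finished template-driven form component could look like once the typed model, the city list, and the submit handler from the sections above are combined. It simply consolidates the code already shown; the only adjustment is passing a Date for the departure default (the excerpt uses empty strings there), and omitting the optional fields, which the Flight class allows.

```ts
// flight-form.component.ts — a consolidated sketch of the pieces shown above
import { Component, OnInit } from '@angular/core';
import { Flight } from '../flight';

@Component({
  selector: 'app-flight-form',
  templateUrl: './flight-form.component.html',
  styleUrls: ['./flight-form.component.css']
})
export class FlightFormComponent implements OnInit {
  // Typed model backing the whole form via [(ngModel)] bindings
  flightModel: Flight;

  // Cities rendered into the From/To select boxes via *ngFor
  cities: Array<string> = ['Lagos', 'Mumbai', 'New York', 'London', 'Nairobi'];

  constructor() {
    // Optional properties (children, infants, arrival) can be left out entirely
    this.flightModel = new Flight('', '', '', '', 0, new Date());
  }

  ngOnInit() {}

  // Called by (ngSubmit) once the Submit button is enabled and clicked
  handleSubmit() {
    console.log(this.flightModel);
  }
}
```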


Working with Spring Tag Libraries

Packt
13 Jul 2016
26 min read
In this article by Amuthan G, the author of the book Spring MVC Beginners Guide - Second Edition, you are going to learn more about the various tags that are available as part of the Spring tag libraries. (For more resources related to this topic, see here.) After reading this article, you will have a good idea about the following topics: JavaServer Pages Standard Tag Library (JSTL) Serving and processing web forms Form-binding and whitelisting Spring tag libraries JavaServer Pages Standard Tag Library JavaServer Pages (JSP) is a technology that lets you embed Java code inside HTML pages. This code can be inserted by means of <% %> blocks or by means of JSTL tags. To insert Java code into JSP, the JSTL tags are generally preferred, since tags adapt better to their own tag representation of HTML, so your JSP pages will look more readable. JSP even lets you  define your own tags; you must write the code that actually implements the logic of your own tags in Java. JSTL is just a standard tag library provided by Oracle. We can add a reference to the JSTL tag library in our JSP pages as follows: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> Similarly, Spring MVC also provides its own tag library to develop Spring JSP views easily and effectively. These tags provide a lot of useful common functionality such as form binding, evaluating errors and outputting messages, and more when we work with Spring MVC. In order to use these, Spring MVC has provided tags in our JSP pages. We must add a reference to that tag library in our JSP pages as follows: <%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %> <%@taglib prefix="spring" uri="http://www.springframework.org/tags" %> These taglib directives declare that our JSP page uses a set of custom tags related to Spring and identify the location of the library. It also provides a means to identify the custom tags in our JSP page. In the taglib directive, the uri attribute value resolves to a location that the servlet container understands and the prefix attribute informs which bits of markup are custom actions. Serving and processing forms In Spring MVC, the process of putting a HTML form element's values into model data is called form binding. The following line is a typical example of how we put data into the Model from the Controller: model.addAttribute(greeting,"Welcome") Similarly, the next line shows how we retrieve that data in the View using a JSTL expression: <p> ${greeting} </p> But what if we want to put data into the Model from the View? How do we retrieve that data in the Controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling out and submitting a HTML form. How can we collect the values filled out in the HTML form elements and process them in the Controller? This is where the Spring tag library tags help us to bind the HTML tag element's values to a form backing bean in the Model. Later, the Controller can retrieve the formbacking bean from the Model using the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. The form backing bean (sometimes called the form bean) is used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields in the form and the properties in our domain object. Another approach is creating separate classes for form beans, which is sometimes called Data Transfer Objects (DTO). 
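As a rough illustration of that second approach, the following is a hypothetical, minimal form bean (DTO) for a product form; the class name, fields, and types are illustrative and not taken from the book's code, but they show the shape Spring MVC expects — plain properties with getters and setters that request parameters can be bound onto.

```java
import java.math.BigDecimal;

// A hypothetical form bean (DTO) kept separate from the domain model.
public class ProductForm {

    private String productId;
    private String name;
    private BigDecimal unitPrice;
    private String description;

    // Getters and setters are what Spring MVC uses to bind
    // HTTP request parameters onto these properties.
    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public BigDecimal getUnitPrice() { return unitPrice; }
    public void setUnitPrice(BigDecimal unitPrice) { this.unitPrice = unitPrice; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}
```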
Time for action – serving and processing forms The Spring tag library provides some special <form> and <input> tags, which are more or less similar to HTML form and input tags, but have some special attributes to bind form elements’ data with the form backed bean. Let's create a Spring web form in our application to add new products to our product list: Open our ProductRepository interface and add one more method declaration to it as follows: void addProduct(Product product); Add an implementation for this method in the InMemoryProductRepository class as follows: @Override public void addProduct(Product product) { String SQL = "INSERT INTO PRODUCTS (ID, " + "NAME," + "DESCRIPTION," + "UNIT_PRICE," + "MANUFACTURER," + "CATEGORY," + "CONDITION," + "UNITS_IN_STOCK," + "UNITS_IN_ORDER," + "DISCONTINUED) " + "VALUES (:id, :name, :desc, :price, :manufacturer, :category, :condition, :inStock, :inOrder, :discontinued)"; Map<String, Object> params = new HashMap<>(); params.put("id", product.getProductId()); params.put("name", product.getName()); params.put("desc", product.getDescription()); params.put("price", product.getUnitPrice()); params.put("manufacturer", product.getManufacturer()); params.put("category", product.getCategory()); params.put("condition", product.getCondition()); params.put("inStock", product.getUnitsInStock()); params.put("inOrder", product.getUnitsInOrder()); params.put("discontinued", product.isDiscontinued()); jdbcTempleate.update(SQL, params); } Open our ProductService interface and add one more method declaration to it as follows: void addProduct(Product product); And add an implementation for this method in the ProductServiceImpl class as follows: @Override public void addProduct(Product product) { productRepository.addProduct(product); } Open our ProductController class and add two more request mapping methods as follows: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/products/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { productService.addProduct(newProduct); return "redirect:/market/products"; } Finally, add one more JSP View file called addProduct.jsp under the  src/main/webapp/WEB-INF/views/ directory and add the following tag reference declaration as the very first line in it: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> Now add the following code snippet under the tag declaration line and save addProduct.jsp. 
Note that I skipped some <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields while you are trying out this exercise: <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"> <title>Products</title> </head> <body> <section> <div class="jumbotron"> <div class="container"> <h1>Products</h1> <p>Add products</p> </div> </div> </section> <section class="container"> <form:form method="POST" modelAttribute="newProduct" class="form-horizontal"> <fieldset> <legend>Add new product</legend> <div class="form-group"> <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label> <div class="col-lg-10"> <form:input id="productId" path="productId" type="text" class="form:input-large"/> </div> </div> <!-- Similarly bind <form:input> tag for name,unitPrice,manufacturer,category,unitsInStock and unitsInOrder fields--> <div class="form-group"> <label class="control-label col-lg-2" for="description">Description</label> <div class="col-lg-10"> <form:textarea id="description" path="description" rows = "2"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="condition">Condition</label> <div class="col-lg-10"> <form:radiobutton path="condition" value="New" />New <form:radiobutton path="condition" value="Old" />Old <form:radiobutton path="condition" value="Refurbished" />Refurbished </div> </div> <div class="form-group"> <div class="col-lg-offset-2 col-lg-10"> <input type="submit" id="btnAdd" class="btn btn-primary" value ="Add"/> </div> </div> </fieldset> </form:form> </section> </body> </html> Now run our application and enter the URL: http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add product information as shown in the following screenshot:Add a products web form Now enter all the information related to the new product that you want to add and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products. What just happened? In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. Whatever was mentioned prior to step 5 was very familiar to you I guess. Anyhow, I will give you a brief note on what we did in steps 1 to 4. In step 1, we just created an addProduct method declaration in our ProductRepository interface to add new products. And in step 2, we just implemented the addProduct method in our InMemoryProductRepository class. Steps 3 and 4 are just a Service layer extension for ProductRepository. In step 3, we declared a similar method addProduct in our ProductService and implemented it in step 4 to add products to the repository via the productRepository reference. 
Okay, coming back to the important step; what we did in step 5 was nothing but adding two request mapping methods, namely getAddNewProductForm and processAddNewProductForm: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/products/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) { productService.addProduct(productToBeAdded); return "redirect:/market/products"; } If you observe those methods carefully, you will notice a peculiar thing, that is, both the methods have the same URL mapping value in their @RequestMapping annotations (value = "/products/add"). So if we enter the URL http://localhost:8080/webstore/market/products/add in the browser, which method will Spring MVC  map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). Yes if you look again, even though both methods have the same URL mapping, they differ in the request method. So what is happening behind the screen is when we enter the URL http://localhost:8080/webstore/market/products/add in the browser, it is considered as a GET request, so Spring MVC will map that request to the getAddNewProductForm method. Within that method, we simply attach a new empty Product domain object with the model, under the attribute name newProduct. So in the  addproduct.jsp View, we can access that newProduct Model object: Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); Before jumping into the processAddNewProductForm method, let's review the addproduct.jsp View file for some time, so that you understand the form processing flow without confusion. In addproduct.jsp, we just added a <form:form> tag from Spring's tag library: <form:form modelAttribute="newProduct" class="form-horizontal"> Since this special <form:form> tag is coming from a Spring tag library, we need to add a reference to that tag library in our JSP file; that's why we added the following line at the top of the addProducts.jsp file in step 6: <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of the modelAttribute in the <form:form> tag. If you remember correctly, you can see that this value of the modelAttribute and the attribute name we used to store the newProduct object in the Model from our getAddNewProductForm method are the same. So the newProduct object that we attached to the model from the Controller method (getAddNewProductForm) is now bound to the form. This object is called the form backing bean in Spring MVC. Okay now you should look at every <form:input> tag inside the <form:form>tag. You can observe a common attribute in every tag. That attribute is path: <form:input id="productId" path="productId" type="text" class="form:input-large"/> The path attribute just indicates the field name that is relative to form backing bean. So the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now it’s time to come back and review our processAddNewProductForm method. When will this method be invoked? 
This method will be invoked once we press the submit button on our form. Yes, since every form submission is considered a POST request, this time the browser will send a POST request to the same URL http://localhost:8080/webstore/products/add. So this time the processAddNewProductForm method will get invoked since it is a POST request. Inside the processAddNewProductForm method, we simply are calling the addProduct service method to add the new product to the repository: productService.addProduct(productToBeAdded); But the interesting question here is how come the productToBeAdded object is populated with the data that we entered in the form? The answer lies in the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. Notice the method signature of the processAddNewProductForm method: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) Here if you look at the value attribute of the @ModelAttribute annotation, you can observe a pattern. Yes, the @ModelAttribute annotation's value and the value of the modelAttribute from the <form:form> tag are the same. So Spring MVC knows that it should assign the form bounded newProduct object to the processAddNewProductForm method's parameter productToBeAdded. The @ModelAttribute annotation is not only used to retrieve a object from the Model, but if we want we can even use the @ModelAttribute annotation to add objects to the Model. For instance, we can even rewrite our getAddNewProductForm method to something like the following with using the @ModelAttribute annotation: @RequestMapping(value = "/products/add", method = RequestMethod.GET) public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { return "addProduct"; } You can see that we haven't created a new empty Product domain object and attached it to the model. All we did was added a parameter of the type Product and annotated it with the @ModelAttribute annotation, so Spring MVC will know that it should create an object of Product and attach it to the model under the name newProduct. One more thing that needs to be observed in the processAddNewProductForm method is the logical View name it is returning: redirect:/market/products. So what we are trying to tell Spring MVC by returning the string redirect:/market/products? To get the answer, observe the logical View name string carefully; if we split this string with the ":" (colon) symbol, we will get two parts. The first part is the prefix redirect and the second part is something that looks like a request path: /market/products. So, instead of returning a View name, we are simply instructing Spring to issue a redirect request to the request path /market/products, which is the request path for the list method of our ProductController. So after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring will use a special View object called RedirectView (org.springframework.web.servlet.view.RedirectView) to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of a web form, we are spawning a new request to the request path /market/products with the help of RedirectView. This pattern is called redirect-after-post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form. 
Sometimes, after submitting a form, if we press the browser's refresh or back button, there is a chance of resubmitting the same form. This behavior is called double submission.

Have a go hero – customer registration form

It is great that we created a web form to add new products to our web application under the URL http://localhost:8080/webstore/market/products/add. Why don't you create a customer registration form to register a new customer in our application? Try to create it under the URL http://localhost:8080/webstore/customers/add.

Customizing data binding

In the last section, you saw how to bind data submitted by an HTML form to a form backing bean. In order to do the binding, Spring MVC internally uses a special binding object called WebDataBinder (org.springframework.web.bind.WebDataBinder). WebDataBinder extracts the data out of the HttpServletRequest object, converts it to a proper data format, loads it into the form backing bean, and validates it. To customize the behavior of data binding, we can initialize and configure the WebDataBinder object in our Controller. The @InitBinder (org.springframework.web.bind.annotation.InitBinder) annotation helps us do that: it designates a method that initializes WebDataBinder. Let's look at a practical use of customizing WebDataBinder. Since we are using the actual domain object itself as the form backing bean, there is a chance of security vulnerabilities during form submission. Because Spring automatically binds HTTP parameters to form bean properties, an attacker could bind a suitably named HTTP parameter to form properties that weren't intended for binding. To address this problem, we can explicitly tell Spring which fields are allowed for form binding. Technically speaking, the process of explicitly stating which fields are allowed for binding is called whitelisting binding in Spring MVC; we can do whitelisting binding using WebDataBinder.

Time for action – whitelisting form fields for binding

In the previous exercise, while adding a new product, we bound every field of the Product domain object in the form, but it is meaningless to specify unitsInOrder and discontinued values when adding a new product: nobody can place an order before the product is added to the store, and discontinued products need not be added to our product list. So we should not allow these fields to be bound to the form bean while adding a new product to our store. However, all the other fields of the Product domain object should still be bound.
Let's see how to this with the following steps: Open our ProductController class and add a method as follows: @InitBinder public void initialiseBinder(WebDataBinder binder) { binder.setAllowedFields("productId", "name", "unitPrice", "description", "manufacturer", "category", "unitsInStock", "condition"); } Add an extra parameter of the type BindingResult (org.springframework.validation.BindingResult) to the processAddNewProductForm method as follows: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded, BindingResult result) In the same processAddNewProductForm method, add the following condition just before the line saving the productToBeAdded object: String[] suppressedFields = result.getSuppressedFields(); if (suppressedFields.length > 0) { throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields)); } Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add new product information. Fill out all the fields, particularly Units in order and discontinued. Now press the Add button and you will see a HTTP status 500 error on the web page as shown in the following image: The add product page showing an error for disallowed fields Now open addProduct.jsp from /Webshop/src/main/webapp/WEB-INF/views/ in your project and remove the input tags that are related to the Units in order and discontinued fields. Basically, you need to remove the following block of code: <div class="form-group"> <label class="control-label col-lg-2" for="unitsInOrder">Units In Order</label> <div class="col-lg-10"> <form:input id="unitsInOrder" path="unitsInOrder" type="text" class="form:input-large"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add a new product, but this time without the Units in order and Discontinued fields. Now enter all information related to the new product and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products. What just happened? Our intention was to put some restrictions on binding HTTP parameters with the form baking bean. As we already discussed, the automatic binding feature of Spring could lead to a potential security vulnerability if we used a domain object itself as form bean. So we have to explicitly tell Spring MVC which are fields are allowed. That's what we are doing in step 1. The @InitBinder annotation designates a Controller method as a hook method to do some custom configuration regarding data binding on the WebDataBinder. And WebDataBinder is the thing that is doing the data binding at runtime, so we need to tell which fields are allowed to bind to WebDataBinder. If you observe our initialiseBinder method from ProductController, it has a parameter called binder, which is of the type WebDataBinder. We are simply calling the setAllowedFields method on the binder object and passing the field names that are allowed for binding. Spring MVC will call this method to initialize WebDataBinder before doing the binding since it has the @InitBinder annotation. 
WebDataBinder also has a method called setDisallowedFields to strictly specify which fields are disallowed for binding . If you use this method, Spring MVC allows any HTTP request parameters to be bound except those fields names specified in the setDisallowedFields method. This is called blacklisting binding. Okay, we configured which the allowed fields are for binding, but we need to verify whether any fields other than those allowed are bound with the form baking bean. That's what we are doing in steps 2 and 3. We changed processAddNewProductForm by adding one extra parameter called result, which is of the type BindingResult. Spring MVC will fill this object with the result of the binding. If any attempt is made to bind any fields other than the allowed fields, the BindingResult object will have a getSuppressedFields count greater than zero. That's why we were checking the suppressed field count and throwing a RuntimeException exception: if (suppressedFields.length > 0) { throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields)); } Here the static class StringUtils comes from org.springframework.util.StringUtils. We want to ensure that our binding configuration is working—that's why we run our application without changing the View file addProduct.jsp in step 4. And as expected, we got the HTTP status 500 error saying Attempting to bind disallowed fields when we submit the Add products form with the unitsInOrder and discontinued fields filled out. Now we know our binder configuration is working, we could change our View file so not to bind the disallowed fields—that's what we were doing in step 6; just removing the input field elements that are related to the disallowed fields from the addProduct.jsp file. After that, our added new products page just works fine, as expected. If any of the outside attackers try to tamper with the POST request and attach a HTTP parameter with the same field name as the form baking bean, they will get a RuntimeException. The whitelisting is just an example of how can we customize the binding with the help of WebDataBinder. But by using WebDataBinder, we can perform many more types of binding customization as well. For example, WebDataBinder internally uses many PropertyEditor (java.beans.PropertyEditor) implementations to convert the HTTP request parameters to the target field of the form backing bean. We can even register custom PropertyEditor objects with WebDataBinder to convert more complex data types. For instance, look at the following code snippet that shows how to register the custom PropertyEditor to convert a Date class: @InitBinder public void initialiseBinder (WebDataBinder binder) { DateFormat dateFormat = new SimpleDateFormat("MMM d, YYYY"); CustomDateEditor orderDateEditor = new CustomDateEditor(dateFormat, true); binder.registerCustomEditor(Date.class, orderDateEditor); } There are many advanced configurations we can make with WebDataBinder in terms of data binding, but for a beginner level, we don’t need to go that deep. 
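For contrast with the whitelisting approach used above, here is a brief sketch of what the blacklisting alternative just mentioned might look like; it mirrors the initialiseBinder method from step 1, simply swapping setAllowedFields for setDisallowedFields, and the two field names follow the Product domain object used throughout this article.

```java
// A sketch of blacklisting: every request parameter may bind
// except the fields listed here.
@InitBinder
public void initialiseBinder(WebDataBinder binder) {
    binder.setDisallowedFields("unitsInOrder", "discontinued");
}
```

Whitelisting is generally the safer default of the two, since any field added to the domain object later stays excluded until it is explicitly allowed.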
Pop quiz – data binding

Consider the following data binding customization and identify the possible matching field bindings:

```java
@InitBinder
public void initialiseBinder(WebDataBinder binder) {
    binder.setAllowedFields("unit*");
}
```

- NoOfUnit
- unitPrice
- priceUnit
- united

Externalizing text messages

So far, in all our View files, we have hardcoded the text values for all the labels; for instance, take our addProduct.jsp file—for the productId input tag, we have a label tag with the hardcoded text value Product Id:

```jsp
<label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>
```

Externalizing these texts from a View file into a properties file gives us a single, centralized point of control for all label messages. Moreover, it helps us make our web pages ready for internationalization. But in order to perform internationalization, we need to externalize the label messages first. So now you are going to see how to externalize locale-sensitive text messages from a web page to a property file.

Time for action – externalizing messages

Let's externalize the label texts in our addProduct.jsp:

1. Open our addProduct.jsp file and add the following tag lib reference at the top: <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>
2. Change the product ID <label> tag's value to <spring:message code="addProduct.form.productId.label"/>. After the change, the product ID <label> tag should look as follows: <label class="control-label col-lg-2 col-lg-2" for="productId"> <spring:message code="addProduct.form.productId.label"/> </label>
3. Create a file called messages.properties under /src/main/resources in your project and add the following line to it: addProduct.form.productId.label = New Product ID
4. Now open our web application context configuration file, WebApplicationContextConfig.java, and add the following bean definition to it: @Bean public MessageSource messageSource() { ResourceBundleMessageSource resource = new ResourceBundleMessageSource(); resource.setBasename("messages"); return resource; }
5. Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see the add product page with the product ID label showing as New Product ID.

What just happened?

Spring MVC has a special tag called <spring:message> to externalize texts from JSP files. In order to use this tag, we need to add a reference to a Spring tag library—that's what we did in step 1. We just added a reference to the Spring tag library in our addProduct.jsp file:

```jsp
<%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>
```

In step 2, we used that tag to externalize the label text of the product ID input tag:

```jsp
<label class="control-label col-lg-2 col-lg-2" for="productId">
   <spring:message code="addProduct.form.productId.label"/>
</label>
```

An important thing to note here is the code attribute of the <spring:message> tag; we assigned the value addProduct.form.productId.label as the code for this <spring:message> tag. This code attribute is a kind of key; at runtime, Spring will try to read the corresponding value for the given key (code) from a message source property file. Since Spring reads the message's value from a message source property file, we need to create that property file. That's what we did in step 3: we created a property file with the name messages.properties under the resource directory.
Inside that file, we just assigned the label text value to the message tag code: addProduct.form.productId.label = New Product ID Remember for demonstration purposes I just externalized a single label, but a typical web application will have externalized messages  for almost all tags; in that case messages messages.properties file will have many code-value pair entries. Okay, we created a message source property file and added the <spring:message> tag in our JSP file, but to connect these two, we need to create one more Spring bean in our web application context for the org.springframework.context.support.ResourceBundleMessageSource class with the name messageSource—we did that in step 4: @Bean public MessageSource messageSource() { ResourceBundleMessageSource resource = new ResourceBundleMessageSource(); resource.setBasename("messages"); return resource; } One important property you need to notice here is the basename property; we assigned the value messages for that property. If you remember, this is the name of the property file that we created in step 3. That is all we did to enable the externalizing of messages in a JSP file. Now if we run the application and open up the Add products page, you can see that the product ID label will have the same text as we assigned to the  addProdcut.form.productId.label code in the messages.properties file. Have a go hero – externalize all the labels from all the pages I just showed you how to externalize the message for a single label; you can now do that for every single label available in all the pages. Summary At the start of this article, you saw how to serve and process forms, and you learned how to bind form data with a form backing bean. You also learned how to read a bean in the Controller. After that, we went a little deeper into the form bean binding and configured the binder in our Controller to whitelist some of the POST parameters from being bound to the form bean. Finally, you saw how to use one more Spring special tag <spring:message> to externalize the messages in a JSP file. Resources for Article: Further resources on this subject: Designing your very own ASP.NET MVC Application[article] Mixing ASP.NET Webforms and ASP.NET MVC[article] ASP.NET MVC Framework[article]


Gearing Up for Bootstrap 4

Packt
12 Sep 2016
28 min read
In this article by Benjamin Jakobus and Jason Marah, the authors of the book Mastering Bootstrap 4, we will be discussing the key points about Bootstrap as a web development framework that helps developers build web interfaces. Originally conceived at Twitter in 2011 by Mark Otto and Jacob Thornton, the framework is now open source and has grown to be one of the most popular web development frameworks to date. Being freely available for private, educational, and commercial use meant that Bootstrap quickly grew in popularity. Today, thousands of organizations rely on Bootstrap, including NASA, Walmart, and Bloomberg. According to BuiltWith.com, over 10% of the world's top 1 million websites are built using Bootstrap (http://trends.builtwith.com/docinfo/Twitter-Bootstrap). As such, knowing how to use Bootstrap will be an important skill and serve as a powerful addition to any web developer’s tool belt. (For more resources related to this topic, see here.) The framework itself consists of a mixture of JavaScript and CSS, and provides developers with all the essential components required to develop a fully functioning web user interface. Over the course of the book, we will be introducing you to all of the most essential features that Bootstrap has to offer by teaching you how to use the framework to build a complete website from scratch. As CSS and HTML alone are already the subject of entire books in themselves, we assume that you, the reader, has at least a basic knowledge of HTML, CSS, and JavaScript. We begin this article by introducing you to our demo website—MyPhoto. This website will accompany us throughout the book, and serve as a practical point of reference. Therefore, all lessons learned will be taught within the context of MyPhoto. We will then discuss the Bootstrap framework, listing its features and contrasting the current release to the last major release (Bootstrap 3). Last but not least, this article will help you set up your development environment. To ensure equal footing, we will guide you towards installing the right build tools, and precisely detail the various ways in which you can integrate Bootstrap into a project. To summarize, this article will do the following: Introduce you to what exactly we will be doing Explain what is new in the latest version of Bootstrap, and how the latest version differs to the previous major release Show you how to include Bootstrap in our web project Introducing our demo project The book will teach you how to build a complete Bootstrap website from scratch. We will build and improve the website's various sections as we progress through the book. The concept behind our website is simple. To develop a landing page for photographers. Using this landing page, (hypothetical) users will be able to exhibit their wares and services. While building our website, we will be making use of the same third-party tools and libraries that you would if you were working as a professional software developer. We chose these tools and plugins specifically because of their widespread use. Learning how to use and integrate them will save you a lot of work when developing websites in the future. Specifically, the tools that we will use to assist us throughout the development of MyPhoto are Bower, node package manager (npm) and Grunt. From a development perspective, the construction of MyPhoto will teach you how to use and apply all of the essential user interface concepts and components required to build a fully functioning website. 
Among other things, you will learn how to do the following:

- Use the Bootstrap grid system to structure the information presented on your website.
- Create a fixed, branded navigation bar with animated scroll effects.
- Use an image carousel for displaying different photographs, implemented using Bootstrap's carousel.js and jumbotron (jumbotron is a design principle for displaying important content). It should be noted that carousels are becoming an increasingly unpopular design choice; however, they are still heavily used and are an important feature of Bootstrap. As such, we do not argue for or against the use of carousels, as their effectiveness depends very much on how they are used rather than on whether they are used.
- Build custom tabs that allow users to navigate across different content.
- Use and apply Bootstrap's modal dialogs.
- Apply a fixed page footer.
- Create forms for data entry using Bootstrap's input controls (text fields, text areas, and buttons) and apply Bootstrap's input validation styles.
- Make best use of Bootstrap's context classes.
- Create alert messages and learn how to customize them.
- Rapidly develop interactive data tables for displaying product information.
- Use drop-down menus, custom fonts, and icons.

In addition to learning how to use Bootstrap 4, the development of MyPhoto will introduce you to a range of third-party libraries such as Scrollspy (for scroll animations), SalvattoreJS (a library for complementing our Bootstrap grid), Animate.css (for beautiful CSS animations, such as fade-in effects, at https://daneden.github.io/animate.css/), and Bootstrap DataTables (for rapidly displaying data in tabular form). The website itself will consist of different sections:

- A Welcome section
- An About section
- A Services section
- A Gallery section
- A Contact Us section

The development of each section is intended to teach you how to use a distinct set of features found in third-party libraries. For example, by developing the Welcome section, you will learn how to use Bootstrap's jumbotron and alert dialogs along with different font and text styles, while the About section will show you how to use cards. The Services section of our project introduces you to Bootstrap's custom tabs; that is, you will learn how to use Bootstrap's tabs to display a range of different services offered by our website. Following on from the Services section, you will need to use rich imagery to really show off the website's sample services. You will achieve this by mastering Bootstrap's responsive core along with Bootstrap's carousel and third-party jQuery plugins. Last but not least, the Contact Us section will demonstrate how to use Bootstrap's form elements and helper functions. That is, you will learn how to use Bootstrap to create stylish HTML forms, how to use form fields and input groups, and how to perform data validation. Finally, toward the end of the book, you will learn how to optimize your website and integrate it with the popular JavaScript frameworks AngularJS (https://angularjs.org/) and React (http://facebook.github.io/react/). As entire books have been written on AngularJS alone, we will only cover the essentials required for the integration itself. Now that you have glimpsed a brief overview of MyPhoto, let's examine Bootstrap 4 in more detail and discuss what makes it so different to its predecessor.

Figure 1.1: A taste of what is to come: the MyPhoto landing page.
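To give a first taste of the grid system mentioned in the list above, here is a small illustrative sketch (not taken from the book) of the kind of markup Bootstrap's grid encourages; the class names are standard Bootstrap grid classes, while the section names are placeholders echoing MyPhoto's structure.

```html
<!-- A minimal 12-column grid illustration: three equal columns
     on medium screens and above, stacked on smaller screens. -->
<div class="container">
  <div class="row">
    <div class="col-md-4">About</div>
    <div class="col-md-4">Services</div>
    <div class="col-md-4">Gallery</div>
  </div>
</div>
```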
What Bootstrap 4 Alpha 4 has to offer Much has changed since Twitter’s Bootstrap was first released on August 19th, 2011. In essence, Bootstrap 1 was a collection of CSS rules offering developers the ability to lay out their website, create forms, buttons, and help with general appearance and site navigation. With respect to these core features, Bootstrap 4 Alpha 4 is still much the same as its predecessors. In other words, the framework's focus is still on allowing developers to create layouts, and helping to develop a consistent appearance by providing stylings for buttons, forms, and other user interface elements. How it helps developers achieve and use these features however, has changed entirely. Bootstrap 4 is a complete rewrite of the entire project, and, as such, ships with many fundamental differences to its predecessors. Along with Bootstrap's major features, we will be discussing the most striking differences between Bootstrap 3 and Bootstrap 4 in the sub sections below. Layout Possibly the most important and widely used feature is Bootstrap's ability to lay out and organize your page. Specifically, Bootstrap offers the following: Responsive containers. Responsive breakpoints for adjusting page layout in response to differing screen sizes. A 12 column grid layout for flexibly arranging various elements on your page. Media objects that act as building blocks and allow you to build your own structural components. Utility classes that allow you to manipulate elements in a responsive manner. For example, you can use the layout utility classes to hide elements, depending on screen size. Content styling Just like its predecessor, Bootstrap 4 overrides the default browser styles. This means that many elements, such as lists or headings, are padded and spaced differently. The majority of overridden styles only affect spacing and positioning, however, some elements may also have their border removed. The reason behind this is simple. To provide users with a clean slate upon which they can build their site. Building on this clean slate, Bootstrap 4 provides styles for almost every aspect of your webpage such as buttons (Figure 1.2), input fields, headings, paragraphs, special inline texts, such as keyboard input (Figure 1.3), figures, tables, and navigation controls. Aside from this, Bootstrap offers state styles for all input controls, for example, styles for disabled buttons or toggled buttons. Take a look at the following screenshot: Figure 1.2: The six button styles that come with Bootstrap 4 are btn-primary,btn-secondary, btn-success,btn-danger, btn-link,btn-info, and btn-warning. Take a look at the following screenshot: Figure 1.3: Bootstrap's content styles. In the preceding example, we see inline styling for denoting keyboard input. Components Aside from layout and content styling, Bootstrap offers a large variety of reusable components that allow you to quickly construct your website's most fundamental features. Bootstrap's UI components encompass all of the fundamental building blocks that you would expect a web development toolkit to offer: Modal dialogs, progress bars, navigation bars, tooltips, popovers, a carousel, alerts, drop-down menu, input groups, tabs, pagination, and components for emphasizing certain contents. Let's have a look at the following modal dialog screenshot: Figure 1.4: Various Bootstrap 4 components in action. In the screenshot above we see a sample modal dialog, containing an info alert, some sample text, and an animated progress bar. 
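To make the button and alert styling described above a little more concrete, here is a brief illustrative snippet showing how these contextual classes are typically applied; the markup is a sketch rather than an excerpt from MyPhoto, and the text content is placeholder copy.

```html
<!-- Contextual button styles -->
<button type="button" class="btn btn-primary">Primary</button>
<button type="button" class="btn btn-danger">Danger</button>

<!-- An informational alert, one of the reusable components listed above -->
<div class="alert alert-info" role="alert">
  Heads up! The photo gallery is still being uploaded.
</div>
```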
Mobile support Similar to its predecessor, Bootstrap 4 allows you to create mobile friendly websites without too much additional development work. By default, Bootstrap is designed to work across all resolutions and screen sizes, from mobile, to tablet, to desktop. In fact, Bootstrap's mobile first design philosophy implies that its components must display and function correctly at the smallest screen size possible. The reasoning behind this is simple. Think about developing a website without consideration for small mobile screens. In this case, you are likely to pack your website full of buttons, labels, and tables. You will probably only discover any usability issues when a user attempts to visit your website using a mobile device only to find a small webpage that is crowded with buttons and forms. At this stage, you will be required to rework the entire user interface to allow it to render on smaller screens. For precisely this reason, Bootstrap promotes a bottom-up approach, forcing developers to get the user interface to render correctly on the smallest possible screen size, before expanding upwards. Utility classes Aside from ready-to-go components, Bootstrap offers a large selection of utility classes that encapsulate the most commonly needed style rules. For example, rules for aligning text, hiding an element, or providing contextual colors for warning text. Cross-browser compatibility Bootstrap 4 supports the vast majority of modern browsers, including Chrome, Firefox, Opera, Safari, Internet Explorer (version 9 and onwards; Internet Explorer 8 and below are not supported), and Microsoft Edge. Sass instead of Less Both Less and Sass (Syntactically Awesome Stylesheets) are CSS extension languages. That is, they are languages that extend the CSS vocabulary with the objective of making the development of many, large, and complex style sheets easier. Although Less and Sass are fundamentally different languages, the general manner in which they extend CSS is the same—both rely on a preprocessor. As you produce your build, the preprocessor is run, parsing the Less/Sass script and turning your Less or Sass instructions into plain CSS. Less is the official Bootstrap 3 build, while Bootstrap 4 has been developed from scratch, and is written entirely in Sass. Both Less and Sass are compiled into CSS to produce a single file, bootstrap.css. It is this CSS file that we will be primarily referencing throughout this book (with the exception of Chapter 3, Building the Layout). Consequently, you will not be required to know Sass in order to follow this book. However, we do recommend that you take a 20 minute introductory course on Sass if you are completely new to the language. Rest assured, if you already know CSS, you will not need more time than this. The language's syntax is very close to normal CSS, and its elementary concepts are similar to those contained within any other programming language. From pixel to root em Unlike its predecessor, Bootstrap 4 no longer uses pixel (px) as its unit of typographic measurement. Instead, it primarily uses root em (rem). The reasoning behind choosing rem is based on a well known problem with px, that is websites using px may render incorrectly, or not as originally intended, as users change the size of the browser's base font. Using a unit of measurement that is relative to the page's root element helps address this problem, as the root element will be scaled relative to the browser's base font. 
In turn, a page will be scaled relative to this root element. Typographic units of measurement Simply put, typographic units of measurement determine the size of your font and elements. The most commonly used units of measurement are px and em. The former is an abbreviation for pixel, and uses a reference pixel to determine a font's exact size. This means that, for displays of 96 dots per inch (dpi), 1 px will equal an actual pixel on the screen. For higher resolution displays, the reference pixel will result in the px being scaled to match the display's resolution. For example, specifying a font size of 100 px will mean that the font is exactly 100 pixels in size (on a display with 96 dpi), irrespective of any other element on the page. Em is a unit of measurement that is relative to the parent of the element to which it is applied. So, for example, if we were to have two nested div elements, the outer element with a font size of 100 px and the inner element with a font size of 2 em, then the inner element's font size would translate to 200 px (as in this case 1 em = 100 px). The problem with using a unit of measurement that is relative to parent elements is that it increases your code's complexity, as the nesting of elements makes size calculations more difficult. The recently introduced rem measurement aims to address both em's and px's shortcomings by combining their two strengths—instead of being relative to a parent element, rem is relative to the page's root element. No more support for Internet Explorer 8 As was already implicit in the feature summary above, the latest version of Bootstrap no longer supports Internet Explorer 8. As such, the decision to only support newer versions of Internet Explorer was a reasonable one, as not even Microsoft itself provides technical support and updates for Internet Explorer 8 anymore (as of January 2016). Furthermore, Internet Explorer 8 does not support rem, meaning that Bootstrap 4 would have been required to provide a workaround. This in turn would most likely have implied a large amount of additional development work, with the potential for inconsistencies. Lastly, responsive website development for Internet Explorer 8 is difficult, as the browser does not support CSS media queries. Given these three factors, dropping support for this version of Internet Explorer was the most sensible path of action. A new grid tier Bootstrap's grid system consists of a series of CSS classes and media queries that help you lay out your page. Specifically, the grid system helps alleviate the pain points associated with horizontal and vertical positioning of a page's contents and the structure of the page across multiple displays. With Bootstrap 4, the grid system has been completely overhauled, and a new grid tier has been added with a breakpoint of 480 px and below. We will be talking about tiers, breakpoints, and Bootstrap's grid system extensively in this book. Bye-bye GLYPHICONS Bootstrap 3 shipped with a nice collection of over 250 font icons, free of use. In an effort to make the framework more lightweight (and because font icons are considered bad practice), the GLYPHICON set is no longer available in Bootstrap 4. Bigger text – No more panels, wells, and thumbnails The default font size in Bootstrap 4 is 2 px bigger than in its predecessor, increasing from 14 px to 16 px. Furthermore, Bootstrap 4 replaced panels, wells, and thumbnails with a new concept—cards. 
To readers unfamiliar with the concept of wells, a well is a UI component that allows developers to highlight text content by applying an inset shadow effect to the element to which it is applied. A panel too serves to highlight information, but by applying padding and rounded borders. Cards serve the same purpose as their predecessors, but are less restrictive, as they are flexible enough to support different types of content, such as images, lists, or text. They can also be customized to use footers and headers. Take a look at the following screenshot:
Figure 1.5: The Bootstrap 4 card component replaces existing wells, thumbnails, and panels.
New and improved form input controls
Bootstrap 4 introduces new form input controls—a color chooser, a date picker, and a time picker. In addition, new classes have been introduced, improving the existing form input controls. For example, Bootstrap 4 now allows for input control sizing, as well as classes for denoting block and inline level input controls. However, one of the most anticipated new additions is Bootstrap's input validation styles, which used to require third-party libraries or a manual implementation, but are now shipped with Bootstrap 4 (see Figure 1.6 below). Take a look at the following screenshot:
Figure 1.6: The new Bootstrap 4 input validation styles, indicating the successful processing of input.
Last but not least, Bootstrap 4 also offers custom forms in order to provide even more cross-browser UI consistency across input elements (Figure 1.7). As noted in the Bootstrap 4 Alpha 4 documentation, the input controls are:
"built on top of semantic and accessible markup, so they're solid replacements for any default form control" – Source: http://v4-alpha.getbootstrap.com/components/forms/
Take a look at the following screenshot:
Figure 1.7: Custom Bootstrap input controls that replace the browser defaults in order to ensure cross-browser UI consistency.
Customization
The developers behind Bootstrap 4 have put specific emphasis on customization throughout the development of Bootstrap 4. As such, many new variables have been introduced that allow for the easy customization of Bootstrap. Using the $enable-* Sass variables, one can now enable or disable specific global CSS preferences.
Setting up our project
Now that we know what Bootstrap has to offer, let us set up our project:
Create a new project directory named MyPhoto. This will become our project root directory.
Create a blank index.html file and insert the following HTML code:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>MyPhoto</title>
</head>
<body>
<div class="alert alert-success">
Hello World!
</div>
</body>
</html>
Note the three meta tags:
The first tag tells the browser that the document in question is utf-8 encoded.
Since Bootstrap optimizes its content for mobile devices, the second meta tag is required to help with viewport scaling.
The last meta tag forces the document to be rendered using the latest document rendering mode available if viewed in Internet Explorer.
Open the index.html in your browser. You should see just a blank page with the words Hello World.
Now it is time to include Bootstrap. At its core, Bootstrap is a glorified CSS style sheet. Within that style sheet, Bootstrap exposes very powerful features of CSS with an easy-to-use syntax.
It being a style sheet, you include it in your project as you would with any other style sheet that you might develop yourself. That is, open the index.html and directly link to the style sheet. Viewport scaling The term viewport refers to the available display size to render the contents of a page. The viewport meta tag allows you to define this available size. Viewport scaling using meta tags was first introduced by Apple and, at the time of writing, is supported by all major browsers. Using the width parameter, we can define the exact width of the user's viewport. For example, <meta name="viewport" content="width=320px"> will instruct the browser to set the viewport's width to 320 px. The ability to control the viewport's width is useful when developing mobile-friendly websites; by default, mobile browsers will attempt to fit the entire page onto their viewports by zooming out as far as possible. This allows users to view and interact with websites that have not been designed to be viewed on mobile devices. However, as Bootstrap embraces a mobile-first design philosophy, a zoom out will, in fact, result in undesired side-effects. For example, breakpoints will no longer work as intended, as they now deal with the zoomed out equivalent of the page in question. This is why explicitly setting the viewport width is so important. By writing content="width=device-width, initial-scale=1, shrink-to-fit=no", we are telling the browser the following: To set the viewport's width equal to whatever the actual device's screen width is. We do not want any zoom, initially. We do not wish to shrink the content to fit the viewport. For now, we will use the Bootstrap builds hosted on Bootstrap's official Content Delivery Network (CDN). This is done by including the following HTML tag into the head of your HTML document (the head of your HTML document refers to the contents between the <head> opening tag and the </head> closing tag): <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css"> Bootstrap relies on jQuery, a JavaScript framework that provides a layer of abstraction in an effort to simplify the most common JavaScript operations (such as element selection and event handling). Therefore, before we include the Bootstrap JavaScript file, we must first include jQuery. Both inclusions should occur just before the </body> closing tag: <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"> </script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script> Note that, while these scripts could, of course, be loaded at the top of the page, loading scripts at the end of the document is considered best practice to speed up page loading times and to avoid JavaScript issues preventing the page from being rendered. The reason behind this is that browsers do not download all dependencies in parallel (although a certain number of requests are made asynchronously, depending on the browser and the domain). Consequently, forcing the browser to download dependencies early on will block page rendering until these assets have been downloaded. Furthermore, ensuring that your scripts are loaded last will ensure that once you invoke Document Object Model (DOM) operations in your scripts, you can be sure that your page's elements have already been rendered. As a result, you can avoid checks that ensure the existence of given elements. What is a Content Delivery Network? 
The objective behind any Content Delivery Network (CDN) is to provide users with content that is highly available. This means that a CDN aims to provide you with content, without this content ever (or rarely) becoming unavailable. To this end, the content is often hosted using a large, distributed set of servers. The BootstrapCDN basically allows you to link to the Bootstrap style sheet so that you do not have to host it yourself. Save your changes and reload the index.html in your browser. The Hello World string should now contain a green background: Figure 1.5: Our "Hello World" styled using Bootstrap 4. Now that the Bootstrap framework has been included in our project, open your browser's developer console (if using Chrome on Microsoft Windows, press Ctrl + Shift + I. On Mac OS X you can press cmd + alt + I). As Bootstrap requires another third-party library, Tether for displaying popovers and tooltips, the developer console will display an error (Figure 1.6). Take a look at the following screenshot: Figure 1.6: Chrome's Developer Tools can be opened by going to View, selecting Developer and then clicking on Developer Tools. At the bottom of the page, a new view will appear. Under the Console tab, an error will indicate an unmet dependency. Tether is available via the CloudFare CDN, and consists of both a CSS file and a JavaScript file. As before, we should include the JavaScript file at the bottom of our document while we reference Tether's style sheet from inside our document head: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta http-equiv="x-ua-compatible" content="ie=edge"> <title>MyPhoto</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/css/tether.min.css"> </head> <body> <div class="alert alert-success"> Hello World! </div> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/js/tether.min.js"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script> </body> </html> While CDNs are an important resource, there are several reasons why, at times, using a third party CDN may not be desirable: CDNs introduce an additional point of failure, as you now rely on third-party servers. The privacy and security of users may be compromised, as there is no guarantee that the CDN provider does not inject malicious code into the libraries that are being hosted. Nor can one be certain that the CDN does not attempt to track its users. Certain CDNs may be blocked by the Internet Service Providers of users in different geographical locations. Offline development will not be possible when relying on a remote CDN. You will not be able to optimize the files hosted by your CDN. This loss of control may affect your website's performance (although typically you are more often than not offered an optimized version of the library through the CDN). Instead of relying on a CDN, we could manually download the jQuery, Tether, and Bootstrap project files. We could then copy these builds into our project root and link them to the distribution files. 
The disadvantage of this approach is the fact that maintaining a manual collection of dependencies can quickly become very cumbersome, and next to impossible as your website grows in size and complexity. As such, we will not manually download the Bootstrap build. Instead, we will let Bower do it for us. Bower is a package management system, that is, a tool that you can use to manage your website's dependencies. It automatically downloads, organizes, and (upon command) updates your website's dependencies. To install Bower, head over to http://bower.io/. How do I install Bower? Before you can install Bower, you will need two other tools: Node.js and Git. The latter is a version control tool—in essence, it allows you to manage different versions of your software. To install Git, head over to http://git-scm.com/and select the installer appropriate for your operating system. NodeJS is a JavaScript runtime environment needed for Bower to run. To install it, simply download the installer from the official NodeJS website: https://nodejs.org/ Once you have successfully installed Git and NodeJS, you are ready to install Bower. Simply type the following command into your terminal: npm install -g bower This will install Bower for you, using the JavaScript package manager npm, which happens to be used by, and is installed with, NodeJS. Once Bower has been installed, open up your terminal, navigate to the project root folder you created earlier, and fetch the bootstrap build: bower install bootstrap#v4.0.0-alpha.4 This will create a new folder structure in our project root: bower_components bootstrap Gruntfile.js LICENSE README.md bower.json dist fonts grunt js less package.js package.json We will explain all of these various files and directories later on in this book. For now, you can safely ignore everything except for the dist directory inside bower_components/bootstrap/. Go ahead and open the dist directory. You should see three sub directories: css fonts js The name dist stands for distribution. Typically, the distribution directory contains the production-ready code that users can deploy. As its name implies, the css directory inside dist includes the ready-for-use style sheets. Likewise, the js directory contains the JavaScript files that compose Bootstrap. Lastly, the fonts directory holds the font assets that come with Bootstrap. To reference the local Bootstrap CSS file in our index.html, modify the href attribute of the link tag that points to the bootstrap.min.css: <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css"> Let's do the same for the Bootstrap JavaScript file: <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script> Repeat this process for both jQuery and Tether. To install jQuery using Bower, use the following command: bower install jquery Just as before, a new directory will be created inside the bower_components directory: bower_components jquery AUTHORS.txt LICENSE.txt bower.json dist sizzle src Again, we are only interested in the contents of the dist directory, which, among other files, will contain the compressed jQuery build jquery.min.js. 
Reference this file by modifying the src attribute of the script tag that currently points to Google's jquery.min.js by replacing the URL with the path to our local copy of jQuery: <script src="bower_components/jquery/dist/jquery.min.js"></script> Last but not least, repeat the steps already outlined above for Tether: bower install tether Once the installation completes, a similar folder structure than the ones for Bootstrap and jQuery will have been created. Verify the contents of bower_components/tether/dist and replace the CDN Tether references in our document with their local equivalent. The final index.html should now look as follows: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta http-equiv="x-ua-compatible" content="ie=edge"> <title>MyPhoto</title> <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css"> <link rel="stylesheet" href="bower_components/tether/dist/css/tether.min.css"> </head> <body> <div class="alert alert-success"> Hello World! </div> <script src="bower_components/jquery/dist/jquery.min.js"></script> <script src="bower_components/tether/dist/js/tether.min.js"></script> <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script> </body> </html> Refresh the index.html in your browser to make sure that everything works. What IDE and browser should I be using when following the examples in this book? While we recommend a JetBrains IDE or Sublime Text along with Google Chrome, you are free to use whatever tools and browser you like. Our taste in IDE and browser is subjective on this matter. However, keep in mind that Bootstrap 4 does not support Internet Explorer 8 or below. As such, if you do happen to use Internet Explorer 8, you should upgrade it to the latest version. Summary Aside from introducing you to our sample project MyPhoto, this article was concerned with outlining Bootstrap 4, highlighting its features, and discussing how this new version of Bootstrap differs to the last major release (Bootstrap 3). The article provided an overview of how Bootstrap can assist developers in the layout, structuring, and styling of pages. Furthermore, we noted how Bootstrap provides access to the most important and widely used user interface controls through the form of components that can be integrated into a page with minimal effort. By providing an outline of Bootstrap, we hope that the framework's intrinsic value in assisting in the development of modern websites has become apparent to the reader. Furthermore, during the course of the wider discussion, we highlighted and explained some important concepts in web development, such as typographic units of measurement or the definition, purpose and justification of the use of Content Delivery Networks. Last but not least, we detailed how to include Bootstrap and its dependencies inside an HTML document.

Jailbreaking the iPad - in Ubuntu

Packt
20 Jul 2010
3 min read
(For more resources on Ubuntu, see here.)
What is jailbreaking?
Jailbreaking an iPhone or iPad allows you to run unsigned code by unlocking the root account on the device. Simply put, this allows you to install any software you like, without the restriction of having to go through the main Apple app store. Remember, jailbreaking is not SIM unlocking. Jailbreaking voids the Apple-supplied warranty.
What does this mean for developers?
The mass availability of jailbreaking for these devices allows developers to write apps without having to shell out Apple's developer fees. Previously a one-off payment of $300 US, an "official" developer must now pay $100 US each year to keep the right to develop applications.
What jailbreaks are available?
Arguably the most advanced jailbreak available now is called Spirit. Unlike a few others, which can now hack iOS 4.0, Spirit differs in a few key features. Not only is Spirit the first jailbreak able to handle the iPad, it is also an "untethered" jailbreak - you won't have to plug the device into a computer on every boot to "keep" it jailbroken. Support for jailbreaking iOS 4.0 is coming soon for Spirit. There are tutorials on jailbreaking using Spirit, like this one, but they generally skip over the fact that there's a Linux version, and only talk about Windows and/or OS X.
Jailbreaking the iPad
A very simple process, you can now jailbreak the iPad very quickly thanks to excellent device support and drivers in Ubuntu. Please note that from now on, you should only plug the device into iTunes 9 releases earlier than 9.2, or better still, just use Rhythmbox or gtkpod to manage your library.
Install git if you haven't already got it:
sudo apt-get install git
Clone the Spirit repository:
git clone http://github.com/posixninja/spirit-linux.git
Install the dev package for libimobiledevice:
sudo apt-get install libimobiledevice-dev
Enter the Spirit directory and build the program:
cd spirit-linux
make
I've noticed that, though Ubuntu has excellent Apple device support and you can mount these devices just fine, the jailbreak program won't detect the device without iFuse. Install this first:
sudo apt-get install ifuse
Now for the fun! Plug in your iPad (you'll see it under the Places menu) and run the jailbreak:
./spirit
You'll see output similar to this:
INFO: Retriving device list
INFO: Opening device
INFO: Creating lockdownd client
INFO: Starting AFC service
INFO: Sending files via AFC.
INFO: Found version iPad1,1_3.2
INFO: Read igor/map.plist
INFO: Sending "install"
INFO: Sending "one.dylib"
INFO: Sending "freeze.tar.xz"
INFO: Sending "bg.jpg"
INFO: Sending files complete
INFO: Creating lockdownd client
INFO: Starting MobileBackup service
INFO: Beginning restore process
INFO: Read resources/overrides.plist
DEBUG: add_file
DEBUG: Data size 922:
DEBUG: add_file
DEBUG: Data size 0:
DEBUG: start_restore
DEBUG: Sending file
DEBUG: Sending file
INFO: Completed restore
INFO: Completed successfully
The device will reboot, and if all went well, you'll see a new app called Cydia on the home screen. This is the app that allows you to install other apps. Open Cydia. Cydia will ask you to choose what kind of user you are. There's no harm in choosing Developer; you'll just see more information. Also, if you choose the bottom level (User), console packages like OpenSSH will be hidden from you. You'll also receive some updates; install them. Interestingly, Cydia uses the deb package format, just like Ubuntu. That's it! Wasn't that quick?
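If spirit exits complaining that it cannot find a device, it is worth confirming that Ubuntu can actually see and pair with the iPad before retrying. The following commands are a quick, hedged check built on the libimobiledevice and iFuse tools installed above; the libimobiledevice-utils package name and the ~/ipad mount point are assumptions you may need to adapt to your Ubuntu release.
sudo apt-get install libimobiledevice-utils   # idevice_id and ideviceinfo usually live here
idevice_id -l        # should print the device's UDID if it is detected
ideviceinfo | head   # basic device information confirms pairing works
mkdir -p ~/ipad      # create a mount point for iFuse
ifuse ~/ipad         # mount the device's media partition
ls ~/ipad            # you should see DCIM, Downloads, and so on
fusermount -u ~/ipad # unmount when you are done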

Building a two-way interactive chatbot with Twilio: A step-by-step guide

Gebin George
04 May 2018
4 min read
To build a chatbot that can communicate both ways we need to do two things: build the chatbot into the web app and modify setup configurations in Twilio. To do these, follow these steps: Create an index.js file in the root directory of the project. Install the express and body-parser libraries. These libraries will be used to make a web app: npm install body-parser --save npm install express --save Create a web app in index.js: // Two-way SMS Bot const express = require('express') const bodyParser = require('body-parser') const twilio = require('twilio') const app = express() app.set('port', (process.env.PORT || 5000)) Chapter 5 [ 185 ] // Process application/x-www-form-urlencoded app.use(bodyParser.urlencoded({extended: false})) // Process application/json app.use(bodyParser.json()) // Spin up the server app.listen(app.get('port'), function() { console.log('running on port', app.get('port')) }) // Index route app.get('/', function (req, res) { res.send('Hello world, I am SMS bot.') }) //Twilio webhook app.post('/sms/', function (req, res) { var botSays = 'You said: ' + req.body.Body; var twiml = new twilio.TwimlResponse(); twiml.message(botSays); res.writeHead(200, {'Content-Type': 'text/xml'}); res.end(twiml.toString()); }) The preceding code creates a web app that looks for incoming messages from users and responds to them. The response is currently to repeat what the user has said. Push it onto the cloud: git add . git commit -m webapp git push heroku master Now we have a web app on the cloud at https://ms-notification-bot.herokuapp.com/sms/ that can be called when an incoming SMS message arrives. This app will generate an appropriate chatbot response to the incoming message. Go to the Twilio Programmable SMS Dashboard page at https://www.twilio.com/ console/sms/dashboard. Select Messaging Services on the menu and click Create new Messaging Service: Give it a name and select Chat Bot/Interactive 2-Way as the use case: This will take you to the Configure page with a newly-assigned service ID: Under Inbound Settings, specify the URL of the web app we have created in the REQUEST URL field (that is, https://sms-notification-bot.herokuapp.com/sms/): Now all the inbound messages will be routed to this web app. Go back to the SMS console page at https://www.twilio com/console/sms/services. Here you will notice your new messaging service listed along with the inbound request URL: Click the service to attach a number to the service: You can either add a new number, in which case you need to buy one or choose the number you already have. We already have one sending notifications that can be reused. Click Add an Existing Number. Select the number by checking the box on the right and click Add Selected: Once added, it will be listed on the Numbers page as follows: In Advanced settings, we can add multiple numbers for serving different geographic regions and have them respond as if the chatbot is responding over a local number. The final step is to try sending an SMS message to the number and receive a response. Send a message using any SMS app on your phone and observe the response: Congratulations! You now have a two-way interactive chatbot. This tutorial is an excerpt from the book, Hands-On Chatbots and Conversational UI Development written by  Srini Janarthanam. If you found our post useful, do check out this book to get real-world examples of voice-enabled UIs for personal and home assistance. 
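As a small extension of the tutorial above, the sketch below swaps the echo logic in the /sms/ route for a simple keyword check. It reuses the twilio.TwimlResponse helper from the excerpt, which belongs to the older version of the twilio Node library used there (newer releases expose twilio.twiml.MessagingResponse instead), and the keywords are purely illustrative.
// A slightly smarter reply handler, replacing the /sms/ route above
app.post('/sms/', function (req, res) {
  var incoming = (req.body.Body || '').trim().toLowerCase()
  var botSays
  if (incoming === 'hi' || incoming === 'hello') {
    botSays = 'Hello! Text HELP to see what I can do.'
  } else if (incoming === 'help') {
    botSays = 'Try texting HI, or anything else to hear it echoed back.'
  } else {
    botSays = 'You said: ' + req.body.Body
  }
  var twiml = new twilio.TwimlResponse()
  twiml.message(botSays)
  res.writeHead(200, {'Content-Type': 'text/xml'})
  res.end(twiml.toString())
})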
How to build a basic server side chatbot using Go
Build a generative chatbot using recurrent neural networks (LSTM RNNs)
Top 4 chatbot development frameworks for developers

Blogs and Forums using Plone 3

Packt
28 May 2010
11 min read
Blogs and forums have much to offer in a school setting. They help faculty and students communicate despite large class sizes. They engage students in conversations with each other. And they provide an easy way for instructors and staff members to build their personal reputations—and, thereby, the reputation of your institution. In this article, we consider how best to build blogs and forums in Plone. Along the way, we cite education-domain examples and point out tips for keeping your site stable and your users smiling. Plone's blogging potential Though Plone wasn't conceived as a blogging platform, its role as a full-fledged content management system gives it all the functionality of a blog and more. With a few well-placed tweaks, it can present an interface that puts users of other blogging packages right at home while letting you easily maintain ties between your blogs and the rest of your site. Generally speaking, blog entries are… Prominently labeled by date and organized in reverse chronological order Tagged by subject Followed by reader comments Syndicated using RSS or other protocols Plone provides all of these, with varying degrees of polish, out of the box: News items make good blog entries, and the built-in News portlet lists the most recent few, in reverse chronological order and with publication dates prominently shown. A more comprehensive, paginated list can easily be made using collections. Categories are a basic implementation of tags. Plone's built-in commenting can work on any content type, News Items included. Every collection has its own RSS feed. Add-on products: free as in puppies In addition to Plone's built-in tools, this article will explore the capabilities of several third-party add-ons. Open-source software is often called "free as in beer" or "free as in freedom". As typical of Plone add-ons, the products we will consider are both. However, they are also "free as in puppies". Who can resist puppies? They are heart-meltingly cute and loads of fun, but it's easy to forget, when their wet little noses are in your face, that they come with responsibility. Likewise, add-ons are free to install and use, but they also bring hidden costs: Products can hold you back. If you depend on one that doesn't support a new version of Plone, you'll face a choice between the product and the Plone upgrade. This situation is most likely at major version boundaries: for example, upgrading from Plone 3.x to Plone 4. Minor upgrades, as from Plone 3.2 to 3.3, should be fairly uneventful. (This was not always true with Plone 2.x, but release numbering has since gotten a dose of sanity.) One place products often fall short is uninstallation. It takes care to craft a quality uninstallation routine; low-quality or prerelease products sometimes fail to uninstall cleanly, leaving bits of themselves scattered throughout your site. They can even prevent your site from displaying any pages at all (often due to leaving remnants in portal_actions), and you may have to repair things by hand through the ZMI or, failing that, through an afternoon of fun with the Python debugger. The moral: even trying a product can be a risk. Test installation and uninstallation on a copy of your site before committing to one, and back up your Data.fs file before installing or uninstalling on production servers. Pace of work varies widely. Reporting a bug against an actively developed product might get you a new release within the week. 
Hitting a bug in an abandoned one could leave you fixing it yourself or paying someone else to. (Fortunately, there are scads of Plone consultants for hire in the #plone IRC channel and on the plone-users mailing list.) In addition to the above, products that add new content types (like blog entries, for instance) bring a risk of lock-in proportional to the amount of content you create with them. If a product is abandoned by its maintainer or you decide to stop using it for some other reason, you will need to migrate its content into some other type, either by writing custom scripts or by copying and pasting. These considerations are major drivers of this article's recommendations. For each of the top three Plone blogging strategies, we'll outline its capabilities, tick off its pros and cons, and estimate how high-maintenance a puppy it will be. Remember, even though puppies can be some work, a well-chosen and well-trained one becomes a best friend for life. News Items: blogging for the hurried or risk-averse Using news items as blog entries is, in true Extreme Programming style, "the simplest thing that could possibly work". Nonetheless, it's a surprisingly flexible practice and will disappoint only if you need features like pings, trackbacks, and remote editor integration. Here is an example front page of a Plone blog built using only news items, collections, and the built-in portlets: Structure of a news-item blog A blog in Plone can be as simple as a folder full of News Items, further organized into subfolders if necessary. Add a collection showing the most recent News Items to the top-level folder, and set it as its default page. As illustrated below, use an Item Type criterion for the collection to pull in the News Items, and use a Location criterion to exclude those created outside the blog folder: To provide pagination—recommended once the length of listings starts to noticeably impact download or render timetime—use the Limit Search Results option on the collection. One inconsistency is that only the Summary and Tabular Views on collections support pagination; Standard View (which shows the same information) does not. This means that Summary View, which sports a Read more link and is a bit more familiar to most blog users, is typically a good choice. Go easy on the pagination More items displayed per page is better. User tests on prototypes of gap.com's online store have suggested that, at least when selling shirts, more get sold when all are on one big page. Perhaps it's because users are faced with a louder mental "Continue or leave?" when they reach the end of a page. Regardless, it's something to consider when setting page size using a collection's Number of Items setting; you may want to try several different numbers and see how it affects the frequency with which your listing pages show up as "exit pages" in a web analytics package like AWStats. As a starting point, 50 is a sane choice, assuming your listings show only the title and description of each entry (as the built-in views do). The ideal number will be a trade-off between tempting visitors to leave with page breaks and keeping load and render times tolerable.   Finally, make sure to sort the entries by publication date. 
Set this up on the front-page collection's Criteria tab by selecting Effective Date and reversing the display order: As with all solutions in this article, a blog built on raw News Items can easily handle either single- or multi-author scenarios; just assign rights appropriately on the Sharing tab of the blog folder. News Item pros and cons Unadorned News Items are a great way to get started fast and confer practically zero upgrade risk, since they are maintained as part of Plone itself. However, be aware of these pointy edges you might bang into when using them as blog entries: With the built-in views, logged-out users can't see the authors or the publication dates of entries. Even logged-in users see only the modification dates unless they go digging through the workflow history. Categories applied to a News Item appear on its page, but clicking them takes you to a search for all items (both blog-related and otherwise) having that category. This could be a bug or a feature, depending on your situation. However, the ordering of the search results is unpredictable, and that is definitely unhelpful. The great thing about plain News Items is that there's a forward migration path. QuillsEnabled, which we'll explore later, can be layered atop an existing news-item-based blog with no migrations necessary and removed again if you decide to go back. Thus, a good strategy may be to start simple, with plain news items, and go after more features (and risk) as the need presents itself. Scrawl: a blog with a view One step up from plain News Items is Scrawl, a minimalist blog product that adds only two things: A custom Blog Entry type, which is actually just a copy of News Item. A purpose-built Blog view that can be applied to folders or collections, which are otherwise used just as with raw News Items. Here are both additions in action: Scrawl's Blog Entry isn't quite a verbatim copy of News Item; Scrawl makes a few tweaks: Commenting is turned on for new Blog Entries, without which authors would have to enable it manually each time. The chances of that happening are slim, since it's buried on the Edit → Settings tab, and users seldom stray from the default tab when editing. Blog Entry's default view is a slightly modified version of News Item's: it shows the author's name and the posting date even to unauthenticated users—and in a friendly "Posted by Fred Finster" format. It also adds a Permalink link, lest you forfeit crosslinks from users who know no other way of finding an entry's address. Calm your ringing phone by cloning types Using a custom content type for blog entries—even if it's just a copy of an existing one—has considerable advantages. For one, you can match contributors' vocabulary: assuming contributors think of part of your site as a blog (which they probably will if the word "blog" appears anywhere onscreen), they won't find it obvious to add "news items" there. Adding a "blog entry," on the other hand, lines up naturally with their expectations. This little trick, combined with judicious use of the Add new… → Restrictions… feature to pare down their options, will save hours of your time in training and support calls. A second advantage of a custom type is that it shows separately in Plone's advanced search. Visitors, like contributors, will identify better with the "blog entry" nomenclature. Plus, sometimes it's just plain handy to limit searches to only blogs. 
This type-cloning technique isn't limited to blog entries; you can clone and rename any content type: just visit portal_types in the ZMI, copy and paste a type, rename it, and edit its Title and Description fields. One commonly cloned type is File. Many contributors, even experts in noncomputer domains, aren't familiar with the word file. Cloning it to create PDF File, Word Document, and so on can go a long way toward making them comfortable using Plone.   Pros and cons of scrawl Scrawl's biggest risk is lock-in: since it uses its own Blog Entry content type to store your entries, uninstalling it leaves them inaccessible. However, because the Blog Entry type is really just the News Item type, a migration script is easy to write: # Turn all Blog Entries in a Plone site into News Items. # # Run by adding a "Script (Python)" in the ZMI (it doesn't matter where) and pasting this in. from Products.CMFCore.utils import getToolByName portal_catalog = getToolByName(context, 'portal_catalog') for brain in portal_catalog(portal_type='Blog Entry'): blog_entry = brain.getObject() # Get the actual blog entry from # the catalog entry. blog_entry.portal_type = 'News Item' # Update the catalog so searches see the new info: blog_entry.reindexObject() The reverse is also true: if you begin by using raw News Items and decide to switch to Scrawl, you'll need the reverse of the above script—just swap 'News Item' and 'Blog Entry'. If you have news items that shouldn't be converted to blog entries, your catalog query will have to be more specific, perhaps adding a path keyword argument, as in portal_catalog(portal_type='News Item', path='/my-plonesite/ blog-folder'). Aside from that, Scrawl is pretty risk-free. Its simplicity makes it unlikely to accumulate showstopping bugs or to break in future versions of Plone, and, if it does, you can always migrate back to news items or, if you have some programming skill, maintain it yourself—it's only 1,000 lines of code.
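The reverse migration mentioned above—switching from raw News Items to Scrawl—is simply the earlier script with the two type names swapped, plus the path filter suggested in the previous paragraph so that news items outside the blog folder are left alone. The folder path below is a placeholder; substitute your own site and blog folder IDs.
# Turn News Items inside a specific blog folder into Scrawl Blog Entries.
#
# Run by adding a "Script (Python)" in the ZMI and pasting this in.
from Products.CMFCore.utils import getToolByName
portal_catalog = getToolByName(context, 'portal_catalog')
for brain in portal_catalog(portal_type='News Item',
                            path='/my-plonesite/blog-folder'):
    news_item = brain.getObject()  # Get the actual news item from
                                   # the catalog entry.
    news_item.portal_type = 'Blog Entry'
    # Update the catalog so searches see the new info:
    news_item.reindexObject()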

Unit Testing in .NET Core with Visual Studio 2017 for better code quality

Kunal Chaudhari
07 May 2018
12 min read
The famous Java programmer, Bruce Eckel, came up with a slogan which highlights the importance of testing software: If it ain't tested, it's broken. Though a confident programmer may challenge this, it beautifully highlights the ability to determine that code works as expected over and over again, without any exception. How do we know that the code we are shipping to the end user or customer is of good quality and all the user requirements would work? By testing? Yes, by testing, we can be confident that the software works as per customer requirements and specifications. If there are any discrepancies between expected and actual behavior, it is referred to as a bug/defect in the software. The earlier the discrepancies are caught, the more easily they can be fixed before the software is shipped, and the results are good quality. No wonder software testers are also referred to as quality control analysts in various software firms. The mantra for a good software tester is: In God we trust, the rest we test. In this article, we will understand the testing deployment model of .NET Core applications, the Live Unit Testing feature of Visual Studio 2017. We will look at types of testing methods briefly and write our unit tests, which a software developer must write after writing any program. Software testing is conducted at various levels: Unit testing: While coding, the developer conducts tests on a unit of a program to validate that the code they have written is error-free. We will write a few unit tests shortly. Integration testing: In a team where a number of developers are working, there may be different components that the developers are working on. Even if all developers perform unit testing and ensure that their units are working fine, there is still a need to ensure that, upon integration of these components, they work without any error. This is achieved through integration testing. System testing: The entire software product is tested as a whole. This is accomplished using one or more of the following: Functionality testing: Test all the functionality of the software against the business requirement document. Performance testing: To test how performant the software is. It tests the average time, resource utilization, and so on, taken by the software to complete a desired business use case. This is done by means of load testing and stress testing, where the software is put under high user and data load. Security testing: Tests how secure the software is against common and well-known security threats. Accessibility testing: Tests if the user interface is accessible and user-friendly to specially-abled people or not. User acceptance testing: When the software is ready to be handed over to the customer, it goes through a round of testing by the customer for user interaction and response. Regression testing: Whenever a piece of code is added/updated in the software to add a new functionality or fix an existing functionality, it is tested to detect if there are any side-effects from the newly added/updated code. Of all these different types of testing, we will focus on unit testing, as it is done by the developer while coding the functionality. Unit testing .NET Core has been designed with testability in mind. .NET Core 2.0 has unit test project templates for VB, F#, and C#. We can also pick the testing framework of our choice amongst xUnit, NUnit, and MSTest. Unit tests that test single programming parts are the most minimal-level tests. 
Unit tests should just test code inside the developer's control, and ought to not test infrastructure concerns, for example, databases, filesystems, or network resources. Unit tests might be composed utilizing test-driven development (TDD) or they can be added to existing code to affirm its accuracy. The naming convention of Test class names should end with Test and reside in the same namespace as the class being tested. For instance, the unit tests for the Microsoft.Example.AspNetCore class would be in the Microsoft.Example.AspNetCoreTest class in the test assembly. Also, unit test method names must be descriptive about what is being tested, under what conditions, and what the expectations are. A good unit test has three main parts to it in the following specified order: Arrange Act Assert We first arrange the code and then act on it and then do a series of asserts to check if the actual output matches the expected output. Let's have a look at them in detail: Arrange: All the parameter building, and method invocations needed for making a call in the act section must be declared in the arrange section. Act: The act stage should be one statement and as simple as possible. This one statement should be a call to the method that we are trying to test. Assert: The only reason method invocation may fail is if the method itself throws an exception, else, there should always be some state change or output from any meaningful method invocation. When we write the act statement, we anticipate an output and then do assertions if the actual output is the same as expected. If the method under test should throw an exception under normal circumstances, we can do assertions on the type of exception and the error message that should be thrown. We should be watchful while writing unit test cases, that we don't inject any dependencies on the infrastructure. Infrastructure dependencies should be taken care of in integration test cases, not in unit tests. We can maintain a strategic distance from these shrouded dependencies in our application code by following the Explicit Dependencies Principle and utilizing Dependency Injection to request our dependencies on the framework. We can likewise keep our unit tests in a different project from our integration tests and guarantee our unit test project doesn't have references to the framework. Testing using xUnit In this section, we will learn to write unit and integration tests for our controllers. There are a number of options available to us for choosing the test framework. We will use xUnit for all our unit tests and Moq for mocking objects. Let's create an xUnit test project by doing the following: Open the Let's Chat project in Visual Studio 2017 Create a new folder named  Test Right-click the Test folder and click Add | New Project Select xUnit Test Project (.NET Core) under Visual C# project templates, as shown here: Delete the default test class that gets created with the template Create a test class inside this project AuthenticationControllerUnitTests for the unit test We need to add some NuGet packages. Right-click the project in VS 2017 to edit the project file and add the references manually, or use the NuGet Package Manager to add these packages: // This package contains dependencies to ASP.NET Core <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /> // This package is useful for the integration testing, to build a test host for the project to test. 
<PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" /> // Moq is used to create fake objects <PackageReference Include="Moq" Version="4.7.63" /> With this, we are now ready to write our unit tests. Let's start doing this, but before we do that, here's some quick theory about xUnit and Moq. The documentation from the xUnit website and Wikipedia tells us that xUnit.net is a free, open source, community-focused unit testing tool for the .NET Framework. It is the latest technology for unit testing C#, F#, Visual Basic .NET, and other .NET languages. All xUnit frameworks share the following basic component architecture: Test runner: It is an executable program that runs tests implemented using an xUnit framework and reports the test results. Test case: It is the most elementary class. All unit tests are inherited from here. Test fixtures: Test fixures (also known as a test context) are the set of preconditions or state needed to run a test. The developer should set up a known good state before the tests, and return to the original state after the tests. Test suites: It is a set of tests that all share the same fixture. The order of the tests shouldn't matter. xUnit.net includes support for two different major types of unit test: Facts: Tests which are always true. They test invariant conditions, that is, data-independent tests. Theories: Tests which are only true for a particular set of data. Moq is a mocking framework for C#/.NET. It is used in unit testing to isolate the class under test from its dependencies and ensure that the proper methods on the dependent objects are being called. Recall that in unit tests, we only test a unit or a layer/part of the software in isolation and hence do not bother about external dependencies, so we assume they work fine and just mock them using the mocking framework of our choice. Let's put this theory into action by writing a unit test for the following action in AuthenticationController: public class AuthenticationController : Controller { private readonly ILogger<AuthenticationController> logger; public AuthenticationController(ILogger<AuthenticationController> logger) { this.logger = logger; } [Route("signin")] public IActionResult SignIn() { logger.LogInformation($"Calling {nameof(this.SignIn)}"); return Challenge(new AuthenticationProperties { RedirectUri = "/" }); } } The unit test code depends on how the method to be tested is written. To understand this, let's write a unit test for a SignIn action. To test the SignIn method, we need to invoke the SignIn action in AuthenticationController. To do so, we need an instance of the AuthenticationController class, on which the SignIn action can be invoked. To create the instance of AuthenticationController, we need a logger object, as the AuthenticationController constructor expects it as a parameter. Since we are only testing the SignIn action, we do not bother about the logger and so we can mock it. Let's do it: /// <summary> /// Authentication Controller Unit Test - Notice the naming convention {ControllerName}Test /// </summary> public class AuthenticationControllerTest { /// <summary> /// Mock the dependency needed to initialize the controller. /// </summary> private Mock<ILogger<AuthenticationController>> mockedLogger = new Mock<ILogger<AuthenticationController>>(); /// <summary> /// Tests the SignIn action. /// </summary> [Fact] public void SignIn_Pass_Test() { // Arrange - Initialize the controller. Notice the mocked logger object passed as the parameter. 
var controller = new AuthenticationController(mockedLogger.Object); // Act - Invoke the method to be tested. var actionResult = controller.SignIn(); // Assert - Make assertions if actual output is same as expected output. Assert.NotNull(actionResult); Assert.IsType<ChallengeResult>(actionResult); Assert.Equal(((ChallengeResult)actionResult). Properties.Items.Count, 1); } } Reading the comments would explain the unit test code. The previous example shows how easy it is to write a unit test. Agreed, depending on the method to be tested, things can get complicated. But it is likely to be around mocking the objects and, with some experience on the mocking framework and binging around, mocking should not be a difficult task. The unit test for the SignOut action would be a bit complicated in terms of mocking as it uses HttpContext. The unit test for the SignOut action is left to the reader as an exercise. Let's explore a new feature introduced in Visual Studio 2017 called Live Unit Testing. Live Unit Testing It may disappoint you but Live Unit Testing (LUT) is available only in the Visual Studio 2017 Enterprise edition and not in the Community edition. What is Live Unit Testing? It's a new productivity feature, introduced in the Visual Studio 2017 Enterprise edition, that provides real-time feedback directly in the Visual Studio editor on how code changes are impacting unit tests and code coverage. All this happens live, while you write the code and hence it is called Live Unit Testing. This will help in maintaining the quality by keeping tests passing as changes are made. It will also remind us when we need to write additional unit tests, as we are making bug fixes or adding features. To start Live Unit Testing: Go to the Test menu item Click Live Unit Testing Click Start, as shown here: On clicking this, your CPU usage may go higher as Visual Studio spawns the MSBuild and tests runner processes in the background. In a short while, the editor will display the code coverage of the individual lines of code that are covered by the unit test. The following image displays the lines of code in AuthenticationController that are covered by the unit test. On clicking the right icon, it displays the tests covering this line of code and also provides the option to run and debug the test: Similarly, if we open the test file, it will show the indicator there as well. Super cool, right! If we navigate to Test|Live Unit Testing now, we would see the options to Stop and Pause. So, in case we wish to save  our resources after getting the data once, we can pause or stop Live Unit Testing: There are numerous icons which indicates the code coverage status of individual lines of code. These are: Red cross: Indicates that the line is covered by at least one failing test Green check mark: Indicates that the line is covered by only passing tests Blue dash: Indicates that the line is not covered by any test If you see a clock-like icon just below any of these icons, it indicates that the data is not up to date. With this productivity-enhancing feature, we conclude our discussion on basic unit testing. Next, we will learn about containers and how we can do the deployment and testing of our .NET Core 2.0 applications in containers. To summarize, we learned the importance of testing and how we can write unit tests using Moq and xUnit. We saw a new productivity-enhancing feature introduced in Visual Studio 2017 Enterprise edition called Live Unit Testing and how it helps us write better-quality code. 
You read an excerpt from a book written by Rishabh Verma and Neha Shrivastava, titled  .NET Core 2.0 By Example. This book will help you build cross-platform solutions with .NET Core 2.0 through real-life scenarios.   More on Testing: Unit Testing and End-To-End Testing Testing RESTful Web Services with Postman Selenium and data-driven testing: Interview insights
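One thing the excerpt above describes but does not show is an xUnit Theory, the data-driven counterpart to the [Fact] test written for the SignIn action. The sketch below is a hypothetical, self-contained example—the Calculator class and its Add method are invented purely for illustration and are not part of the Let's Chat project—showing how [Theory] and [InlineData] run the same test body once per data row.
using Xunit;

public class CalculatorTest
{
    // A Theory runs once per InlineData row, which suits data-dependent tests.
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-5, 5, 0)]
    [InlineData(0, 0, 0)]
    public void Add_ReturnsExpectedSum_Test(int first, int second, int expected)
    {
        // Arrange - create the class under test.
        var calculator = new Calculator();

        // Act - invoke the method to be tested.
        var actual = calculator.Add(first, second);

        // Assert - actual output should match the expected output.
        Assert.Equal(expected, actual);
    }
}

// Hypothetical class under test, included only so the sketch is self-contained.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}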

5 things to consider when developing an eCommerce website

Johti Vashisht
11 Apr 2018
7 min read
Online businesses are booming and rightly so – this year it is expected that 18% of all UK retail purchases will occur online this year. That's partly because eCommerce website development has got easy - almost anyone can do it. But hubris might be your downfall; there are a number of important things to consider before you start to building your eCommerce website. This is especially true if you want customers to keep coming back to your site. We’ve compiled a list of things to keep in mind for when you are ready to build an eCommerce store. eCommerce website development begins with the right platform and brilliant Design Platform Before creating your eCommerce website, you need to decide which platform to create the website on. There are a variety of content management systems including WordPress, Joomla and Magento. Wordpress is a versatile and easy to use platform which also supports a large number of plugins so it may be suitable if you are offering services or only a few products. Platforms such as Magento have been created specifically for eCommerce use. If you are thinking of opening up an online store with many products then Magento is the best option as it is easier to manage your products. Design When designing your website, use a clean, simple design rather than one with too many graphics and incorporate clear call to actions. Another thing to take into account is whether you want to create your own custom theme or choose a preselected theme and build upon it. Although it can be pricier, a custom theme allows you to add custom functionality to your website that a standard pre-made theme may not have. In contrast, pre-made themes will be much cheaper or in most cases free. If you are choosing a pre-made theme, then be sure to check that it is regularly updated and that they have support contact details in case of any queries. Your website design should also be responsive so that your website can be viewed correctly across multiple platforms and operating systems. Your eCommerce website needs to be secure A secure website is beneficial for both you and your customers. With a growing number of websites being hacked and data being stolen, security is the one part of a website you cannot skip out on. An SSL (Secure Sockets Layer) certificate is essential to get for your website, as not only does it allow for a secure connection over which personal data can be transmitted, it also provides authentication so that customers know it’s safe to make purchases on your website.  SSL certificates are mandatory if you collect private information from customers via forms. HTTPS – (Hyper Text Transfer Protocol Secure) is an encrypted standard for website client communications. In order to for HTTP to become HTTPS, data is wrapped into secure SSL packets before being sent and after receiving the data. As well as securing data, HTTPS may also be used for search ranking purposes. If you utilise HTTPS, you will have a slight boost in ranking over competitor websites that do not utilise HTTPS. eCommerce plugins make adding features to your site easier If you have decided to use Wordpress to create your eCommerce website then there are a number of eCommerce plugins available to help you create your online store. Top eCommerce plugins include WooCommerce, Shopify, Shopp and Easy Digital Downloads. 
SEO attracts organic traffic to your eCommerce site

If you want potential customers to see your products before those of competitors, then optimising your website pages will help you reach the first page of search results. Undertake keyword research to find the words that potential customers most commonly use to find the products you offer. Google's Keyword Planner is quite helpful in managing your keyword research. You can then add relevant words to your product names and descriptions. Revisit these keywords occasionally to update them and experiment with which keywords work better. You can improve your rankings with good page titles that include relevant keywords. Although meta descriptions do not improve ranking, it's good to add useful meta descriptions, as a better description may draw more clicks. Also ensure that product URLs mirror what the product is and aren't unnecessarily long.

Other things to consider when building an eCommerce website

You may wish to consider additional features in order to increase your chance of returning visitors:

Site speed: If your website is slow, it's likely that customers will not return for a repeat purchase if it takes too long for a product page to load. They'll simply visit a competitor website that loads much faster. There are a few things you can do to speed up your website, including caching and using in-memory technology for certain things rather than constantly accessing the database. You could also use fast hosting servers to meet traffic requirements. Site speed is also an important SEO consideration.

Guest checkout: 23% of shoppers will abandon their shopping basket if they are forced to register an account. Make it easier for customers to purchase items with a guest checkout. Some customers may not wish to create an account as they may be short of time. Create a smooth, quick transaction process by adding the option of a guest checkout. Once they have completed their checkout, you can ask them if they would like to create an account.

Site search: Utilise search functionality to allow users to search for products, with the ability to filter products through a variety of options (if applicable).

Pain points: Address potential concerns customers may have before purchasing your products by displaying information they may have concerns or queries about. This can include delivery options and whether free returns are offered.

Mobile optimization: In 2017, almost 59% of eCommerce sales occurred via mobile. An increasing number of users now shop online using their smartphones, and this trend will most likely grow. That's why optimising your website for mobile is a must.

User-generated reviews and testimonials: Use social proof on your website with user reviews and testimonials. If a potential customer reads customer reviews, they are more likely to purchase a product. However, user-generated reviews can go both ways – a user may also post a negative review, which may not be good for your website or online store.

Related items: Showing related items under a product is useful for customers who are looking for an item but may not have decided what type of that particular product they want. This is also useful for when the main product is out of stock.

FAQs section: Creating an FAQ section with common questions is very useful and saves both the customer and the company time, as basic queries can be answered by looking at the FAQ page.

If you're starting out, good luck!
Yes, in 2018 eCommerce website development is pretty easy thanks to the likes of Shopify, WooCommerce, and Magento, among others. But as you can see, there's plenty you need to consider. By incorporating most of these points, you will be able to create an eCommerce website that users can navigate easily to find the products or services they are looking for.


Bootstrap and Angular: Saying Hello!

Packt
04 Oct 2016
6 min read
In this article by Sergey Akopkokhyants, author of the book Learning Web Development with Bootstrap and Angular (Second Edition), we will establish a development environment for the simplest application possible. (For more resources related to this topic, see here.)

Development environment setup

It's time to set up your development environment. This process is one of the most overlooked and often frustrating parts of learning to program, because developers don't want to think about it. Developers must know the nuances of how to install and configure many different programs before they start real development. Everyone's computer is different; as a result, the same setup may not work on your computer. We will expose and eliminate all of these problems by defining the various pieces of the environment you need to set up.

Defining the shell

The shell is a required part of your software development environment. We will use the shell to install software and to run the commands that build and start the web server that brings your web project to life. If your computer runs the Linux operating system, then you will use the shell called Terminal. There are many Linux-based distributions out there that use diverse desktop environments, but most of them use the same keyboard shortcut to open the Terminal. Use the keyboard shortcut Ctrl + Alt + T to open the Terminal on Linux. If you have a Mac computer running OS X, then you will use the Terminal shell as well. Use the keyboard shortcut Command + Space to open Spotlight, then type Terminal to search for it and run it. If you have a computer running the Windows operating system, you can use the standard command prompt, but we can do better. In a minute I will show you how to install Git on your computer, and you will get Git Bash for free. You can open a Terminal with the Git Bash shell program on Windows. I will use the Bash shell for all exercises in this book whenever I need to work in the Terminal.

Installing Node.js

Node.js is the technology we will use as a cross-platform runtime environment for running server-side web applications. It is a combination of a native, platform-independent runtime based on Google's V8 JavaScript engine and a huge number of modules written in JavaScript. Node.js ships with different connectors and libraries that help you use HTTP, TLS, compression, file system access, raw TCP and UDP, and more. You, as a developer, can write your own modules in JavaScript and run them inside the Node.js engine. The Node.js runtime makes it easy to build networked, event-driven application servers. The terms package and library are synonymous in JavaScript, so we will use them interchangeably. Node.js uses the JavaScript Object Notation (JSON) format widely for data exchange between the server and client sides, because it is easy to parse and avoids the complexities of XML, SOAP, and other data exchange formats. You can also use Node.js for the development of service-oriented applications that do something different than web servers. One of the most popular service-oriented applications is the Node Package Manager (NPM), which we will use to manage library dependencies and deployment systems, and which underlies many platform-as-a-service (PaaS) providers for Node.js. If you do not have Node.js installed on your computer, you should download the pre-built installer from https://nodejs.org/en/download. You can start to use Node.js immediately after installation.
Open the Terminal and type:

node --version

Node.js must respond with the version number of the installed runtime:

v4.4.3

Setting up NPM

NPM is a package manager for JavaScript. You can use it to find, share, and reuse packages of code from many developers across the world. The number of packages grows dramatically every day and is now more than 250K. NPM is the Node.js package manager and uses Node.js to run itself. NPM is included in the Node.js setup bundle and is available right after installation. Open the Terminal and type:

npm --version

NPM must answer your command with its version number:

2.15.1

The following command gives us information about the Node.js and NPM install:

npm config list

There are two ways to install NPM packages: locally or globally. In cases when you would like to use the package as a tool, it's better to install it globally:

npm install --global <package_name>

If you need to find the folder with globally installed packages, you can use the next command:

npm config get prefix

Installing global packages is important, but it's best avoided if not needed. Mostly you will install packages locally:

npm install <package_name>

You may find locally installed packages in the node_modules folder of your project.

Installing Git

You have missed a lot if you are not familiar with Git. Git is a distributed version control system, and each Git working directory is a full-fledged repository. It keeps the complete history of changes and has full version tracking capabilities. Each repository is entirely independent of network access or a central server. You can install Git on your computer via a set of pre-built installers available on the official website https://git-scm.com/downloads. After installation, you can open the Terminal and type:

git --version

Git must respond with its version number:

git version 2.8.1.windows.1

As I said, for developers who use computers running the Windows operating system, you now have Git Bash on your system for free.

Code editor

You can imagine how many programs for code editing exist, but today we will talk only about Visual Studio Code from Microsoft, which is free, open source, and runs everywhere. You can use any program you prefer for development, but I will use only Visual Studio Code in our future exercises, so please install it from http://code.visualstudio.com/Download.

Summary

In this article, we learned about the shell concept, how to install Node.js and Git, and how to set up node packages.

Resources for Article:

Further resources on this subject:

Gearing Up for Bootstrap 4 [article]
API with MongoDB and Node.js [article]
Mapping Requirements for a Modular Web Shop App [article]


Gain Practical Expertise with the Latest Edition of Software Architecture with C# 9 and .NET 5 

Expert Network
08 Jul 2021
3 min read
Software architecture is one of the most discussed topics in the software industry today, and its importance will certainly grow in the future. The speed at which new features are added to software solutions keeps increasing, and new architectural opportunities keep emerging. To strengthen your command of this subject, Packt brings you the second edition of Software Architecture with C# 9 and .NET 5 by Gabriel Baptista and Francesco Abbruzzese – a fully revised and expanded guide featuring the latest features of .NET 5 and C# 9.

This book covers the most common design patterns and frameworks involved in modern cloud-based and distributed software architectures. It discusses when and how to use each pattern by providing practical, real-world scenarios. The book also presents techniques and processes such as DevOps, microservices, Kubernetes, continuous integration, and cloud computing, so that you can have a best-in-class software solution developed and delivered for your customers.

This book will help you to understand the product that your customer wants from you. It will guide you to deliver and solve the biggest problems you can face during development. It also covers the dos and don'ts that you need to follow when you manage your application in a cloud-based environment. You will learn about different architectural approaches, such as layered architectures, service-oriented architecture, microservices, single-page applications, and cloud architecture, and understand how to apply them to specific business requirements.

Finally, you will deploy code in remote environments or on the cloud using Azure. All the concepts in this book are explained with the help of real-world practical use cases where design principles make the difference when creating safe and robust applications. By the end of the book, you will be able to develop and deliver highly scalable and secure enterprise-ready applications that meet the end customers' business needs.

It is worth mentioning that Software Architecture with C# 9 and .NET 5, Second Edition will not only cover the best practices that a software architect should follow for developing C# and .NET Core solutions, but will also discuss all the environments that we need to master in order to develop a software product according to the latest trends.

This second edition has improved code and is adapted to the new opportunities offered by C# 9 and .NET 5. It adds new frameworks and technologies such as gRPC and Blazor, and describes Kubernetes in more detail in a dedicated chapter.

To get the most out of this book, treat it as guidance that you may want to revisit many times for different circumstances. Do not forget to have Visual Studio Community 2019 or higher installed, and be sure that you understand C# and .NET principles.

Wasmer's first Postgres extension to run WebAssembly is here!

Vincy Davis
30 Aug 2019
2 min read
Wasmer, the WebAssembly runtime, has been successfully embedded in many languages like Rust, Python, Ruby, PHP, and Go. Yesterday, Ivan Enderlin, a PhD computer scientist at Wasmer, announced version 0.1 of a new Postgres extension for WebAssembly. Since the project is still under heavy development, the extension only supports integers (on 32 and 64 bits) and works on Postgres 10 only. It does not support strings, records, views, or any other Postgres types yet.

https://twitter.com/WasmWeekly/status/1167330724787171334

The official post states, “The goal is to gather a community and to design a pragmatic API together, discover the expectations, how developers would use this new technology inside a database engine.”

The Postgres extension provides two foreign data wrappers, wasm.instances and wasm.exported_functions, in the wasm foreign schema. wasm.instances is a table with id and wasm_file columns. wasm.exported_functions is a table with instance_id, name, inputs, and outputs columns. Enderlin says that this information is enough for the wasm Postgres extension “to generate the SQL function to call the WebAssembly exported functions.”

Read Also: Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

The Wasmer team ran a basic benchmark computing the Fibonacci sequence to compare the execution time between WebAssembly and PL/pgSQL. The benchmark was run on a MacBook Pro 15" from 2016, with a 2.9 GHz Core i7 and 16 GB of memory.

Image Source: Wasmer

The result was that “Postgres WebAssembly extension is faster to run numeric computations. The WebAssembly approach scales pretty well compared to the PL/pgSQL approach, in this situation.” The Wasmer team believes that though it is too soon to consider WebAssembly as an alternative to PL/pgSQL, the above result makes them hopeful that it can be explored more.

To know more about the Postgres extension, check out its GitHub page.

5 barriers to learning and technology training for small software development teams
Mozilla CEO Chris Beard to step down by the end of 2019 after five years in the role
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3


How to develop RESTful web services in Spring

Vijin Boricha
13 Apr 2018
6 min read
Today, we will explore the basics of creating a project in Spring and how to leverage Spring Tool Suite for managing the project. To create a new project, we can use a Maven command prompt or an online tool, such as Spring Initializr (http://start.spring.io), to generate the project base. This website comes in handy for creating a simple Spring Boot-based web project to start the ball rolling.

Creating a project base

Let's go to http://start.spring.io in our browser and configure our project by filling in the following parameters to create a project base:

Group: com.packtpub.restapp
Artifact: ticket-management
Search for dependencies: Web (full-stack web development with Tomcat and Spring MVC)

After configuring our project, generate it by clicking Generate Project. The project (ZIP file) should be downloaded to your system. Unzip the .zip file, copy the entire folder (ticket-management), and keep it in your desired location.

Working with your favorite IDE

Now is the time to pick the IDE. Though there are many IDEs used for Spring Boot projects, I would recommend using Spring Tool Suite (STS), as it is open source and easy to manage projects with. In my case, I use sts-3.8.2.RELEASE. You can download the latest STS from this link: https://spring.io/tools/sts/all. In most cases, you may not need to install it; just unzip the file and start using it. After extracting STS, you can start using the tool by running STS.exe. In STS, you can import the project by selecting Existing Maven Projects. After importing the project, you can see it in Package Explorer, and you can open the main Java file (TicketManagementApplication) by default. To simplify the project, we will clean up the existing POM file and update the required dependencies.
Add this file configuration to pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.packtpub.restapp</groupId>
  <artifactId>ticket-management</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>ticket-management</name>
  <description>Demo project for Spring Boot</description>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>5.0.1.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
      <version>1.5.7.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
      <version>1.5.7.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.2</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>5.0.1.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
      <version>1.5.7.RELEASE</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

In the preceding configuration, you can check that we have used the following libraries:

spring-web
spring-boot-starter
spring-boot-starter-tomcat
spring-webmvc
jackson-databind
spring-boot-starter-test

As the preceding dependencies are needed for the project to run, we have added them to our pom.xml file. So far we have got the base project ready for the Spring web service. Let's add some basic REST code to the application. First, remove the @SpringBootApplication annotation from the TicketManagementApplication class and add the following annotations:

@Configuration
@EnableAutoConfiguration
@ComponentScan
@Controller

These annotations will help the class act as a web service class; in essence, the first three are what the @SpringBootApplication annotation bundles together, and @Controller additionally lets the class handle web requests. I am not going to talk much about what these configurations do in this chapter.
After adding the annotations, please add a simple method to return a string as our basic web service method:

@ResponseBody
@RequestMapping("/")
public String sayAloha(){
  return "Aloha";
}

Finally, your code will look as follows:

package com.packtpub.restapp.ticketmanagement;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Configuration
@EnableAutoConfiguration
@ComponentScan
@Controller
public class TicketManagementApplication {

  @ResponseBody
  @RequestMapping("/")
  public String sayAloha(){
    return "Aloha";
  }

  public static void main(String[] args) {
    SpringApplication.run(TicketManagementApplication.class, args);
  }
}

Once all the coding changes are done, just run the project as a Spring Boot App (Run As | Spring Boot App). You can verify that the application has loaded by checking this message in the console:

Tomcat started on port(s): 8080 (http)

Once verified, you can check the API in the browser by simply typing localhost:8080. If you want to change the port number, you can configure a different port number in application.properties, which is in src/main/resources/application.properties.

You read an excerpt from Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. From this book, you will learn to implement the REST architecture to build resilient software in Java. Check out other related posts:

Starting with Spring Security
Testing RESTful Web Services with Postman
Applying Spring Security using JSON Web Token (JWT)


2019 Stack Overflow survey: A quick overview

Sugandha Lahoti
10 Apr 2019
5 min read
The results of the 2019 Stack Overflow survey have just been published: 90,000 developers took the 20-minute survey this year. The survey shed light on some very interesting insights – from developers' preferred programming languages, to the development platforms they hate the most, to the blockers to developer productivity. As the survey is quite detailed and comprehensive, here's a quick look at the most important takeaways.

Key highlights from the Stack Overflow survey

Programming languages

Python again emerged as the fastest-growing programming language and came a close second behind Rust among the most loved languages. Interestingly, Python and TypeScript received the same share of votes, with almost 73% of respondents saying each was their most loved language. Python was the most voted language developers wanted to learn next, and JavaScript remains the most used programming language. The most dreaded languages were VBA and Objective-C.

Source: Stack Overflow

Frameworks and databases in the Stack Overflow survey

Developers preferred the React.js and Vue.js web frameworks, while they dreaded Drupal and jQuery. Redis was voted the most loved database and MongoDB the most wanted database. MongoDB's inclusion in the list is surprising considering its controversial Server Side Public License. Over the last few months, Red Hat dropped support for MongoDB over this license, and so did the GNU Health Federation. Both of these organizations chose PostgreSQL over MongoDB, which is probably one of the reasons why PostgreSQL was the second most loved and wanted database of the Stack Overflow survey 2019.

Source: Stack Overflow

It's interesting to see WebAssembly making its way into the popular technology segment, as well as being one of the top paying technologies. Respondents who use Clojure, F#, Elixir, and Rust earned the highest salaries.

Stack Overflow also introduced a new segment this year called "Blockchain in the real world", which gives insight into the adoption of blockchain. Most respondents (80%) said that their organizations are not using or implementing blockchain technology.

Source: Stack Overflow

Developer lifestyles and learning

About 80% of respondents say that they code as a hobby outside of work, and over half of respondents had written their first line of code by the time they were sixteen, although this experience varies by country and by gender. For instance, women wrote their first code later than men, and non-binary respondents wrote code earlier than men. About one-quarter of respondents are enrolled in a formal college or university program full-time or part-time. Of professional developers who studied at the university level, over 60% said they majored in computer science, computer engineering, or software engineering.

DevOps specialists and site reliability engineers are among the highest paid and most experienced developers, are the most satisfied with their jobs, and are looking for new jobs at the lowest levels. The survey also noted that developers who are system admins or DevOps specialists are 25-30 times more likely to be men than women.

Chinese developers are the most optimistic about the future, while developers in Western European countries like France and Germany are among the least optimistic.

Developers also overwhelmingly believe that Elon Musk will be the most influential person in tech in 2019. With more than 30,000 people responding to a free-text question asking who they think will be the most influential person this year, an amazing 30% named Tesla CEO Musk. For perspective, Jeff Bezos was in second place, being named by 'only' 7.2% of respondents.

Although the proportion of women among US survey respondents went up from 9% to 11% this year, that is still slow growth, and it points to problems with inclusion in the tech industry in general and on Stack Overflow in particular.

When thinking about blockers to productivity, different kinds of developers report different challenges. Men are more likely to say that being tasked with non-development work is a problem for them, while gender minority respondents are more likely to say that toxic work environments are a problem.

Stack Overflow survey demographics and diversity challenges

This report is based on a survey of 88,883 software developers from 179 countries around the world. It was conducted between January 23 and February 14, and the median time spent on the survey for qualified responses was 23.3 minutes. The majority of survey respondents this year were people who said they are professional developers or who code sometimes as part of their work, or are students preparing for such a career. The majority of them were from the US, India, China, and Europe.

Stack Overflow acknowledged that their results did not represent racial disparities evenly and that people of color continue to be underrepresented among developers. This year nearly 71% of respondents continued to be of White or European descent, a slight improvement from last year (74%). The survey notes that, "In the United States this year, 22% of respondents are people of color; last year 19% of United States respondents were people of color." This clearly signifies that a lot of work still needs to be done, particularly for people of color, women, and underrepresented groups. On a more positive note, last August Stack Overflow revamped its Code of Conduct to include more virtues around kindness, collaboration, and mutual respect, and it also updated its developer salary calculator to include 8 new countries.

Go through the full report to learn more about developer salaries, job priorities, career values, the best music to listen to while coding, and more.

Developers believe Elon Musk will be the most influential person in tech in 2019, according to Stack Overflow survey results
Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy
Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman

npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more

Bhagyashree R
16 Oct 2018
7 min read
Last week, Laurie Voss, the co-founder and COO of npm, spoke at the Node+JS Interactive 2018 event about npm and the future of JavaScript. He discussed several development tools used within the npm community, best practices, frameworks that are on the rise, and frameworks that are dying. He found these answers with the help of 1.5 billion log events per day and the JavaScript Ecosystem Survey 2017, which covered 16,000 JavaScript developers. This survey gives data about what JavaScript users are doing and where the community is going. Let's see some of the key highlights of this talk.

npm is secure, popular, and fast

With more than 10 million users and 6 billion package downloads every week, npm has become ridiculously popular. According to GitHub Octoverse 2017, JavaScript is the top language on GitHub by opened pull requests. 85% of the developers who write JavaScript are using npm, a figure which is rising rapidly towards 100%. These developers write JavaScript applications that run on 73% of browsers, 70% of servers, 44% of mobile devices, and 6% of IoT/robotics devices. The stats highlight that npm is mainly the package manager of web developers, and 97% of the code in a modern web app is downloaded from npm.

The current version of npm, npm 6, was released in April this year. This release comes with improved performance and addresses the major concern highlighted by the JavaScript Ecosystem Survey, that is, security. Here are the major improvements in npm 6:

npm 6 is super fast

npm is now 20% faster than npm 4, so it is time for you to upgrade! You can do that using this command:

npm install npm -g

According to Laurie, npm is fast, and so is yarn. In fact, all of the package managers are at nearly the same speed now. This is the result of the makers of all the package managers coming together to form a community called package.community. This community of package manager maintainers and users is focused on making package managers better, working on compatibility, and supporting each other.

npm 6 locks by default

This was one of the biggest changes in npm 6, which makes sure that what you have in the development environment is exactly what you put in production. This functionality is facilitated by a new file called package-lock.json. This so-called "lock file" saves information about your node_modules/ tree since the time you last edited your dependencies. This new feature comes with a number of benefits, including increased reproducibility across teams, reduced network overhead when installing, and making it easier to debug issues with dependencies.

npm ci

This command is similar to npm install, but is meant to be used in automated environments. It is primarily used in environments such as test platforms, continuous integration, and deployment. It can be about 2-3x faster than a regular npm install by skipping certain user-oriented features. It is also stricter than a regular install, which helps in catching errors or inconsistencies caused by the incrementally-installed local environments of most npm users.

Advances in npm security

Two-factor authentication

In order to provide strong digital security, npm now has two-factor authentication (2FA). Two-factor authentication confirms your identity using two methods:

Something you know, such as your username and password
Something you have, such as a phone or tablet

Quick audits

Quick audits tell you whether the packages you are installing are secure or not. These security warnings are more detailed and useful in npm 6 than in previous versions. The talk highlighted that quick audits are currently happening a lot (about 3.5 million scans per week!). These audits show that 11% of the packages installed by developers have critical vulnerabilities:

Source: npm

To know the vulnerabilities that exist in your app and how critical they are, run the following commands:

npm audit: This automatically runs when you install a package with npm install. It submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities.
npm audit fix: Run the npm audit fix subcommand to automatically install compatible updates to vulnerable dependencies.

The rise and fall of frameworks

After speaking about the current status of npm, Laurie moved on to explaining what npm users are doing, which frameworks they are using, and which frameworks they are no longer interested in using.

Not all npm users develop JavaScript applications

The interesting thing here was that, though most JavaScript users use npm, they are not the only npm users. Developers writing applications in other languages also use npm. These languages include Java, PHP, Python, and C#, among others:

Source: npm

Comparing various frameworks

Next, he discussed the tools developers are currently opting for. This comparison was done on the basis of a metric called share of registry. Share of registry shows the relative popularity of a package compared with other packages in the registry.

Frontend frameworks

"No framework dies, they only fade away with time."

The best example to prove the above statement would be Backbone. Compared to 2013, Backbone's downloads have come down rapidly, and not many users are using it now. Most developers are maintaining old applications written in Backbone rather than writing new applications with it. So, what framework are they using? 60% of the respondents of the survey are using React. Despite a huge fraction gravitating towards React, its growth is slowing down a little. Angular is also an extremely popular framework. Ember is making a comeback now after facing a rough patch during 2016-2017. The next framework is Vue, which seems to be just taking off and is probably the reason behind the slowing growth of React. Here's the graph showing the relative download growth of the frontend frameworks:

Source: npm

Backend frameworks

Comparing backend frameworks was fairly easy, as Express was the clear winner:

Source: npm

But once Express is taken out of the picture, Koa seems to be the second most popular framework. With the growing use of server-side JavaScript, there is a rapid decline in the use of Sails. While Hapi, another backend framework, is doing very well in absolute terms, it is not growing much in relative terms. Next.js is growing, but at a very slow pace. The following graph shows the relative growth of these backend frameworks:

Source: npm

Some predictions based on the survey

It would be unwise to bet against React, as it has tons of users and tons of modules
Angular is a safer but less interesting choice
Keep an eye on Next.js
If you are looking for something new to learn, go for GraphQL
With 46% of npm users using TypeScript for transpiling, it is surely worth your attention
WASM seems promising
No matter what happens to JavaScript, npm is here to stay

To conclude, Laurie rightly said that no framework is here forever:

"Nothing lasts forever!.. Any framework that we see today will have its hay days and then it will have an after-life where it will slowly slowly degrade."

To watch the full talk, check out this YouTube video: npm and the future of JavaScript.

npm v6 is out!
React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!
Node.js and JS Foundation announce intent to merge; developers have mixed feelings


Responsive Web Design with WordPress

Packt
09 Jul 2015
13 min read
Welcome to the world of Responsive Web Design! This article is written by Dejan Markovic, author of the book WordPress Responsive Theme Design, and it will introduce you to Responsive Web Design and its concepts and techniques. It will also present crisp notes from WordPress Responsive Theme Design. (For more resources related to this topic, see here.)

Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from mobile phones to desktop computer monitors). Reference: http://en.wikipedia.org/wiki/Responsive_web_design.

To say it simply, responsive web design (RWD) means that a responsive website should adapt to the screen size of the device it is being viewed on. When I began my web development journey in 2002, we didn't have to consider as many factors as we do today. We just had to create the website for a 17-inch screen (which was the standard at that time), and that was it. Yes, we also had to consider 15, 19, and 21-inch monitors, but since the 17-inch screen was the standard, that was the target screen size for us. In pixels, these sizes were usually 800 or 1024. We also had to consider fewer browsers (Internet Explorer, Netscape, and Opera) and the styling for print, and that was it. Since then, a lot of things have changed, and today, in 2015, for a website design, we have to consider multiple factors, such as:

A lot of different web browsers (Internet Explorer, Firefox, Opera, Chrome, and Safari)
A number of different operating systems (Windows (XP, 7, and 8), Mac OS X, Linux, Unix, iOS, Android, and Windows phones)
Device screen sizes (desktop, mobile, and tablet)
Is the content accessible and readable with screen readers?
How will the content look when it's printed?

Today, creating a different design for all these listed factors and devices would take years. This is where responsive web design comes to the rescue.

The concepts of RWD

I have to point out that the mobile environment is becoming a more important factor than the desktop environment. Mobile browsing is becoming bigger than desktop-based access, which makes the mobile environment a very important factor to consider when developing a website. Simply put, the main point of RWD is that the layout changes based on the size and capabilities of the device it's being viewed on. The concepts of RWD that we will learn next are: Viewport, scaling, and screen density.

Controlling Viewport

On the desktop, Viewport is the screen size of the window in a browser. For example, when we resize the browser window, we are actually changing the Viewport size. On mobile devices, the Viewport size is independent of the device screen size. For example, Viewport is 850 px for mobile Opera and 980 px for mobile Safari, while the screen size for the iPhone is 320 px. If we compare the Viewport size of 980 px and the iPhone screen size of 320 px, we can see that Viewport is bigger than the screen size. This is because mobile browsers function differently. They first load the page into Viewport, and then they resize it to the device's screen size. This is why we are able to see the whole page on the mobile device. If mobile browsers had a Viewport the same as the screen size (320 px), we would be able to see only a part of the page on the mobile device.

In the following screenshot, we can see the table with the list of Viewport sizes for some iPhone models:

We can control Viewport with CSS:

@viewport { width: device-width; }

Or, we can control it with the meta tag:

<meta name="viewport" content="width=device-width">

In the preceding code, we are matching the Viewport width with the device width. Because the Viewport meta tag approach is more widely adopted, as it was first used on iOS and the @viewport approach was not supported by some browsers, we will use the meta tag approach. We are setting the Viewport width in order to match our web content with our mobile content, as we want to make sure that our web content looks good on a mobile device as well. We can set Viewports in the code for each device separately, for example, 320 px for the iPhone. The better approach, though, is to use content="width=device-width".

Scaling

Scaling is extremely important, as the initial scale controls the zoom aspect of the content for the initial look of the page. For example, if the initial scale is set to 3, the content will be loaded at 3 times the Viewport size, which means 3 times zoom. Here is the look of the screenshot for initial-scale=1 and initial-scale=3:

As we can see from the preceding screenshots, at the initial scale of 3 (three times zoom), the logo image takes up the bigger part of the screen. It is important to note that this is just the initial scale, which means that the user can zoom in and zoom out later if they want to. Here is an example of the code with the initial scale:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">

In this example, we have used the maximum-scale=1 option, which means that the user will not be able to zoom here. We should avoid using the maximum-scale property because of accessibility issues. If we forbid zooming on our pages, users with visual problems will not be able to see the content properly.

The screen density

As screen technology moves forward every year, or even faster than that, we have to consider the screen density aspect as well. Screen density is the number of pixels that are contained within a screen area. This means that if the screen density is higher, we can have more details, in this case pixels, in the same area. There are two measurements that are usually used for this: dots per inch (DPI) and pixels per inch (PPI). DPI means how many drops a printer can place in an inch of space. PPI is the number of pixels we can have in one inch of the screen. If we go back to the preceding screenshot with the table where we are showing Viewports and densities and compare the values of the iPhone 3G and iPhone 4S, we will see that the screen size stayed the same at 3.5 inches and the Viewport stayed the same at 320 px, but the screen density has doubled, from 163 dpi to 326 dpi, which means that the screen resolution also doubled, from 320x480 to 640x960. Screen density is very relevant to RWD, as newer devices have bigger densities and we should do our best to cover as many densities as we can in order to provide a better experience for end users. Pixel density matters more than the resolution or screen size, because more pixels equals a sharper display.

There are topics that need to be taken into consideration, such as hardware, reference pixels, and the device-pixel-ratio, too.

Problems and solutions with the screen density

Scalable vector graphics and CSS graphics will scale to the resolution. This is why I recommend using Font Awesome icons in your project. Font Awesome icons are available for download at: http://fortawesome.github.io/Font-Awesome/icons/. Font icons are fonts made up of symbols, icons, or pictograms (whatever you prefer to call them) that you can use in a webpage just like a font. They can be instantly customized with properties like size and drop shadow – anything you want can be done with the power of CSS.

The real problem triggered by the change in screen density is images, as for high-density screens we should provide higher resolution images. There are several ways through which we can approach this problem:

By targeting high-density screens (providing high-resolution images to all screens)
By providing high-resolution images where appropriate (loading high-resolution images only on devices with high-resolution screens)
By not using high-resolution images

For beginner developers, I recommend using the second approach: providing high-resolution images where appropriate.

Techniques in RWD

RWD consists of three coding techniques:

Media queries (adapt content to specific screen sizes)
Fluid grids (for flexible layouts)
Flexible images and media (that respond to changes in screen sizes)

More detailed information about RWD techniques by Ethan Marcotte, who coined the term Responsive Web Design, is available at http://alistapart.com/article/responsive-web-design.

Media queries

Media queries are CSS modules, or as some people like to say, just conditional statements, which tell the browsers to use a specific type of style depending on the size of the screen and other factors, such as print (specific styles for print). They have been around for a long time already, as I was using different styles for print back in 2002. If you wish to know more about media queries, refer to W3C Candidate Recommendation 8 July 2002 at http://www.w3.org/TR/2002/CR-css3-mediaqueries-20020708/.

Here is an example of a media query declaration:

@media only screen and (min-width: 500px) {
  body { font-family: sans-serif; }
}

Let's explain the preceding code:

The @media code means that it is a media type declaration.
The screen part of the query is an expression or condition (in this case, it means only screen and not print).
The following conditional statement means that everything above 500 px will have the font family of sans serif:

(min-width: 500px) {
  body { font-family: sans-serif; }
}

Here is another example of a media query declaration:

@media only screen and (min-width: 500px), screen and (orientation: portrait) {
  body { font-family: sans-serif; }
}

In this case, we have two statements, and if one of the statements is true, the entire declaration is applied (either everything above 500 px or the portrait orientation will be applied to the screen). The only keyword hides the styles from older browsers. As some older browsers don't support media queries, I recommend using the respond.js script, which will "patch" support for them. Polyfill (or polyfiller) is code that provides features that are not built into or supported by some web browsers. For example, a number of HTML5 features are not supported by older versions of IE (older than 8 or 9), but these features can be used if a polyfill is installed on the web page. This means that if the developer wants to use these features, he/she can just include that polyfill library and these features will work in older browsers.
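As an illustration (not from the book excerpt), respond.js is typically dropped in with a conditional comment so that only old versions of Internet Explorer download it; the file path here is just an example and depends on where you keep your scripts:

<!--[if lt IE 9]>
  <script src="js/respond.min.js"></script>
<![endif]-->

Modern browsers ignore the conditional comment entirely, so the extra request is only made by the browsers that actually need the polyfill.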
Breakpoints

A breakpoint is the moment when the layout switches from one layout to another when some condition is fulfilled, for example, when the screen has been resized. Almost all responsive designs cover the changes of the screen between desktops, tablets, and smartphones. Here is an example with comments inside:

@media only screen and (max-width: 480px) {
  /* mobile styles: up to 480 px */
}

The media query in the preceding code will only be used if the width of the screen is 480 px or less.

@media only screen and (min-width: 481px) and (max-width: 768px) {
  /* tablet styles: between 481 px and 768 px */
}

The media query in the preceding code will only be used if the width of the screen is between 481 px and 768 px.

@media only screen and (min-width: 769px) {
  /* desktop styles: from 769 px and up */
}

The media query in the preceding code will only be used when the width of the screen is 769 px or more.

The minimum width value in the desktop styles is 1 pixel over the maximum width value in the tablet styles, and the same difference is there between the values for the tablet and mobile styles. We do this in order to avoid overlapping, as that could cause problems with our styles.

There is also an approach of setting the maximum width and minimum width with em values. Setting the maximum for the screen in em means that the width of the screen is set relative to the device's font size. If the font size for the device is 16 px (which is the usual size), the maximum width for mobile styles would be 480/16 = 30. Why do we use em values? With pixel sizes, everything is fixed; for example, h1 is 24 px (1.5 em of the default size of 16 px), and that's it. With em sizes, everything is relative, so if we change the default value in the browser from, for example, 16 px to 18 px, everything relative to that will change. Therefore, all h1 values will change from 24 px to 27 px and make our layout "zoomable". Here is the example with sizes changed to em:

@media only screen and (max-width: 30em) {
  /* mobile styles: up to 480 px */
}

@media only screen and (min-width: 30em) and (max-width: 48em) {
  /* tablet styles: between 481 px and 768 px */
}

@media only screen and (min-width: 48em) {
  /* desktop styles: from 769 px and up */
}

Fluid grids

The major point in RWD is that the content should adapt to any screen it's viewed on. One of the best solutions to do this is to use fluid layouts, where our content can be resized on each breakpoint. In fluid grids, we define a maximum layout size for the design. The grid is divided into a specific number of columns to keep the layout clean and easy to handle. Then we design each element with proportional widths and heights instead of pixel-based dimensions. So whenever the device or screen size is changed, elements will adjust their widths and heights by the specified proportions to their parent container. Reference: http://www.1stwebdesigner.com/tutorials/fluid-grids-in-responsive-design/.

To make the grid flexible (or elastic), we can use percentage values, or we can use em values, whichever suits us better. We can make our own fluid grids, or we can use grid frameworks. As there are so many frameworks available, I would recommend that you use an existing framework rather than building your own. Grid frameworks could use a single grid that covers various screen sizes, or we can have multiple grids for each of the breakpoints or screen size categories, such as mobiles, tablets, and desktops. Some of the notable frameworks are Twitter's Bootstrap, Foundation, and SemanticUI.
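Before reaching for one of these frameworks, it helps to see how little CSS a basic fluid grid needs. The following is only a rough sketch with made-up class names, not code from the book: a row wraps two columns whose widths are percentages of their parent, so they shrink and grow with the Viewport:

.row {
  width: 100%;
}
.row::after {            /* clear the floated columns */
  content: "";
  display: table;
  clear: both;
}
.column-main {
  float: left;
  width: 66.66%;         /* roughly two thirds of the parent */
}
.column-sidebar {
  float: left;
  width: 33.33%;         /* roughly one third of the parent */
}
@media only screen and (max-width: 480px) {
  .column-main,
  .column-sidebar {
    float: none;
    width: 100%;         /* stack the columns on small screens */
  }
}

Frameworks such as Bootstrap essentially generalize this idea into a 12-column system with predefined classes.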
Of these frameworks, I prefer Twitter's Bootstrap, as it really helps me speed up the process, and it is the most used framework currently.

Flexible images and media

Last, but not least important, are images and media (videos). The problem with them is that they are elements that come with fixed sizes. There are several approaches to fix this:

Replacing dimensions with percentage values
Using maximum widths
Using background images only for some cases, as these are not good for accessibility
Using some libraries, such as Scott Jehl's picturefill (https://github.com/scottjehl/picturefill)
Taking the width and height parameters out of the image tag and dealing with dimensions in CSS

Summary

In this article, you learned about the RWD concepts: Viewport, scaling, and screen density. We also covered the RWD techniques: media queries, fluid grids, and flexible media.

Resources for Article:

Further resources on this subject:

Deployment Preparations [article]
Why Meteor Rocks! [article]
Clustering and Other Unsupervised Learning Methods [article]