
How-To Tutorials - Web Development

Using model serializers to eliminate duplicate code

Packt
23 Sep 2016
12 min read
In this article by Gastón C. Hillar, author of Building RESTful Python Web Services, we will cover the use of model serializers to eliminate duplicate code and the use of the default parsing and rendering options.

Using model serializers to eliminate duplicate code

The GameSerializer class declares many attributes with the same names that we used in the Game model and repeats information such as the types and the max_length values. The GameSerializer class is a subclass of rest_framework.serializers.Serializer; it declares attributes that we manually mapped to the appropriate types, and overrides the create and update methods. Now, we will create a new version of the GameSerializer class that inherits from the rest_framework.serializers.ModelSerializer class. The ModelSerializer class automatically populates both a set of default fields and a set of default validators. In addition, it provides default implementations for the create and update methods. If you have any experience with the Django web framework, you will notice that the Serializer and ModelSerializer classes are similar to the Form and ModelForm classes.

Now, go to the gamesapi/games folder and open the serializers.py file. Replace the code in this file with the following code, which declares the new version of the GameSerializer class. The code file for the sample is included in the restful_python_chapter_02_01 folder.

from rest_framework import serializers
from games.models import Game


class GameSerializer(serializers.ModelSerializer):
    class Meta:
        model = Game
        fields = ('id', 'name', 'release_date', 'game_category', 'played')

The new GameSerializer class declares a Meta inner class that declares two attributes: model and fields. The model attribute specifies the model related to the serializer, that is, the Game class.
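The duplication that ModelSerializer removes can be illustrated outside Django with a small pure-Python sketch. The Game stand-in, the fields dict, and the AutoSerializer class below are hypothetical simplifications for illustration only, not the rest_framework implementation:

```python
# A minimal stand-in for a model that knows its own field types.
class Game:
    fields = {
        'id': int,
        'name': str,
        'release_date': str,
        'game_category': str,
        'played': bool,
    }

class AutoSerializer:
    """Builds its field map from the model, the way ModelSerializer
    introspects a Django model instead of redeclaring every field."""
    model = Game
    field_names = ('id', 'name', 'release_date', 'game_category', 'played')

    def to_representation(self, instance):
        # Only the names are listed once; the types come from the model.
        return {name: self.model.fields[name](getattr(instance, name))
                for name in self.field_names}

game = Game()
game.id, game.name = 1, 'Tetris'
game.release_date, game.game_category, game.played = '1984-06-06', 'Puzzle', True

data = AutoSerializer().to_representation(game)
print(data['name'], data['played'])  # Tetris True
```

The point of the sketch is that the serializer lists the field names exactly once and defers everything else to the model, which is the same design idea ModelSerializer applies with far more machinery.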
The fields attribute specifies a tuple of strings whose values indicate the field names from the related model that we want to include in the serialization. There is no need to override either the create or update methods because the generic behavior is enough in this case; the ModelSerializer superclass provides implementations for both. We have removed boilerplate code that we didn't require in the GameSerializer class: we just needed to specify the desired set of fields in a tuple. Now, the types related to the game fields are specified only in the Game class. Press Ctrl + C to quit Django's development server and execute the following command to start it again:

python manage.py runserver

Using the default parsing and rendering options and moving beyond JSON

The APIView class specifies default settings for each view that we can override by specifying appropriate values in the gamesapi/settings.py file or by overriding the class attributes in subclasses. As previously explained, the usage of the APIView class under the hood makes the decorator apply these default settings. Thus, whenever we use the decorator, the default parser classes and the default renderer classes will be associated with the function views.

By default, the value for DEFAULT_PARSER_CLASSES is the following tuple of classes:

(
    'rest_framework.parsers.JSONParser',
    'rest_framework.parsers.FormParser',
    'rest_framework.parsers.MultiPartParser'
)

When we use the decorator, the API will be able to handle any of the following content types through the appropriate parsers when accessing the request.data attribute:

application/json
application/x-www-form-urlencoded
multipart/form-data

When we access the request.data attribute in the functions, Django REST Framework examines the value of the Content-Type header in the incoming request and determines the appropriate parser to parse the request content.
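The dispatch that happens when request.data is accessed can be sketched in plain Python. The parser functions and the PARSERS table below are hypothetical stand-ins for DRF's parser classes, shown only to make the Content-Type lookup concrete:

```python
import json
from urllib.parse import parse_qs

# Hypothetical stand-ins for DRF's JSONParser and FormParser.
def parse_json(body):
    return json.loads(body)

def parse_form(body):
    return {k: v[0] for k, v in parse_qs(body).items()}

# Media type -> parser, mirroring the role of DEFAULT_PARSER_CLASSES.
PARSERS = {
    'application/json': parse_json,
    'application/x-www-form-urlencoded': parse_form,
}

def request_data(content_type, body):
    # Pick the parser from the Content-Type header, the way DRF does
    # when request.data is first accessed.
    media_type = content_type.split(';')[0].strip()
    return PARSERS[media_type](body)

print(request_data('application/json', '{"name": "Tetris"}'))
print(request_data('application/x-www-form-urlencoded', 'name=Tetris&played=false'))
```

The same view code receives a plain dictionary either way, which is why removing the explicit JSONParser call in the next section costs nothing and gains two content types.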
If we use the previously explained default values, Django REST Framework will be able to parse the previously listed content types. However, it is extremely important that the request specifies the appropriate value in the Content-Type header. We have to remove the usage of the rest_framework.parsers.JSONParser class in the functions so that we can work with all the configured parsers instead of a parser that only works with JSON.

The game_list function executes the following two lines when request.method is equal to 'POST':

game_data = JSONParser().parse(request)
game_serializer = GameSerializer(data=game_data)

We will remove the first line, which uses the JSONParser, and pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines:

game_serializer = GameSerializer(data=request.data)

The game_detail function executes the following two lines when request.method is equal to 'PUT':

game_data = JSONParser().parse(request)
game_serializer = GameSerializer(game, data=game_data)

We will make the same edits we made to the code in the game_list function: remove the first line, which uses the JSONParser, and pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines:

game_serializer = GameSerializer(game, data=request.data)

By default, the value for DEFAULT_RENDERER_CLASSES is the following tuple of classes:

(
    'rest_framework.renderers.JSONRenderer',
    'rest_framework.renderers.BrowsableAPIRenderer',
)

When we use the decorator, the API will be able to render any of the following content types in the response through the appropriate renderers when working with the rest_framework.response.Response object:

application/json
text/html

By default, the value for DEFAULT_CONTENT_NEGOTIATION_CLASS is the rest_framework.negotiation.DefaultContentNegotiation class.
When we use the decorator, the API will use this content negotiation class to select the appropriate renderer for the response based on the incoming request. This way, when a request specifies that it will accept text/html, the content negotiation class selects the rest_framework.renderers.BrowsableAPIRenderer to render the response and generate text/html instead of application/json.

We have to replace the usages of both the JSONResponse and HttpResponse classes in the functions with the rest_framework.response.Response class. The Response class uses the previously explained content negotiation features, renders the received data into the appropriate content type, and returns it to the client.

Now, go to the gamesapi/games folder and open the views.py file. Replace the code in this file with the following code, which removes the JSONResponse class and uses the @api_view decorator for the functions and the rest_framework.response.Response class. The code file for the sample is included in the restful_python_chapter_02_02 folder.
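A simplified version of this negotiation step can be expressed in a few lines. The Accept-header parsing below is deliberately naive and the RENDERERS table is a hypothetical stand-in; DefaultContentNegotiation is considerably more thorough (quality values, wildcards, format suffixes):

```python
import json

# Hypothetical stand-ins for the JSON renderer and the browsable-API renderer.
RENDERERS = {
    'application/json': lambda data: json.dumps(data),
    'text/html': lambda data: '<pre>' + json.dumps(data) + '</pre>',
}

def select_renderer(accept_header, default='application/json'):
    # Walk the Accept header in order and take the first media type
    # we can render; fall back to JSON otherwise.
    for part in accept_header.split(','):
        media_type = part.split(';')[0].strip()
        if media_type in RENDERERS:
            return media_type
    return default

data = {'name': 'Tetris'}
chosen = select_renderer('text/html,application/xhtml+xml')
print(chosen)                   # text/html
print(RENDERERS[chosen](data))  # <pre>{"name": "Tetris"}</pre>
```

This is why the same Response object yields JSON to curl and an HTML page to a browser: the renderer is chosen per request, after the view has returned.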
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from games.models import Game
from games.serializers import GameSerializer


@api_view(['GET', 'POST'])
def game_list(request):
    if request.method == 'GET':
        games = Game.objects.all()
        games_serializer = GameSerializer(games, many=True)
        return Response(games_serializer.data)

    elif request.method == 'POST':
        game_serializer = GameSerializer(data=request.data)
        if game_serializer.is_valid():
            game_serializer.save()
            return Response(game_serializer.data, status=status.HTTP_201_CREATED)
        return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST)


@api_view(['GET', 'PUT', 'DELETE'])
def game_detail(request, pk):
    try:
        game = Game.objects.get(pk=pk)
    except Game.DoesNotExist:
        return Response(status=status.HTTP_404_NOT_FOUND)

    if request.method == 'GET':
        game_serializer = GameSerializer(game)
        return Response(game_serializer.data)

    elif request.method == 'PUT':
        game_serializer = GameSerializer(game, data=request.data)
        if game_serializer.is_valid():
            game_serializer.save()
            return Response(game_serializer.data)
        return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST)

    elif request.method == 'DELETE':
        game.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

Note that the JSONParser import is no longer needed, and the decorator for game_detail lists 'DELETE' rather than 'POST', matching the HTTP verbs the function actually handles.

After you save the previous changes, run the following command:

http OPTIONS :8000/games/

The following is the equivalent curl command:

curl -iX OPTIONS :8000/games/

The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/. The request will match and run the views.game_list function, that is, the game_list function declared within the games/views.py file. We added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs and its parsing and rendering capabilities.
The following lines show the output:

HTTP/1.0 200 OK
Allow: GET, POST, OPTIONS
Content-Type: application/json
Date: Thu, 09 Jun 2016 21:35:58 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "description": "",
    "name": "Game List",
    "parses": [
        "application/json",
        "application/x-www-form-urlencoded",
        "multipart/form-data"
    ],
    "renders": [
        "application/json",
        "text/html"
    ]
}

The response header includes an Allow key with a comma-separated list of HTTP verbs supported by the resource collection as its value: GET, POST, OPTIONS. As our request didn't specify the accepted content type, the function rendered the response with the default application/json content type. The response body specifies the content types that the resource collection parses and the content types that it renders.

Run the following command to compose and send an HTTP request with the OPTIONS verb for a game resource. Don't forget to replace 3 with the primary key value of an existing game in your configuration:

http OPTIONS :8000/games/3/

The following is the equivalent curl command:

curl -iX OPTIONS :8000/games/3/

The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/3/. The request will match and run the views.game_detail function, that is, the game_detail function declared within the games/views.py file. We also added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs and its parsing and rendering capabilities.
The following lines show the output:

HTTP/1.0 200 OK
Allow: GET, PUT, DELETE, OPTIONS
Content-Type: application/json
Date: Thu, 09 Jun 2016 20:24:31 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "description": "",
    "name": "Game Detail",
    "parses": [
        "application/json",
        "application/x-www-form-urlencoded",
        "multipart/form-data"
    ],
    "renders": [
        "application/json",
        "text/html"
    ]
}

The response header includes an Allow key with a comma-separated list of HTTP verbs supported by the resource as its value: GET, PUT, DELETE, OPTIONS. The response body specifies the content types that the resource parses and the content types that it renders, with the same contents received in the previous OPTIONS request applied to a resource collection, that is, to a games collection.

When we composed and sent POST and PUT commands, we had to use the -H "Content-Type: application/json" option to tell curl to send the data specified after the -d option as application/json instead of the default application/x-www-form-urlencoded. Now, in addition to application/json, our API is capable of parsing application/x-www-form-urlencoded and multipart/form-data data specified in POST and PUT requests. Thus, we can compose and send a POST command that sends the data as application/x-www-form-urlencoded with the changes made to our API.

We will compose and send an HTTP request to create a new game. In this case, we will use the -f option for HTTPie, which serializes data items from the command line as form fields and sets the Content-Type header key to the application/x-www-form-urlencoded value:

http -f POST :8000/games/ name='Toy Story 4' game_category='3D RPG' played=false release_date='2016-05-18T03:02:00.776594Z'

The following is the equivalent curl command.
Notice that we don't use the -H option, so curl will send the data with the default application/x-www-form-urlencoded content type. The body must therefore be form-encoded rather than JSON:

curl -iX POST -d "name=Toy Story 4&game_category=3D RPG&played=false&release_date=2016-05-18T03:02:00.776594Z" :8000/games/

The previous commands will compose and send the following HTTP request: POST http://localhost:8000/games/ with the Content-Type header key set to the application/x-www-form-urlencoded value and the following data:

name=Toy+Story+4&game_category=3D+RPG&played=false&release_date=2016-05-18T03%3A02%3A00.776594Z

The request specifies /games/, and therefore, it will match '^games/$' and run the views.game_list function, that is, the updated game_list function declared within the games/views.py file. As the HTTP verb for the request is POST, the request.method property is equal to 'POST', and therefore, the function will execute the code that creates a GameSerializer instance and passes request.data as the data argument for its creation. The rest_framework.parsers.FormParser class will parse the data received in the request, the code creates a new Game and, if the data is valid, it saves the new Game. If the new Game was successfully persisted in the database, the function returns an HTTP 201 Created status code and the recently persisted Game serialized to JSON in the response body.
The following lines show an example response for the HTTP request, with the new Game object in the JSON response body:

HTTP/1.0 201 Created
Allow: OPTIONS, POST, GET
Content-Type: application/json
Date: Fri, 10 Jun 2016 20:38:40 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "game_category": "3D RPG",
    "id": 20,
    "name": "Toy Story 4",
    "played": false,
    "release_date": "2016-05-18T03:02:00.776594Z"
}

After the changes we made to the code, we can run the following command to see what happens when we compose and send an HTTP request with an HTTP verb that is not supported:

http PUT :8000/games/

The following is the equivalent curl command:

curl -iX PUT :8000/games/

The previous command will compose and send the following HTTP request: PUT http://localhost:8000/games/. The request will match and try to run the views.game_list function, that is, the game_list function declared within the games/views.py file. The @api_view decorator we added to this function doesn't include 'PUT' in the string list with the allowed HTTP verbs, and therefore, the default behavior returns a 405 Method Not Allowed status code. The following lines show the output with the response from the previous request. The JSON content provides a detail key with a string value that indicates the PUT method is not allowed:

HTTP/1.0 405 Method Not Allowed
Allow: GET, OPTIONS, POST
Content-Type: application/json
Date: Sat, 11 Jun 2016 00:49:30 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "detail": "Method \"PUT\" not allowed."
}

Summary

This article covered the use of model serializers and how they are effective in removing duplicate code.

Further resources on this subject:
Making History with Event Sourcing [article]
Implementing a WCF Service in the Real World [article]
WCF – Windows Communication Foundation [article]
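The 405 behavior comes from the decorator itself, and the idea is easy to mimic. The sketch below is a plain-Python analogy for illustration, not DRF's actual @api_view implementation (which wraps the function in a full APIView):

```python
from functools import wraps

def api_view(allowed_methods):
    """Minimal analogy of DRF's @api_view: reject any HTTP verb
    that is not in the allowed list with a 405 response."""
    def decorator(view):
        @wraps(view)
        def wrapped(method, *args, **kwargs):
            if method not in allowed_methods:
                return (405, {'detail': 'Method "%s" not allowed.' % method})
            return view(method, *args, **kwargs)
        return wrapped
    return decorator

@api_view(['GET', 'POST'])
def game_list(method):
    # A stand-in view body; the real one queries the database.
    return (200, {'games': []})

print(game_list('GET'))  # (200, {'games': []})
print(game_list('PUT'))  # (405, {'detail': 'Method "PUT" not allowed.'})
```

The view body never runs for a disallowed verb, which is exactly what the PUT experiment above demonstrated against the real API.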


Exporting data from MS Access 2003 to MySQL

Packt
07 Oct 2009
4 min read
Introduction

It is assumed that you have a working copy of MySQL which you can use to work with this article. The MySQL version used in this article came with the XAMPP download. XAMPP is an easy to install (and use) Apache distribution containing MySQL, PHP, and Perl. The distribution used in this article is XAMPP for Windows; you can find the documentation on the XAMPP website. The XAMPP control panel lets you turn the services on and off and carry out other administrative tasks.

You need to follow the steps indicated here:

Create a database in MySQL to which you will export a table from Microsoft Access 2003
Create an ODBC DSN that helps you connect Microsoft Access to MySQL
Export the table or tables
Verify the exported items

Creating a database in MySQL

You can create a database in MySQL by using the 'Create Database' command in MySQL or by using a suitable graphical user interface such as MySQL Workbench. You will have to refer to the documentation for your version of MySQL. The next listing shows how a database named TestMove was created in MySQL, starting from the bin folder of the MySQL program folder. Follow the commands and the responses from the computer. Listing 1 and the folders shown are appropriate for my computer; you may find them different in your installation. The databases you see will be different from what you see here, except for those created by the installation.

Listing 1: Login and create a database

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Jayaram Krishnaswamy>cd\
C:\>cd xampp\mysql\bin
C:\xampp\mysql\bin>mysql -h localhost -u root -p
Enter password: *********
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help.
Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cdcol              |
| expacc             |
| mengerie           |
| mydb               |
| mysql              |
| phpmyadmin         |
| test               |
| testdemo           |
| webauth            |
+--------------------+
10 rows in set (0.23 sec)

mysql> create database TestMove;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cdcol              |
| expacc             |
| mengerie           |
| mydb               |
| mysql              |
| phpmyadmin         |
| test               |
| testdemo           |
| testmove           |
| webauth            |
+--------------------+
11 rows in set (0.00 sec)

mysql>

The login details that work error free are shown. The preferred host name is localhost rather than either the machine name (in this case Hodentek2) or the IP address. The first 'show databases' command does not display the TestMove database we created, which you can see in the response to the second 'show databases' command. In Windows the commands are not case sensitive.

Creating an ODBC DSN to connect to MySQL

When you install from XAMPP you will also be installing an ODBC driver for the version of MySQL included in the bundle. In the MySQL version used for this article the driver is MySQL ODBC 5.1 and the file name is MyODBC5.dll. Click Start | Control Panel | Administrative Tools | Data Sources (ODBC) and open the ODBC Data Source Administrator window as shown. The default tab is User DSN. Change to System DSN as shown here. Click the Add... button to open the Create New Data Source window. Scroll down, choose MySQL ODBC 5.1 Driver as the driver, and click Finish. The MySQL Connector/ODBC Data Source Configuration window shows up. You will have to provide a Data Source Name (DSN) and a description. The server is localhost. You must have your user name and password information to proceed further. The database is the name of the database you created earlier (TestMove), and it should show up in the drop-down list if the rest of the information is correct. Accept the default port.
If all the information is correct, the Test button gets enabled. Test the connection using the Test button; you should get a confirmation response. Click the OK button on the Test Result window, then click OK on the MySQL Connector/ODBC Data Source Configuration window. There are a number of other flags that you can set up using the 'Details' button; the defaults are acceptable for this article. You have successfully created a System DSN named 'AccMySQL'. Click OK.

Verify the contents of TestMove

TestMove is a newly created database in MySQL and as such it is empty, as you can verify in the following listing.

Listing 2: Database TestMove is empty

mysql> use testmove;
Database changed
mysql> show tables;
Empty set (0.00 sec)

mysql>
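The same create-then-verify sequence can be scripted. Driving MySQL from Python would typically require a third-party driver (for example, mysql-connector-python), so this sketch uses the standard library's sqlite3 purely as a stand-in to illustrate the pattern of checking a fresh database for tables:

```python
import sqlite3

# sqlite3 stands in for a MySQL connection here; with MySQL you would
# connect with a driver and run SHOW TABLES instead of querying sqlite_master.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()

# A freshly created database has no user tables yet.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
before = cur.fetchall()
print(before)  # []

# After exporting a table from Access, it would appear in the list.
cur.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
after = cur.fetchall()
print(after)  # [('contacts',)]
conn.close()
```

Running the table-list query before and after the export is exactly the manual verification Listing 2 performs with 'show tables'.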


Programmatically Creating SSRS Report in Microsoft SQL Server 2008

Packt
09 Oct 2009
4 min read
Introduction

In order to design an MS SQL Server Reporting Services report programmatically, you need to understand what goes into a report. We will start with a simple report shown in the next figure. The tabular report gets its data from the SQL Server database TestNorthwind using the query shown below:

Select EmployeeID, LastName, FirstName, City, Country from Employees

A report is based completely on a report definition file, a file in XML format. The file consists of information about the data connection, the datasource in which a dataset is defined, and the layout information together with the data bindings to the report. In the following, we will be referring to the report server file called RDLGenSimple.rdl. This is a file written in Report Definition Language in XML syntax. The next figure shows this file opened as an XML file with the significant nodes collapsed. Note the namespace references. The significant items are the following:

The XML processing instructions
The root element of the report; collapsed and contained in the root element are the DataSources and DataSets
Contained in the body are the ReportItems
This is followed by the Page containing the PageHeader and PageFooter items

In order to generate an RDL file of the above type, the XmlTextWriter class will be used in Visual Studio 2008. In some of the hands-on exercises you have seen how to connect to SQL Server programmatically as well as how to retrieve data using the ADO.NET objects. This is precisely what you will be doing in this hands-on exercise.

The XmlTextWriter class

In order to review the properties of the XmlTextWriter, you need to add a reference to the project (or web site) indicating this item. This is carried out by right-clicking the Project (or Website) | Add Reference... and then choosing System.Xml (http://msdn.microsoft.com/en-us/library/system.xml.aspx) in the Add Reference window.
After adding the reference, the Object Browser can be used to look at the details of this class as shown in the next figure. You can access this from View | Object Browser, or by pressing the F2 key with your VS 2008 IDE open. A formal description of this class can be found at the bottom of the next figure. The XmlTextWriter takes care of all the elements found in the XML DOM model (see, for example, http://www.devarticles.com/c/a/XML/Roaming-through-XMLDOM-An-AJAX-Prerequisite).

Hands-on exercise: Generating a Report Definition Language file using Visual Studio 2008

In this hands-on exercise, you will be generating a server report that will display the report shown in the first figure. The coding you will be using is adapted from this article (http://technet.microsoft.com/en-us/library/ms167274.aspx) available at Microsoft TechNet (http://technet.microsoft.com/en-us/sqlserver/default.aspx).

Follow on

In this section, you will create a project and add a reference. You add code to the page that is executed by the button click events. The code is scripted and is not generated by any tool.

Create project and add reference

You will create a Visual Studio 2008 Windows Forms application and add controls to create a simple user interface for testing the code. Create a Windows Forms Application project in Visual Studio 2008 from File | New | Project... by providing a name; herein, it is called RDLGen2. Drag and drop two labels, three buttons, and two text boxes onto the form as shown.

When the Test Connection button (Button1 in the code) is clicked, a connection to the TestNorthwind database will be made and the code in the Connection() procedure is executed. If there are any errors, they will show up in the label at the bottom. When the Get list of Fields button (Button2 in the code) is clicked, the query will be run against the database and the retrieved field list will be shown in the adjoining textbox.
The Generate a RDL file button (Button3 in the code) creates a report file at the location indicated in the code.
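The article builds the RDL with .NET's XmlTextWriter; the same write-elements-in-order approach can be shown with Python's standard library. The element names below are a simplified subset chosen for illustration, not a complete or schema-valid report definition:

```python
import xml.etree.ElementTree as ET

# A heavily simplified RDL-like skeleton: DataSources, DataSets,
# and a Body, emitted in the same order XmlTextWriter would write them.
report = ET.Element('Report')

datasources = ET.SubElement(report, 'DataSources')
ET.SubElement(datasources, 'DataSource', Name='TestNorthwind')

datasets = ET.SubElement(report, 'DataSets')
dataset = ET.SubElement(datasets, 'DataSet', Name='Employees')
ET.SubElement(dataset, 'Query').text = (
    'Select EmployeeID, LastName, FirstName, City, Country from Employees')

body = ET.SubElement(report, 'Body')
ET.SubElement(body, 'ReportItems')

rdl = ET.tostring(report, encoding='unicode')
print(rdl.startswith('<Report>'))  # True
```

Whether you use XmlTextWriter or an element-tree API, the generator's job is the same: emit the connection, dataset, and layout nodes in the order the report processor expects.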


IBM Lotus Domino: exploring view options for the web

Packt
11 May 2011
8 min read
Views are important to most Domino applications. They provide the primary means by which documents are located and retrieved. But working with views on the Web is often more complicated or less satisfactory than using views with the Notes client. Several classic view options are available for web applications, all of which have drawbacks and implementation issues. A specific view can be displayed on the Web in several different ways, so it is helpful to consider view attributes that influence design choices, in particular:

View content
View structure
How a view is translated for the Web
How a view looks in a browser
Whether or not a view template is used
View performance
Document hierarchy

In terms of content, a view contains:

Data only
Data and HTML tags

In terms of structure, views are:

Uncategorized
Categorized

In terms of the techniques used by Domino to translate views for the Web, there are four basic methods:

Domino-generated HTML (the default)
Developer-generated HTML (the view contains data and HTML tags)
View Applet (used with data only views)
XML (the view is transmitted to the browser as an XML document)

The first three of these methods are easier to implement. Two options on the Advanced tab of View Properties control which of these three methods is used:

Treat view contents as HTML
Use applet in the browser

If neither option is checked, then Domino translates the view into an HTML table and sends the page to the browser. If Treat view contents as HTML is checked, then Domino sends the view to the browser as is, assuming that the developer has encoded HTML table tags in the view. If Use applet in the browser is checked, then Domino uses the Java View Applet to display the view. (As mentioned previously, the Java applets can be slow to load, and they do require a locally installed JVM (Java Virtual Machine).) Using XML to display views in a browser is a more complicated proposition, and we will not deal with it here.
Pursue this and other XML-related topics in Designer Help or on the Web. Here is a starting point: http://www.ibm.com/developerworks/xml/

In terms of how a view looks when displayed in a browser, two alternatives can be used:

Native styling with Designer
Styling with Cascading Style Sheets

In terms of whether or not a view template is used, there are three choices:

A view template is not used
The default view template is used
A view template created for a specific view is used

Finally, view performance can be an issue for views with many:

Documents
Columns
Column formulas
Column sorting options

Each view is indexed and refreshed according to a setting on the Advanced tab of View Properties. By default, view indices are set to refresh automatically when documents are added or deleted. If the re-indexing process takes longer, then application response time can suffer. In general, smaller and simpler views with fewer column formulas perform better than long, complicated, and computationally intensive views. The topics in this section deal with designing views for the Web. The first few topics review the standard options for displaying views; later topics offer suggestions about improving view look and feel.

Understand view Action buttons

As you work with views on the Web, keep in mind that Action buttons are always placed at the top of the page, regardless of how the view is displayed on the Web (standard view, view contents as HTML) and regardless of whether or not a view template is used. Unless the Action Bar is displayed with the Java applet, Action buttons are rendered in a basic HTML table; a horizontal rule separates the Action buttons from the rest of the form. Bear in mind that the Action buttons are functionally connected to, but stylistically independent of, the view and view template design elements that display lower on the form.

Use Domino-generated default views

When you look at a view on the Web, the view consists only of column headings and data rows.
Everything else on the page (below any Action buttons) is contained on a view template form. You can create view templates in your design, or you can let Domino provide a default form. If Domino supplies the view template, the rendered page is fairly basic. Below the Action buttons and the horizontal rule, standard navigational hotspots are displayed; these navigational hotspots are repeated below the view. Expand and Collapse hotspots are included to support categorized views and views that include documents in response hierarchies. The view title displays below the top set of navigational hotspots, and then the view itself appears. If you supply a view template for a view, you must design the navigational hotspots, view title, and other design elements that may be required.

View contents are rendered as an HTML table with columns that expand or contract depending upon the width of cell contents. If view columns enable sorting, then sorting arrows appear to the right of column headings. Here is an example of how Domino displays a view by default on the Web. In this example, clicking the blue underscored values in the left-most Last Name column opens the corresponding documents. By default, values in the left-most column are rendered as URL links, but any other column (or several columns) can serve this purpose. To change which column values are clickable, enable or disable the Show values in this column as links option on the Advanced tab of Column Properties. Typically a title, subject, or another unique document attribute is enabled as the link.

Out-of-the-box default views are a good choice for rapid prototyping or for one-time needs where look and feel are less important. Beyond designing the views, nothing else is required; Domino merges the views with HTML tags and a little JavaScript to produce fully functional pages. On the down side, what you see is what you get.
Default views are stylistically uninspiring, and there is not a lot that can be done with them beyond some modest Designer-applied styling. Many Designer-applied styles, such as column width, are not translated to the Web. Still, some visual improvements can be made. In this example, the font characteristics are modified and an alternate row background color is added.

Include HTML tags to enhance views

Some additional styling and behavior can be coded into standard views using HTML tags and CSS rules. In this example, <font> tags surround the column title. Note the square brackets that identify the tags as HTML. Tags can also be inserted into column value formulas:

"[<font color='darkblue'>]" + ContactLast + "[</font>]"

When viewed with a browser, the new colors are displayed as expected, but when the view is opened in Notes, the raw bracketed tags are visible. The example illustrates how to code the additional tags, but frankly the same effects can be achieved using Designer-applied formatting, so there is no real gain here. The view takes longer to code, and the final result is not very reader-friendly when viewed with the Notes client. That being said, there still may be occasions when you want to add HTML tags to achieve a particular result.

Here is a somewhat more complicated application of the same technique. This next line of code is added to the title of a column. Note the use of <sup> and <font> tags; these tags apply only to the message See footnote 1:

Last Name[<sup><font color='red'>]See footnote 1[</font></sup>]

The result achieves the desired effect. More challenging is styling values in view columns. You do not have access to the <td> or <font> tags that Domino inserts into the page to define table cell contents, but you can add <span> tags around a column value and then use CSS rules to style the span.
Here is what the column formula might look like:

    "[<span class='column1'>]" + ContactLast + "[</span>]"

Here is the CSS rule for the column1 class:

    .column1 {
        background-color: #EEE;
        cursor: default;
        display: block;
        font-weight: bold;
        text-decoration: none;
        width: 100%;
    }

These declarations change the background color of the cell to a light gray and the pointer to the browser default. The display and width declarations force the span to occupy the width of the table cell. The text underscoring (for the link) is removed and the text is made bold.

Without the CSS rule, the view displays as expected. With the CSS rule applied, a different look for the first column is achieved.
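To see why the display: block and width: 100% declarations matter, consider the markup the browser ultimately receives. The sketch below is illustrative (the surrounding tags are generated by Domino and vary by release; the href is elided):

```html
<!-- Hypothetical sketch of one rendered link cell -->
<td>
  <a href="...">
    <!-- the span comes from the column formula; without
         display: block it would be only as wide as the text,
         so the gray background would not fill the cell -->
    <span class="column1">Smith</span>
  </a>
</td>
```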
Packt
09 Mar 2011
9 min read

Getting Started with Inkscape

Inkscape 0.48 Essentials for Web Designers
Use the fascinating Inkscape graphics editor to create attractive layout designs, images, and icons for your website

Vector graphics

Vector graphics are made up of paths. Each path is basically a line with a start point, an end point, curves, angles, and points that are calculated with a mathematical equation. These paths are not limited to being straight—they can be of any shape and size, and can encompass any number of curves. When you combine them, they create drawings and diagrams, and can even help create certain fonts.

These characteristics make vector graphics very different from JPEGs, GIFs, or BMP images—all of which are rasterized or bitmap images made up of tiny squares called pixels (or bits). If you magnify these images, you will see that they are made up of a grid, and if you keep magnifying them, they become blurry and grainy as each pixel grows larger with the zoom level.

Computer monitors also use pixels in a grid. However, they use millions of them, so that when you look at a display your eyes see a picture. In high-resolution monitors, the pixels are smaller and closer together to give a crisper image.

How does this all relate to vector-based graphics? Vector-based graphics aren't made up of squares. Since they are based on paths, you can make them larger (by scaling) and the image quality stays the same, lines and edges stay clean, and the same images can be used on items as small as letterheads or business cards or blown up to be billboards or used in high-definition animation sequences. This flexibility, often accompanied by smaller file sizes, makes vector graphics ideal—especially in the world of the Internet, varying computer displays, and hosting services for web spaces. This leads us nicely to Inkscape, a tool that can be invaluable for use in web design.

What is Inkscape and how can it be used?
Inkscape is a free, open source program developed by a group of volunteers under the GNU General Public License (GPL). You not only get a free download, but you can use the program to create items, freely distribute them, modify the program itself, and share that modified program with others.

Inkscape uses Scalable Vector Graphics (SVG), a vector-based drawing language that follows some basic principles:

- A drawing can (and should) be scalable to any size without losing detail.
- A drawing can use an unlimited number of smaller drawings, used (and reused) in any number of ways, and still be part of a larger whole.

SVG and World Wide Web Consortium (W3C) web standards are built into Inkscape, giving it a number of features, including a rich XML (eXtensible Markup Language) format with complete descriptions and animations. Inkscape drawings can be reused in other SVG-compliant drawing programs and can adapt to different presentation methods. SVG has support across most web browsers (Firefox, Chrome, Opera, Safari, Internet Explorer).

When you draw your objects (rectangles, circles, and so on), arbitrary paths, and text in Inkscape, you also give them attributes such as color, gradient, or patterned fills. Inkscape automatically creates the web code (XML) for each of these objects and tags your images with this code. If need be, the graphics can then be transformed, cloned, and grouped in the code itself. Hyperlinks can even be added for use in web browsers, along with multi-lingual scripting (which isn't available in most commercial vector-based programs) and more—all within Inkscape or in a native programming language. This makes your vector graphics more versatile in the web space than a standard JPG or GIF graphic.

There are still some limitations in the Inkscape program, even though it aims to be fully SVG compliant. For example, as of version 0.48 it still does not support animation or SVG fonts—though there are plans to add these capabilities in future versions.
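As a concrete illustration of the XML format Inkscape reads and writes, here is a minimal hand-written SVG document (the shapes, sizes, and colors are arbitrary examples, not Inkscape output); because the contents are paths rather than pixels, it renders cleanly at any scale:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A minimal SVG sketch: one rectangle and one curved path -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100"
     viewBox="0 0 100 100">
  <rect x="10" y="10" width="40" height="30" fill="steelblue"/>
  <!-- M = move to, C = cubic Bezier curve -->
  <path d="M 10 80 C 40 40, 60 40, 90 80" stroke="darkblue" fill="none"/>
</svg>
```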
Installing Inkscape

Inkscape is available for download for Windows, Macintosh, Linux, or Solaris operating systems. On Mac OS X, it typically runs under X11—an implementation of the X Window System software that makes it possible to run X11-based applications in Mac OS X. The X11 application has shipped with Mac OS X since version 10.5. When you open Inkscape on a Mac, it will first open X11 and run Inkscape within that program. Some shortcut key options are lost, but all functionality is available using menus and toolbars.

Let's briefly go over how to download and install Inkscape:

1. Go to the official Inkscape website at http://www.inkscape.org/ and download the appropriate version of the software for your computer.
2. For the Mac OS X Leopard software, you will also need to download an additional application: the X11 application package 2.4.0 or greater, from http://xquartz.macosforge.org/trac/wiki/X112.4.0. Once downloaded, double-click the X11-2.4.0.DMG package first. It will open another folder with the X11 application installer. Double-click that icon to be prompted through an installation wizard.
3. Double-click the downloaded Inkscape installation package to start the installation. For the Mac OS, a DMG file is downloaded; double-click on it and then drag and drop the Inkscape package to the Application Folder. For any Windows device, an .EXE file is downloaded; double-click that file to start and complete the installation. For Linux-based computers, there are a number of distributions available; be sure to download and install the correct installation package for your system.
4. Now find the Inkscape icon in the Application or Programs folders to open the program. Double-click the Inkscape icon and the program will automatically open to the main screen.

The basics of the software

When you open Inkscape for the first time, you'll see the main screen with a new blank document opened, ready to go.
If you are using a Macintosh computer, Inkscape opens within the X11 application and may take slightly longer to load.

The Inkscape interface is based on the GNOME UI standard, which uses visual cues and feedback for any icons. For example:

- Hovering your mouse over any icon displays a pop-up description of the icon.
- If an icon has a dark gray border, it is active and can be used.
- If an icon is grayed out, it is not currently available to use with the current selection.
- All icons that are in execution mode (or busy) are covered by a dark shadow. This signifies that the application is busy and won't respond to any edit request.

There is a Notification Display on the main screen that shows dynamic help messages, key shortcuts, and basic information on how to use the Inkscape software in its current state or based on which objects and tools are selected.

Main screen basics

Within the main screen there are the main menu, a command bar, snap and status bars, tool controls, and a palette bar.

Main menu

You will use the main menu bar the most when working on your projects. This is the central location to find every tool and menu item in the program—even those found in the visual-based toolbars below it on the screen. When you select a main menu item, the Inkscape dialog displays the icon, a text description, and the shortcut key combination for the feature. This can be helpful while first learning the program, as it provides you with easier and often faster ways to use your most commonly used functions.

Toolbars

Let's take a general tour of the toolbars seen on this main screen. We'll pay close attention to the tools we'll use most frequently. If you don't like the location of any of the toolbars, you can also make them floating windows on your screen. This lets you move them from their pre-defined locations to a location of your liking. To move any of the toolbars from their docking point on the left side, click and drag them out of the window.
When you click the upper-left button to close the toolbar window, it will be relocated back into the screen.

Command bar

This toolbar represents the common and most frequently used commands in Inkscape. As seen in the previous screenshot, you can create a new document, open an existing one, save, print, cut, paste, zoom, add text, and much more. Hover your mouse over each icon for details on its function. By default, when you open Inkscape, this toolbar is on the right side of the main screen.

Snap bar

Also found vertically on the right side of the main screen, this toolbar is designed to help with the Snap to features of Inkscape. It lets you easily align items (snap to guides), force objects to align to paths (snap to paths), or snap to bounding boxes and edges.

Tool controls

This toolbar's options change depending on which tool you have selected in the toolbox (described in the next section). When you are creating objects, it provides all the detailed options—size, position, angles, and attributes specific to the tool you are currently using. By default, it looks like the following screenshot:

You have options to select/deselect objects within a layer, rotate or mirror objects, adjust object locations on the canvas, scaling options, and much more. Use it to define object properties when they are selected on the canvas.

Toolbox bar

You'll use the tool box frequently. It contains all of the main tools for creating, selecting, and modifying objects, and for drawing. To select a tool, click its icon. If you double-click a tool, you can see that tool's preferences (and change them).

If you are new to Inkscape, here are a couple of hints about creating and editing text: the Text tool (the A icon) in the Tool Box shown above is the only way of creating new text on the canvas, while the T icon shown in the Command Bar is used only while editing text that already exists on the canvas.
Packt
21 Oct 2009
10 min read

SELinux - Highly Secured Web Hosting for Python-based Web Applications

When contemplating the security of a web application, there are several attack vectors that you must consider. An outsider may attack the operating system by planting a remote exploit, exercising insecure operating system settings, or brandishing some other method of privilege escalation. Or, the outsider may attack other sites contained on the same server without escalating privileges. (Note that this particular discussion does not touch upon the conditions under which an attack steals data from a single site. Instead, I'm focusing on the ability to attack different applications on the same server.) With hosts providing space for large numbers of PHP-based sites, security can be difficult, as the httpd daemon traditionally runs under the same Unix user for all sites.

In order to prevent these kinds of attacks from occurring, you need to concentrate on two areas:

- Preventing the site from reading or modifying the data of another site, and
- Preventing the site from escalating privileges to tamper with the operating system and bypass user-based restrictions.

There are two toolboxes you use to accomplish this. In the first case, you need to find a way to run all of your sites under different Linux users. This allows the traditional Linux filesystem security model to provide protection against a hacked site attacking other sites on the same server. In the second case, you need to find a way to prevent a privilege escalation to begin with and, barring that, prevent damage to the operating system should an escalation occur.

Let's first take a look at a method to run different sites under different users. The Python web world provides several versatile methods by which applications can run. There are three common methods: first, using Python's built-in HTTP server; second, running the script as a CGI application; and third, using mod_python under Apache (similar to what mod_perl and mod_php do).
These methods have various disadvantages: respectively, a lack of scalability, performance issues due to CGI application loading, and the aforementioned "all sites under one user" problem. To provide a scalable, secure, high-performance framework, you can turn to a relatively new delivery method: mod_wsgi. This Apache module, created by Graham Dumpleton, provides several methods by which you can run Python applications. In this case, we'll be focusing on the "daemon" mode of mod_wsgi.

Much like mod_python, the daemon mode of mod_wsgi embeds a Python interpreter (and the requisite script) into a httpd instance. Much like with mod_python, you can configure sites based on mod_wsgi to appear at various locations in the virtual directory tree and under different virtual servers. You can also configure the number and behavior of child daemons on a per-site basis. However, there is one important difference: with mod_wsgi, you can configure each httpd instance to run as a different Linux user. During operation, the main httpd instance dispatches requests to the already-running mod_wsgi children, producing performance results that rival mod_python. But most importantly, since each httpd instance is running under a different Linux user, you can apply Linux security mechanisms to different sites running on one server.

Once you have your sites running on a per-user basis, you should next turn your attention to preventing privilege escalation and protecting the operating system. By default, the Targeted mode of SELinux provided by RedHat Enterprise Linux 5 (and its free cousins such as CentOS) provides strong protection against intrusions from httpd-based applications. Because of this, you will need to configure SELinux to allow access to resources such as databases and files that reside outside of the normal httpd directories.

To illustrate these concepts, I'll guide you as you install a Trac instance under mod_wsgi. The platform is CentOS 5.
As a side note, it's highly recommended that you perform the installation and SELinux debugging in a XEN instance so that your environment contains only the software that is needed. The sidebar explains how to easily install the environment that was originally used to perform this exercise, and I will assume that is your primary environment. There are a few steps that require the use of a C compiler—namely, the installation of Trac—and I'll guide you through migrating these packages to your XEN-based test environment.

Installing Trac

In this example, you'll use a standard installation of Trac. Following the instructions provided in the URL in the Resources section, begin by installing Trac 0.10.4 with ClearSilver 0.10.5 and SilverCity 0.9.7. (Note that with many Python web applications such as Trac and Django, "installing" the application means that you're actually installing the libraries necessary for Python to run the application. You'll need to run a script to create the actual site.)

Next, create a PostgreSQL user and database on a different machine. If you are using XEN for your development machine, you can use a PostgreSQL database running in your main DOM0 instance; all we are concerned with is that the PostgreSQL instance is accessed on a different machine over the network. (Note that MySQL will also work in this example, but SQLite will not. In this case, we need a database engine that is accessed over the network, not as a disk file.)

After that's done, you'll need to create an actual Trac site. Create a directory under /opt, such as /opt/trac. Next, run the trac-admin command and enter the information prompted:

    trac-admin /opt/trac initenv

Installing mod_wsgi

You can find mod_wsgi at the source listed in the Resources. After you make sure the httpd-devel package is installed, installing mod_wsgi is as simple as extracting the tarball and issuing the normal ./configure and 'make install' commands.
Running Trac under mod_wsgi

If you look under /opt/trac, you'll notice two directories: one labeled apache, and one with the label of the project that you assigned when you installed this instance of Trac. You'll start by creating an application script in the apache directory. The application script is shown in Listing 1.

Listing 1: /opt/trac/apache/trac.wsgi

    #!/usr/bin/python
    import sys
    sys.stdout = sys.stderr
    import os
    os.environ['TRAC_ENV'] = '/opt/trac/test_proj'
    import trac.web.main
    application = trac.web.main.dispatch_request

(Note the 'sys.stdout = sys.stderr' line. This is necessary due to the way WSGI handles communications between the Python script and the httpd instance. If there is any code in the script that prints to STDOUT, such as debug messages, the httpd instance can crash.)

After creating the application script, you'll modify httpd.conf to load the wsgi module and set up the Trac application. After the LoadModule lines, insert a line for mod_wsgi:

    LoadModule wsgi_module modules/mod_wsgi.so

Next, go to the bottom of httpd.conf and insert the text in Listing 2. This text configures the wsgi module for one particular site; it can be used under the default httpd configuration as well as under VirtualHost directives.

Listing 2: Excerpt from httpd.conf

    WSGIDaemonProcess trac user=trac_user group=trac_user threads=25
    WSGIScriptAlias /trac /opt/trac/apache/trac.wsgi
    WSGIProcessGroup trac
    WSGISocketPrefix run/wsgi

    <Directory /opt/trac/apache>
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>

Note the WSGIScriptAlias directive. The /trac keyword (first parameter) specifies where in the directory tree the application will exist. With this configuration, if you go to your server's root address, you'll see the default CentOS splash page. If you add /trac after the address, you'll hit your Trac instance. Save the httpd.conf file.

Finally, add a Linux user called trac_user.
It is important that this user does not have login privileges. When the root httpd instance runs and encounters the WSGIDaemonProcess directive noted above, it will fork itself as the user specified in the directive; the fork will then load Python and the indicated script.

Securing Your Site

In this section, I'll focus on the two areas noted in the introduction: user-based security and SELinux. I will touch briefly on the theory of SELinux and explain the nuts and bolts of this particular implementation in more depth. I highly recommend that you read the RedHat Enterprise Linux Deployment Guide for the particulars about how RedHat implements SELinux. As with all activities involving some risk, if you plan to implement these methods, you should retain the services of a qualified security consultant to advise you about your particular situation.

Setting up the user-based security is not difficult. Because the httpd instance containing Python and the Trac instance will run under trac_user, you can safely set everything under /opt/trac/test_proj to read and execute (for directories) for the user and nothing for group/all. By doing this, you will isolate this site from other sites and users on the system.

Now, let's configure SELinux. First, you should verify that your system is running the proper policy and mode. On your development system, you'll be using the Targeted policy in its Permissive mode. If you choose to move your Python applications to a production machine, you would run under the Targeted policy in the Enforcing mode. The Targeted policy is limited to protecting the most popular network services without making the system so complex as to prevent user-level work from being done. It is the only policy that ships with RedHat 5 and, by extension, CentOS 5. In Permissive mode, SELinux policy violations are trapped and sent to the audit log, but the behavior is allowed. In Enforcing mode, the violation is trapped and the behavior is not allowed.
To verify the mode, run the Security Level Configuration tool from the Administration menu. The SELinux tab, shown in Figure 1, allows you to adjust the mode. After you have verified that SELinux is running in Permissive mode, you need to do two things. First, you need to change the type of the files under /opt/trac. Second, you need to allow Trac to connect to the PostgreSQL database that you configured when you installed Trac.

First, you need to tweak the SELinux file types attached to the files in your Trac instance. These file types dictate which processes are allowed to access them. For example, /etc/shadow has a very restrictive 'shadow' type that only allows a few applications to read and write it. By default, SELinux expects web-based applications—indeed, anything using Apache—to reside under /var/www. Files created under this directory have the SELinux type httpd_sys_content_t. When you created the Trac instance under /opt/trac, the files were created as type usr_t. Figure 2 shows the difference between these labels.

To properly label the files under /opt, issue the following commands as root:

    cd /opt
    chcon -R -t httpd_user_content_t trac/

After the file types are configured, there is one final step: allow Trac to connect to PostgreSQL. In its default state, SELinux disallows outbound network connections for the httpd type. To allow database connections, issue the following command:

    setsebool -P httpd_can_network_connect_db=1

In this case, we are using the -P option to make this setting persistent. If you omit this option, the setting will be reset to its default state upon the next reboot. After the setsebool command has been run, start httpd by issuing the following command:

    /sbin/service httpd start

If you visit the URL http://127.0.0.1/trac, you should see a Trac screen such as that in Figure 3.
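Stepping back to the application script in Listing 1: all mod_wsgi needs from it is a module-level WSGI callable named application, which is invoked once per request with an environ dictionary and a start_response function. You can exercise any such callable outside Apache with the standard library's wsgiref module. The hello-world application below is illustrative only, standing in for trac.web.main.dispatch_request:

```python
from wsgiref.util import setup_testing_defaults

# Minimal WSGI application: the same interface mod_wsgi expects to
# find bound to the module-level name "application" in a .wsgi script.
def application(environ, start_response):
    body = b'Hello from WSGI\n'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    start_response('200 OK', headers)
    return [body]

# Drive the callable without Apache, using a synthetic request.
environ = {}
setup_testing_defaults(environ)      # fills in REQUEST_METHOD, PATH_INFO, ...
captured = {}

def start_response(status, headers):  # capture what the app sends back
    captured['status'] = status
    captured['headers'] = headers

result = b''.join(application(environ, start_response))
```

This kind of standalone check is handy when debugging, because it separates "my script is broken" from "my Apache or SELinux configuration is broken."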
Packt
14 Sep 2016
6 min read

Hello TDD!

In this article by Gaurav Sood, the author of the book Scala Test-Driven Development, we cover the basics of Test-Driven Development. We will explore:

- What is Test-Driven Development?
- What is the need for Test-Driven Development?
- A brief introduction to Scala and SBT

(For more resources related to this topic, see here.)

What is Test-Driven Development?

Test-Driven Development, or TDD as it is commonly referred to, is the practice of writing your tests before writing any application code. It consists of the following iterative steps:

1. Red: write a failing test for the behavior you want.
2. Green: write just enough application code to make the test pass.
3. Refactor: clean up the code while keeping the tests green.

This process is also referred to as Red-Green-Refactor-Repeat. TDD became more prevalent with the use of the agile software development process, though it can be used just as easily with any of agile's predecessors, such as Waterfall. Though TDD is not specifically mentioned in the agile manifesto (http://agilemanifesto.org), it has become a standard methodology used with agile. That said, you can still use agile without using TDD.

Why TDD?

The need for TDD arises from the fact that there can be constant changes to the application code. This becomes more of a problem when we are using an agile development process, as it is inherently iterative. Here are some of the advantages, which underpin the need for TDD:

- Code quality: Tests by themselves make the programmer more confident of their code. Programmers can be sure of the syntactic and semantic correctness of their code.
- Evolving architecture: Purely test-driven application code gives way to an evolving architecture. This means that we do not have to predefine our architectural boundaries and design patterns. As the application grows, so does the architecture. This results in an application that is flexible towards future changes.
- Avoids over-engineering: Tests that are written before the application code define and document the boundaries. These tests also document the requirements and application code. Agile purists normally regard comments inside the code as a smell.
According to them, your tests should document your code. Since all the boundaries are predefined in the tests, it is hard to write application code that breaches these boundaries. This, however, assumes that TDD is followed religiously.

- Paradigm shift: When I started with TDD, I noticed that the first question I asked myself after looking at a problem was, "How can I solve it?" This, however, is counterproductive. TDD forces the programmer to think about the testability of the solution before its implementation. Understanding how to test a problem means a better understanding of the problem and its edge cases, which in turn can result in refinement of the requirements or the discovery of new requirements. It has now become impossible for me not to think about the testability of a problem before the solution. The first question I now ask myself is, "How can I test it?"
- Maintainable code: I have always found it easier to work on an application that has historically been test-driven than on one that has not. Why? Only because when I make a change to the existing code, the existing tests make sure that I do not break any existing functionality. This results in highly maintainable code, where many programmers can collaborate simultaneously.

Brief introduction to Scala and SBT

Let us look at Scala and SBT briefly. It is assumed that the reader is familiar with Scala, so we will not go into its depths.

What is Scala

Scala is a general-purpose programming language. Scala is an acronym for Scalable Language. This reflects the vision of its creators of making Scala a language that grows with the programmer's experience of it. The fact that Scala and Java objects can be freely mixed makes the transition from Java to Scala quite easy. Scala is also a full-blown functional language. Unlike Haskell, which is a pure functional language, Scala allows interoperability with Java and support for object-oriented programming.
Scala also allows the use of both pure and impure functions. Impure functions have side effects such as mutation, I/O, and exceptions. A purist approach to Scala programming encourages the use of pure functions only. Scala is a type-safe JVM language that incorporates both object-oriented and functional programming into an extremely concise, logical, and extraordinarily powerful language.

Why Scala?

Here are some advantages of using Scala:

- A functional solution to a problem is always better: This is my personal view and open to contention. Elimination of mutation from application code allows the application to be run in parallel across hosts and cores without any deadlocks.
- Better concurrency model: Scala has an actor model that is better than Java's model of locks on threads.
- Concise code: Scala code is more concise than its more verbose cousin, Java.
- Type safety/static typing: Scala does type checking at compile time.
- Pattern matching: Case statements in Scala are super powerful.
- Inheritance: Mixin traits are great, and they definitely reduce code repetition.

There are other features of Scala, like closures and monads, which need more understanding of functional language concepts to learn.

Scala Build Tool

Scala Build Tool (SBT) is a build tool that allows compiling, running, testing, packaging, and deployment of your code. SBT is mostly used with Scala projects, but it can just as easily be used for projects in other languages. Here, we will be using SBT as a build tool for managing our project and running our tests. SBT is written in Scala and can use many of the features of the Scala language. Build definitions for SBT are also written in Scala. These definitions are both flexible and powerful. SBT also allows the use of plugins and dependency management. If you have used a build tool like Maven or Gradle in any of your previous incarnations, you will find SBT a breeze.

Why SBT?
- Better dependency management
- Ivy-based dependency management
- Only-update-on-request model
- Can launch REPL in project context
- Continuous command execution
- Scala language support for creating tasks

Resources for learning Scala

Here are a few of the resources for learning Scala:

- http://www.scala-lang.org/
- https://www.coursera.org/course/progfun
- https://www.manning.com/books/functional-programming-in-scala
- http://www.tutorialspoint.com/scala/index.htm

Resources for SBT

Here are a few of the resources for learning SBT:

- http://www.scala-sbt.org/
- https://twitter.github.io/scala_school/sbt.html

Summary

In this article we learned what TDD is and why to use it. We also learned about Scala and SBT.

Resources for Article:

Further resources on this subject:
- Overview of TDD [article]
- Understanding TDD [article]
- Android Application Testing: TDD and the Temperature Converter [article]
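As a closing illustration, the Red-Green-Refactor cycle described earlier fits in a few lines. The sketch below uses plain Python for brevity (the book itself uses Scala with SBT and a test framework); the fizzbuzz function and its test are hypothetical examples, not taken from the book:

```python
# Red-Green-Refactor in miniature.
# Step 1 (Red): write the test first. At this point fizzbuzz does
# not exist yet, so running the test fails -- that failure is the
# "Red" that tells us the test is actually exercising something.
def test_fizzbuzz():
    assert fizzbuzz(3) == 'Fizz'
    assert fizzbuzz(5) == 'Buzz'
    assert fizzbuzz(15) == 'FizzBuzz'
    assert fizzbuzz(7) == '7'

# Step 2 (Green): write just enough application code to pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return 'FizzBuzz'
    if n % 3 == 0:
        return 'Fizz'
    if n % 5 == 0:
        return 'Buzz'
    return str(n)

# Step 3 (Refactor): with the test green, restructure freely;
# the test guards against regressions.
test_fizzbuzz()
```

The same shape carries over directly to ScalaTest or any other framework: the test defines the boundary, and the implementation is grown to meet it.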
Packt
03 Feb 2016
16 min read

Going Mobile First

In this article by Silvio Moreto Pererira, author of the book Bootstrap By Example, we will focus on mobile design and how to change the page layout for different viewports, change the content, and more. In this article, you will learn the following:

- Mobile first development
- Debugging for any device
- The Bootstrap grid system for different resolutions

(For more resources related to this topic, see here.)

Make it greater

Maybe you have asked yourself (or even searched for) the reason for the mobile first paradigm movement. It is simple, and it makes complete sense as a way to speed up your development pace. The main argument for the mobile first paradigm is that it is easier to make a design greater than to shrink it. In other words, if you first make a desktop version of the web page (known as responsive design, or mobile last) and then go to adjust the website for mobile, there is a 99% probability that the layout will break at some point, and you will have to fix a lot of things in both the mobile and desktop versions. On the other hand, if you first create the mobile version, the website will naturally use (or show) less content than the desktop version. So, it will be easier to just add the content, place things in the right places, and create the full responsiveness stack.

The following image tries to illustrate this concept. Going mobile last, you will get a degraded, warped, and crappy layout; going mobile first, you will get a progressively enhanced, future-friendly awesome web page. See what happens to the poor elephant in this metaphor:

Bootstrap and the mobile first design

At the beginning of Bootstrap, there was no concept of mobile first, so it was made to work for designing responsive web pages. However, with version 3 of the framework, the concept of mobile first became very solid in the community. For doing this, the whole code of the scaffolding system was rewritten to become mobile first from the start.
They decided to reformulate how to set up the grid instead of just adding mobile styles. This made a great impact on compatibility with versions older than 3, but was crucial for making the framework even more popular.

To ensure the proper rendering of the page, set the correct viewport in the <head> tag:

    <meta name="viewport" content="width=device-width, initial-scale=1">

How to debug different viewports in the browser

Here, you will learn how to debug different viewports using the Google Chrome web browser. If you already know this, you can skip this section, although it might be useful to refresh the steps. In the Google Chrome browser, open the Developer tools option. There are many ways to open this menu:

- Right-click anywhere on the page and click on the last option, called Inspect.
- Go to the settings (the sandwich button on the right-hand side of the address bar), click on More tools, and finally on Developer tools.
- The shortcut to open it is Ctrl (cmd for OS X users) + Shift + I. F12 also works in Windows (an Internet Explorer legacy...).

With Developer tools open, click on the mobile phone icon to the left of the magnifier, as shown in the following image:

It will change the display of the viewport to a certain device, and you can also set a specific network profile to limit the data bandwidth. Chrome will show a message telling you that, for proper visualization, you may need to reload the page to get the correct rendering:

For the next image, we have activated the Device mode for an iPhone 5 device. When we set this viewport, the problems start to appear, because we did not make the web page with the mobile first methodology.

Bootstrap scaffolding for different devices

Now that we know more about mobile first development and its important role in Bootstrap starting from version 3, we will cover Bootstrap usage for different devices and viewports.
To do this, we must apply the column class for the specific viewport; for example, for medium displays, we use the .col-md-* class. The following list presents the different classes and resolutions that apply for each viewport tier (all tiers share the same 12-column grid):

- Extra small devices (phones, < 544px / 34em): class prefix .col-xs-*; the grid stays horizontal at all times; container width auto; column width auto.
- Small devices (tablets, ≥ 544px / 34em and < 768px / 48em): class prefix .col-sm-*; columns are collapsed to start and fit the column grid above the breakpoint; container fixed width 544px (34rem); column width ~44px (2.75rem).
- Medium devices (desktops, ≥ 768px / 48em and < 992px / 62em): class prefix .col-md-*; columns are collapsed to start and fit the column grid above the breakpoint; container fixed width 750px (45rem); column width ~62px (3.86rem).
- Large devices (desktops, ≥ 992px / 62em and < 1200px / 75em): class prefix .col-lg-*; columns are collapsed to start and fit the column grid above the breakpoint; container fixed width 970px (60rem); column width ~81px (5.06rem).
- Extra large devices (desktops, ≥ 1200px / 75em): class prefix .col-xl-*; columns are collapsed to start and fit the column grid above the breakpoint; container fixed width 1170px (72.25rem); column width ~97px (6.06rem).

Mobile and extra small devices

To exemplify the usage of Bootstrap scaffolding on mobile devices, we will take a predefined web page and adapt it to mobile devices. We will be using the Chrome mobile debug tool with the iPhone 5 device. You may have noticed that for small devices, Bootstrap simply stacks the columns, without regard for the different rows. In the layout, some of the Bootstrap rows may seem fine in this visualization, although the one in the following image is a bit strange, as the portion of code and the image are not on the same line, as they are supposed to be: To fix this, we need to add the column class prefix for extra small devices, which is .col-xs-*, where * is the size of the column from 1 to 12. Add the .col-xs-5 and .col-xs-7 classes to the columns of this respective row.
Refresh the page, and you will now see the columns placed side by side:

<div class="row">
  <!-- row 3 -->
  <div class="col-md-3 col-xs-5">
    <pre>&lt;p&gt;I love programming!&lt;/p&gt;
      &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
      &lt;br/&gt;
      &lt;br/&gt;
      &lt;p&gt;Bootstrap by example&lt;/p&gt;
    </pre>
  </div>
  <div class="col-md-9 col-xs-7">
    <img src="imgs/center.png" class="img-responsive">
  </div>
</div>

Although the image of the web browser on the right is too small, it would be better if it were a vertically stretched image, such as a mobile phone. (What a coincidence!) To do this, we need to hide the browser image on extra small devices and display an image of a mobile phone instead. Add the new mobile image below the existing one as follows; you will see both images stacked up vertically in the right column:

<img src="imgs/center.png" class="img-responsive">
<img src="imgs/mobile.png" class="img-responsive">

Then, we need to use the Bootstrap concept of availability classes. We need to hide the browser image and display the mobile image just for this kind of viewport, which is extra small. For this, add the .hidden-xs class to the browser image and the .visible-xs class to the mobile image:

<div class="row">
  <!-- row 3 -->
  <div class="col-md-3 col-xs-5">
    <pre>&lt;p&gt;I love programming!&lt;/p&gt;
      &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
      &lt;br/&gt;
      &lt;br/&gt;
      &lt;p&gt;Bootstrap by example&lt;/p&gt;
    </pre>
  </div>
  <div class="col-md-9 col-xs-7">
    <img src="imgs/center.png" class="img-responsive hidden-xs">
    <img src="imgs/mobile.png" class="img-responsive visible-xs">
  </div>
</div>

Now this row looks nice! With this, the browser image is hidden on extra small devices and the mobile image is shown for this viewport.
The following image shows the final display of this row: Moving on, the next Bootstrap .row contains a testimonial surrounded by two images. It would be nicer if the testimonial appeared first and both images were displayed after it, splitting the same row, as shown in the following image. For this, we will repeat almost the same techniques presented in the last example: The first change is to hide the Bootstrap image using the .hidden-xs class. After this, create another image tag with the Bootstrap image in the same column as the PACKT image. The final code of the row should be as follows:

<div class="row">
  <div class="col-md-3 hidden-xs">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
  <div class="col-md-6 col-xs-offset-1 col-xs-11">
    <blockquote>
      <p>Lorem ipsum dolor sit amet, consectetur
        adipiscing elit. Integer posuere erat a ante.</p>
      <footer>Testimonial from someone at
        <cite title="Source Title">Source Title</cite></footer>
    </blockquote>
  </div>
  <div class="col-md-3 col-xs-7">
    <img src="imgs/packt.png" class="img-responsive">
  </div>
  <div class="col-xs-5 visible-xs">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
</div>

We did plenty of things there; all the changes are highlighted. The first is the .hidden-xs class in the first column of the Bootstrap image, which hides the column for this viewport. Afterward, in the testimonial, we changed the grid for mobile, adding a column offset of size 1 and making the testimonial fill the rest of the row with the .col-xs-11 class. Lastly, as we said, we want the PACKT and Bootstrap images to split the same row. For this, we make the first image column span seven columns with the .col-xs-7 class. The other image column is a little more complicated. As it is visible just for mobile devices, we add the .col-xs-5 class, which makes the element span five columns on extra small devices.
Moreover, the .visible-xs class hides the column on every other viewport. As you can see, this row has more than 12 columns (one offset column, 11 for the testimonial, seven for the PACKT image, and five for the Bootstrap image). This process is called column wrapping, and it happens when you put more than 12 columns in the same row: the extra groups of columns wrap to the next lines.

Availability classes

Just like the .hidden-* classes, there are .visible-* classes for each viewport variation. There is also a way to change the CSS display property using the .visible-*-* classes, where the last * is block, inline, or inline-block. Use these to set the proper display for different viewports. The following image shows the final result of the changes. Note that we made the testimonial appear first, with one column of offset, and both images appear below it:

Tablets and small devices

Completing the mobile visualization devices, we move on to tablets and small devices, which range from 544px (34em) to 768px (48em). Most of these devices are tablets or old desktop monitors. For this example, we are using the iPad mini in the portrait position. At this resolution, Bootstrap handles the rows just as on extra small devices, stacking up each of the columns and making them fill the total width of the page. So, if we do not want this to happen, we have to set the column span for each element manually with the .col-sm-* class. If you look at how our example is presented now, there are two main problems. The first one is that the headings are on separate lines, whereas they could share the same line.
For this, we just need to apply the grid classes for small devices, using the .col-sm-6 class for each column, splitting them into equal sizes:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

The result should be as follows: The second problem in this viewport is, again, the testimonial row! Due to the classes that we added for the mobile viewport, the testimonial now has an offset column and a different column span. We must add the classes for small devices to make this row show the Bootstrap image on the left, the testimonial in the middle, and the PACKT image on the right:

<div class="row">
  <div class="col-md-3 hidden-xs col-sm-3">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
  <div class="col-md-6 col-xs-offset-1 col-xs-11
    col-sm-6 col-sm-offset-0">
    <blockquote>
      <p>Lorem ipsum dolor sit amet, consectetur
        adipiscing elit. Integer posuere erat a ante.</p>
      <footer>Testimonial from someone at
        <cite title="Source Title">Source Title</cite></footer>
    </blockquote>
  </div>
  <div class="col-md-3 col-xs-7 col-sm-3">
    <img src="imgs/packt.png" class="img-responsive">
  </div>
  <div class="col-xs-5 hidden-sm hidden-md hidden-lg">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
</div>

As you can see, we had to reset the column offset in the testimonial column. This happened because it kept the offset that we had added for extra small devices. Moreover, we ensure that each image column spans just three columns, with the .col-sm-3 class on both images. The result of the row is as follows: Everything else seems fine! These viewports were easier to set up. See how Bootstrap helps us a lot?
Let's move on to the final viewport: desktops and large devices.

Desktop and large devices

Last but not least, we come to the grid layout for desktops and large devices. We skipped medium devices because we coded for that viewport first. Deactivate the Device mode in Chrome and put your page in a viewport with a width larger than or equal to 1200 pixels. The grid prefix that we will be using is .col-lg-*, and if you take a look at the page, you will see that everything is well placed and we don't need to make changes! (Although we would like to make some tweaks to make our layout fancier and learn some more about the Bootstrap grid.) We want to talk about a thing called column ordering. It is possible to change the order of the columns in the same row by applying the .col-lg-push-* and .col-lg-pull-* classes. (Note that we are using the large devices prefix, but any other grid class prefix can be used.) The .col-lg-push-* class means that the column will be pushed to the right by * columns, where * is the number of columns to push. Conversely, .col-lg-pull-* will pull the column to the left by * columns. Let's test this trick in the second row by swapping the order of its two columns:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6 col-lg-push-4">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6 col-lg-pull-4">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

We just added the .col-lg-push-4 class to the first column and .col-lg-pull-4 to the other to get this result. By doing this, we changed the order of both columns in the second row, as shown in the following image:

Summary

In this article, you learned a little about mobile first development and how Bootstrap can help us with this task.
We started from an existing Bootstrap template that was not ready for mobile visualization, and we fixed that. While fixing it, we used a lot of Bootstrap scaffolding properties and Bootstrap helpers, which made it much easier to fix everything. We did all of this without a single line of CSS or JavaScript; we used only Bootstrap and its inner powers! Resources for Article: Further resources on this subject: Bootstrap in a Box [article] The Bootstrap grid system [article] Deep Customization of Bootstrap [article]

Packt
17 Jun 2015
16 min read

Code Style in Django

In this article, written by Sanjeev Jaiswal and Ratan Kumar, authors of the book Learning Django Web Development, we will cover the basic topics you will need to follow along, such as coding practices for better Django web development, which IDE to use, version control, and so on. We will learn the following topics in this article:

- Django coding style
- Using an IDE for Django web development
- The Django project structure

This article is based on the important fact that code is read much more often than it is written. Thus, before you actually start building your projects, we suggest that you familiarize yourself with all the standard practices adopted by the Django community for web development.

Django coding style

Most of Django's important practices are based on Python. Though chances are you already know them, we will still take a break and write down all the documented practices so that you know these concepts even before you begin. To mainstream standard practices, Python Enhancement Proposals are made, and one widely adopted standard practice for development is PEP8, the style guide for Python code: the best way to style Python code, authored by Guido van Rossum. The documentation says, "PEP8 deals with semantics and conventions associated with Python docstrings." For further reading, please visit http://legacy.python.org/dev/peps/pep-0008/.

Understanding indentation in Python

When you are writing Python code, indentation plays a very important role. It delimits blocks of code, in the way that braces do in other languages such as C or Perl. But it is always a matter of discussion amongst programmers whether we should use tabs or spaces and, if spaces, how many: two, four, or eight. Using four spaces for indentation is better than eight; with a few more nested blocks, eight spaces per indentation level may take up more characters than can be shown on a single line. But, again, this is the programmer's choice.
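The warning about never mixing tabs and spaces is not only a style matter: Python 3 enforces it at compile time. The snippet below is an invented example (not from the book, whose samples target Python 2, where the interpreter is more lenient unless run with the -tt flag); it shows Python 3 rejecting a function body that indents one line with a tab and the next with spaces:

```python
# One body line indented with a tab, the next with eight spaces.
mixed_source = "def f():\n\tx = 1\n        return x\n"

try:
    compile(mixed_source, "<sample>", "exec")
    print("compiled fine")
except TabError as exc:
    # Python 3 refuses ambiguous tab/space mixtures outright.
    print("rejected:", exc)
```

The same check is what the standard library's tabnanny module runs over whole files, which is handy when inheriting a codebase of unknown hygiene.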
The following is what incorrect indentation practices lead to:

>>> def a():
...   print "foo"
...     print "bar"
IndentationError: unexpected indent

So, which one should we use: tabs or spaces? Choose either of them, but never mix tabs and spaces in the same project, or it will be a nightmare to maintain. The most popular way of indenting in Python is with spaces; tabs come in second. If any code you encounter has a mixture of tabs and spaces, you should convert it to using spaces exclusively.

Doing indentation right – do we need four spaces per indentation level?

There has been a lot of confusion about this as, of course, Python's syntax is all about indentation. Let's be honest: in most cases, it is. So, it is highly recommended that you use four spaces per indentation level, and if you have been following the two-space method, stop using it. There is nothing wrong with it as such, but when you deal with multiple third-party libraries, you might end up with a spaghetti of different styles, which will ultimately become hard to debug.

Now for continuation lines. When your code continues onto another line, you should either wrap it vertically aligned or use a hanging indent. When you use a hanging indent, the first line should not contain any arguments, and further indentation should be used to clearly distinguish the continuation lines.

A hanging indent (also known as a negative indent) is a style of indentation in which all lines are indented except for the first line of the paragraph. The preceding paragraph is an example of a hanging indent.

The following example illustrates how you should use a proper indentation method while writing code:

bar = some_function_name(var_first, var_second,
                         var_third, var_fourth)
# Here, the indentation of the arguments makes them grouped, and stand clear from the rest.
def some_function_name(
        var_first, var_second, var_third,
        var_fourth):
    print(var_first)
# This example shows the hanging indent.

We do not encourage the following coding style (PEP8 forbids it, even though Python will accept it):

# When vertical alignment is not used, arguments on the first line are forbidden.
foo = some_function_name(var_first, var_second,
    var_third, var_fourth)

# Further indentation is required, as this indentation does not distinguish between arguments and source code.
def some_function_name(
    var_first, var_second, var_third,
    var_fourth):
    print(var_first)

Although extra indentation is not required, you can use the following coding style if you want the continuation line to stand apart:

# Extra indentation is not necessary.
if (this
    and that):
    do_something()

Ideally, you should limit each line to a maximum of 79 characters. This allows a + or - character to fit when viewing differences in version control. It is even better to limit lines to 79 characters for uniformity across editors. You can use the rest of the space for other purposes.

The importance of blank lines

The importance of two blank lines and single blank lines is as follows:

- Two blank lines: a double blank line can be used to separate top-level functions and class definitions, which enhances code readability.
- Single blank lines: a single blank line can be used, for example, to separate each function inside a class, to group related functions together, or to separate logical sections of the source code.

Importing a package

Importing a package is a direct implication of code reusability. Therefore, always place imports at the top of your source file, just after any module comments and docstrings, and before the module's globals and constants. Each import should usually be on a separate line.
The best way to import packages is as follows:

import os
import sys

It is not advisable to import more than one package on the same line, for example:

import sys, os

You may import packages in the following fashion, although it is optional:

from django.http import Http404, HttpResponse

If your import gets longer, you can use the following method to declare it:

from django.http import (
    Http404, HttpResponse, HttpResponsePermanentRedirect
)

Grouping imported packages

Package imports can be grouped in the following ways:

- Standard library imports: such as sys, os, subprocess, and so on.

import re
import simplejson

- Related third-party imports: these are usually downloaded from the Python cheese shop, that is, PyPI (using pip install). Here is an example:

from decimal import *

- Local application / library-specific imports: this includes the local modules of your project, such as models, views, and so on.

from models import ModelFoo
from models import ModelBar

Naming conventions in Python/Django

Every programming language and framework has its own naming convention. The naming convention in Python/Django is more or less the same, but it is worth mentioning here. You will need to follow it when creating a variable name or global variable name and when naming a class, package, module, and so on. This is the common naming convention that we should follow:

- Name the variables properly: never use single characters, for example, 'x' or 'X', as variable names. It might be okay for your normal Python scripts, but when you are building a web application, you must name variables properly, as this determines the readability of the whole project.

- Naming of packages and modules: lowercase and short names are recommended for modules. Underscores can be used if they improve readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.
Since module names are mapped to file names (models.py, urls.py, and so on), it is important that module names be chosen to be fairly short, as some file systems are case insensitive and truncate long names.

- Naming a class: class names should follow the CamelCase naming convention, and classes for internal use can have a leading underscore in their names.

- Global variable names: first of all, you should avoid using global variables; but if you need them, you can prevent global variables from being exported via __all__, or by defining them with a prefixed underscore (the old, conventional way).

- Function names and method arguments: names of functions should be in lowercase, with words separated by underscores, and with self as the first argument of instance methods and cls as the first argument of class methods.

- Method names and instance variables: use the function naming rules: lowercase, with words separated by underscores as necessary to improve readability. Use one leading underscore only for non-public methods and instance variables.

Using an IDE for faster development

There are many options on the market when it comes to source code editors. Some people prefer full-fledged IDEs, whereas others like simple text editors. The choice is totally yours; pick whatever feels more comfortable. If you already use a certain program to work with Python source files, I suggest that you stick with it, as it will work just fine with Django. Otherwise, I can make a couple of recommendations, such as these:

- SublimeText: this editor is lightweight and very powerful. It is available for all major platforms, supports syntax highlighting and code completion, and works well with Python. The editor is open source, and you can find it at http://www.sublimetext.com/

- PyCharm: this, I would say, is the most intelligent code editor of all, with advanced features such as code refactoring and code analysis, which make development cleaner.
Features for Django include template debugging (which is a winner) and quick documentation look-up, which is a must for beginners. The community edition is free, and you can sample a 30-day trial version before buying the professional edition.

Setting up your project with the Sublime text editor

Most of the examples that we will show you in this book will be written using the Sublime text editor. In this section, we will show how to install it and set up a Django project.

- Download and installation: you can download Sublime from the download tab of the site www.sublimetext.com. Click on the downloaded file option to install.

- Setting up for Django: Sublime has a very extensive plug-in ecosystem, which means that once you have downloaded the editor, you can install plug-ins to add more features to it. After successful installation, it will look like this: Most important of all is Package Control, which is the manager for installing additional plugins directly from within Sublime. This will be your only manual package installation; it will take care of the rest of the package installations ahead.

Some recommendations for Python development using Sublime are as follows:

- Sublime Linter: this gives instant feedback about your Python code as you write it. It also has PEP8 support; this plugin will highlight, in real time, the points we discussed about better coding in the previous section, so that you can fix them.

- Sublime CodeIntel: this is maintained by the developer of SublimeLint. Sublime CodeIntel has some advanced functionalities, such as go-to definition, intelligent code completion, and import suggestions.

You can also explore other plugins for Sublime to increase your productivity.

Setting up the PyCharm IDE

You can use any of your favorite IDEs for Django project development. We will use the PyCharm IDE for this book.
This IDE is recommended as it will help you at debugging time, with breakpoints that will save you a lot of time in figuring out what actually went wrong. Here is how to install and set up the PyCharm IDE for Django:

- Download and installation: you can check the features and download the PyCharm IDE from the following link: http://www.jetbrains.com/pycharm/

- Setting up for Django: setting up PyCharm for Django is very easy. You just have to import the project folder and give the manage.py path, as shown in the following figure:

The Django project structure

The Django project structure changed in the 1.6 release. Django (django-admin.py) also has a startapp command to create an application, so it is high time to tell you the difference between an application and a project in Django. A project is a complete website or application, whereas an application is a small, self-contained Django application. An application is based on the principle that it should do one thing and do it right. To ease the pain of building a Django project right from scratch, Django helps you by auto-generating the basic project structure files, from which any project can be taken forward for development and feature addition. Thus, to conclude, we can say that a project is a collection of applications, and an application can be written as a separate entity and easily exported to other applications for reusability. To create your first Django project, open a terminal (or Command Prompt for Windows users), type the following command, and hit Enter:

$ django-admin.py startproject django_mytweets

This command will make a folder named django_mytweets in the current directory and create the initial directory structure inside it. Let's see what kind of files are created.
The new structure is as follows:

django_mytweets/
    django_mytweets/
    manage.py

This is the content of the inner django_mytweets/ folder:

django_mytweets/
    __init__.py
    settings.py
    urls.py
    wsgi.py

Here is a quick explanation of what these files are:

- django_mytweets (the outer folder): this folder is the project folder. Contrary to the earlier project structure, in which the whole project was kept in a single folder, the new Django project structure somehow hints that every project is an application inside Django. This means that you can import other third-party applications at the same level as the Django project. This folder also contains the manage.py file, which includes all the project management settings.

- manage.py: this utility script is used to manage our project. You can think of it as your project's version of django-admin.py. Actually, both django-admin.py and manage.py share the same backend code. Further clarification about the settings will be provided when we tweak the changes.

Let's have a look at the manage.py file:

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mytweets.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)

The source code of the manage.py file will be self-explanatory once you read the following explanation.

#!/usr/bin/env python

The first line is just the declaration that the following file is a Python file, followed by the import section, in which the os and sys modules are imported. These modules mainly contain system-related operations.

import os
import sys

The next piece of code checks whether the file is executed as the main program, and then loads the Django settings module onto the current path. As you are already running a virtual environment, this will set the path for all the modules to the path of the currently running virtual environment.
if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE",
        "django_mytweets.settings")

django_mytweets/ (the inner folder)

- __init__.py: Django projects are Python packages, and this file is required to tell Python that this folder is to be treated as a package. A package, in Python's terminology, is a collection of modules; packages are used to group similar files together and prevent naming conflicts.

- settings.py: this is the main configuration file for your Django project. In it, you can specify a variety of options, including database settings, site language(s), which Django features need to be enabled, and so on. By default, the database is configured to use SQLite, which is advisable for testing purposes. Here, we will only see how to enter the database in the settings file; it also contains the basic setting configuration, and with a slight modification of the manage.py file, it can be moved to another folder, such as config or conf. To make every other third-party application a part of the project, we need to register it in the settings.py file. INSTALLED_APPS is a variable that contains all the entries for the installed applications. As the project grows, it becomes difficult to manage; therefore, there are three logical partitions of the INSTALLED_APPS variable, as follows:

  - DEFAULT_APPS: this parameter contains the default Django installed applications (such as the admin)
  - THIRD_PARTY_APPS: this parameter contains other applications, such as SocialAuth, used for social authentication
  - LOCAL_APPS: this parameter contains the applications that are created by you

- urls.py: this is another configuration file. You can think of it as a mapping between URLs and the Django view functions that handle them. This file is one of Django's more powerful features.

When we start writing code for our application, we will create new files inside the project's folder. So, the folder also serves as a container for our code.
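To make that three-way split concrete, here is one common way to compose INSTALLED_APPS in settings.py from the three groups. The application names below are illustrative placeholders, not ones generated by startproject:

```python
# settings.py (sketch): building INSTALLED_APPS from logical groups.

DEFAULT_APPS = [
    # Applications that ship with Django itself.
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
]

THIRD_PARTY_APPS = [
    # Applications installed from PyPI, e.g. a social authentication package.
    'social_auth',          # illustrative name
]

LOCAL_APPS = [
    # Applications written by us for this project.
    'mytweets',             # illustrative name
]

# The variable Django actually reads is the concatenation of the three groups.
INSTALLED_APPS = DEFAULT_APPS + THIRD_PARTY_APPS + LOCAL_APPS
```

Keeping the groups separate makes it easy to see, at a glance, which entries you own and which came from elsewhere as the list grows.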
Now that you have a general idea of the structure of a Django project, let's configure our database system.

Summary

In this article, we prepared our development environment, created our first project, set up the database, and learned how to launch the Django development server. We learned the best way to write code for our Django project and saw the default Django project structure. Resources for Article: Further resources on this subject: Tinkering Around in Django JavaScript Integration [article] Adding a developer with Django forms [article] So, what is Django? [article]

Packt
29 Sep 2016
7 min read

Learning How to Manage Records in Visualforce

In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface: redrawing the form to change the available picklist options, or capturing different information based on the user's selections. (For more resources related to this topic, see here.)

Styling fields as required

Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied, as shown in the following screenshot: In the scenario where one or more inputs are required and there are additional validation rules (for example, when one of either the Email or Phone fields must be defined for a contact), this can lead to a drip feed of error messages to the user. This is because the inputs make repeated unsuccessful attempts to submit the form, each time getting slightly further in the process. Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input, decorated with a red bar identical to that of required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all the errors are returned to the user.

Getting ready

This topic makes use of a controller extension, so this must be created before the Visualforce page.

How to do it…

Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
Click on the New button. Paste the contents of the RequiredStylingExt.cls Apex class from the downloaded code into the Apex Class area. Click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field. Paste the contents of the RequiredStyling.page file from the downloaded code into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredStyling page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and a text input component, rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form:

<apex:pageBlockSectionItem >
  <apex:outputLabel value="Last Name"/>
  <apex:outputPanel id="detailrequiredpanel" layout="block"
      styleClass="requiredInput">
    <apex:outputPanel layout="block" styleClass="requiredBlock" />
    <apex:inputText value="{!Contact.LastName}"/>
  </apex:outputPanel>
</apex:pageBlockSectionItem>

The required styles are defined in the Visualforce page, rather than relying on any existing Salesforce style classes, to ensure that if Salesforce changes the names of its style classes, this does not break the page.
The controller extension save action method carries out validation of all fields and attaches error messages to the page for all validation failures: if (String.IsBlank(cont.name)) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please enter the contact name')); error=true; } if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please supply the email address or phone number')); error=true; } Styling table columns as required When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs. Getting ready This topic makes use of a custom controller, so this will need to be created before the Visualforce page. How to do it… First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link. 
On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row:

The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component:

<apex:column >
  <apex:facet name="header">
    <apex:outputText styleclass="requiredHeader" value="{!$ObjectType.Contact.fields.LastName.label}" />
  </apex:facet>
  <apex:inputField value="{!contact.LastName}" required="false"/>
</apex:column>

The Visualforce page custom controller Save method checks whether any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added, and if an error is added to any record, the save does not complete:

if ( (!String.IsBlank(cont.FirstName)) ||
     (!String.IsBlank(cont.LastName)) ) {
  // a field is defined - check for last name
  if (String.IsBlank(cont.LastName)) {
    error=true;
    cont.LastName.addError('Please enter a value');
  }
}

String.IsBlank() is used as it carries out three checks at once: that the supplied string is not null, that it is not empty, and that it does not only contain whitespace.

Summary

Thus, in this article, we successfully mastered the techniques for styling fields and table columns to suit custom requirements.

Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Using Spring JMX within Java Applications [Article]
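Both recipes rely on the same server-side pattern: evaluate every validation rule, collect all failures, and report them together instead of stopping at the first error. Apex cannot run outside Salesforce, so the following is a hypothetical JavaScript sketch of that pattern; the function and field names (validateContact, lastName, email, phone) are illustrative and not part of the recipes' code:

```javascript
// Hypothetical JavaScript port of the recipes' validation pattern:
// evaluate every rule, collect all failures, report them together.
function isBlank(value) {
  // Mirrors Apex String.IsBlank(): null/undefined, empty, or whitespace-only.
  return value === null || value === undefined || String(value).trim() === '';
}

function validateContact(contact) {
  const errors = [];
  if (isBlank(contact.lastName)) {
    errors.push('Please enter the contact name');
  }
  if (isBlank(contact.email) && isBlank(contact.phone)) {
    errors.push('Please supply the email address or phone number');
  }
  return errors;
}
```

An empty contact yields both error messages at once, which is exactly the behavior the recipes aim for in place of a drip feed of single errors.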
Packt
16 Oct 2009
12 min read

Posting on Your WordPress Blog

The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (you). If a blog is like an online diary, then every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date and categories. In this article, you will learn how to create a new post and what kind of information you can attach to it. Adding a simple post Let's review the process of adding a simple post to your blog. Whenever you want to do maintenance on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) for your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), then your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin). When you first log into the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it. The very top bar, which I'll refer to as the top menu, is mostly dark grey and on the left, of course, is the main menu. The top menu and the main menu exist on every page within the WP Admin. The main section on the right contains information for the current page you're on. In this case, we're on the Dashboard. It contains boxes that have a variety of information about your blog, and about WordPress in general. The quickest way to get to the Add New Post page at any time is to click on the New Post link at the top of the page in the top bar (top menu). This is the Add New Post page: To quickly add a new post to your site, all you have to do is: Type in a title into the text field under Add New Post (for example, Making Lasagne). Type the text of your post in the content box. Note that the default view is Visual, but you actually have a choice of the HTML view as well. Click on the Publish button, which is at the far right. 
Note that you can choose to save a draft or view a preview of your post. In the following image, the title field, the content box, and the Publish button of the Add New Post page are highlighted: Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll see yourself still on the Edit Post page, but now the following message has appeared telling you that your post was published and giving you a link to View post: If you go to the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top): Common post options Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post page. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options. Categories and tags Categories and tags are two similar types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog. Categories are primarily used for structural organizing. They can be hierarchical. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in this blog will likely have one to four categories assigned to it. For example, a blog about food might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants. Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to 30 tags in use. Each post in this blog will likely have three to ten tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, easy. Let's add a new post to the blog. 
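As an aside, tag lists like soup, vegetarian, autumn, hot, easy are typically entered as a single comma-separated string. A rough sketch of how such a field might be normalized before saving (this is illustrative, not WordPress's actual implementation):

```javascript
// Rough sketch (not WordPress core code): turn a comma-separated tag
// string into a clean, de-duplicated list of lowercase tags.
function parseTags(input) {
  const seen = new Set();
  return input
    .split(',')
    .map((tag) => tag.trim().toLowerCase())
    .filter((tag) => {
      // Drop empty entries and duplicates, keeping first-seen order.
      if (tag === '' || seen.has(tag)) {
        return false;
      }
      seen.add(tag);
      return true;
    });
}
```

Whitespace around commas, mixed case, and repeated tags all collapse to one tidy list, which is why you can type tag lists fairly loosely.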
This time, we'll give it not only a title and content, but also tags and categories. When adding tags, just type your list of tags into the Tags box on the right, separated by commas: Then click on the Add button. The tags you just typed in will appear below the text field with little xs next to them. You can click on an x to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most popular tags link in this box so that you can easily re-use tags. Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link. Type your category into the text field and click on the Add button. Your new category will show up in the list, already checked. Look at the following screenshot: If in the future you want to add a category that needs a parent category, select Parent category from the pull-down menu before clicking on the Add button. If you want to manage more details about your categories, move them around, rename them, assign parent categories, and assign descriptive text. You can do this on the Categories page, which we'll see in detail later in this article. Now fill in your title and content here: Click on the Publish button and you're done. When you look at the front page of your site, you'll see your new post on the top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself: Adding an image to a post You may often want to have an image show up in your post. WordPress makes this very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly. 
Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on Edit underneath your new post. To add an image to a post, first you'll need to have that image on your computer. Before you get ready to upload an image, make sure that your image is optimized for the Web. Huge files will be uploaded slowly and slow down the process of viewing your site. You can re-size and optimize images using software such as GIMP or Photoshop. For the example in this article, I have used a photo of butternut squash soup that I have taken from the website where I got the recipe, and I know it's on the desktop of my computer. Once you have a picture on your computer and know where it is, follow these steps to add the photo to your blog post: Click on the little photo icon, which is next to the word Upload/Insert and below the box for the title: In the box that appears, click on the Select Files button and browse to your image. Then click on Open and watch the uploader bar. When it's done, you'll have a number of fields you can fill in: The only fields that are important right now are Title, Alignment, and Size. Title is a description for the image, Alignment will tell the image whether to have text wrap around it, and Size is the size of the image. As you can see, I've chosen the Right alignment and the Thumbnail size. Now click on Insert into Post. This box will disappear, and your image will show up in the post on the edit page itself: Now click on the Update Post button and go look at the front page of your site again. There's your image! You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? You can set the pixel dimensions of your uploaded images and other preferences by opening Settings in the main menu and then clicking on Media. 
This takes you to the Media Settings page: Here you can specify the size of the uploaded images for: Thumbnail Medium Large If you change the dimensions on this page and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created already using the old dimensions. Using the Visual editor versus the HTML editor WordPress comes with a Visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, which stands for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the HTML editor—particularly useful if you want to add special content or styling. To switch from the rich text editor to the HTML editor, click on the HTML tab next to the Visual tab at the top of the content box: You'll see your post in all its raw HTML glory and you'll get a new set of buttons that lets you quickly bold and italicize text as well as add link code, image code, and so on. You can make changes and swap back and forth between the tabs to see the result. If you want the HTML tab to be your default editor, you can change this on your Profile page. Navigate to Users | Your Profile, and select the Disable the visual editor when writing checkbox. Drafts, timestamps, and managing posts There are three additional, simple but common, items I'd like to cover in this section: drafts, timestamps, and managing posts. Drafts WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box), instead of the Publish button. 
Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you about once a minute. You'll see this in the area just below the content box. The text will say Saving Draft... and then the time of the last draft saved: At this point, after a manual save or an auto-save, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from the Dashboard or from the Edit Posts page. Timestamps WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. The default timestamp will always be set to the moment you publish your post. To change it, just find the Publish box and click on the Edit link (next to the calendar icon and Publish immediately), and fields will show up with the current date and time for you to change: Change the details, click on the OK button, and then Publish your post (or save a draft). Managing posts If you want to see a list of your posts so that you can easily skim and manage them, you just need to go to the Edit Posts page in the WP Admin by navigating to Posts in the main menu. You'll see a detailed list of your posts, as seen in the next screenshot: There are so many things you can do on this page! 
You can: Choose a post to edit—click on a post title and you'll go back to the main Edit Post page Quick-edit a post—click on the Quick Edit link for any post and new options will appear right in the list, which will let you edit the title, timestamp, categories, tags, and more Delete one or more posts—click on the checkboxes next to the posts you want to delete, choose Delete from the Bulk Actions drop-down menu at the bottom, and click on the Apply button Bulk edit posts—choose Edit from the Bulk Actions menu at the bottom, click on the Apply button, and you'll be able to assign categories and tags to multiple posts, as well as edit other information about them You can experiment with the other links and options on this page. Just click on the pull-down menus and links, and see what happens.
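Returning briefly to the Media Settings sizes discussed earlier: medium and large images are scaled proportionally so they fit within the configured maximum dimensions (thumbnails are cropped instead). A sketch of that fit calculation, assuming simple proportional scaling with no upscaling:

```javascript
// Assumption: medium/large images are scaled proportionally to fit
// inside the configured box, and are never upscaled. This helper is a
// sketch of that calculation, not WordPress's actual resizer.
function fitWithin(width, height, maxWidth, maxHeight) {
  // Cap the scale factor at 1 so small images are left alone.
  const scale = Math.min(maxWidth / width, maxHeight / height, 1);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```

For example, a 1600x1200 photo fit into a 300x300 medium box comes out at 300x225, keeping its aspect ratio.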

Packt
16 May 2014
5 min read

Optimizing Magento Performance — Using HHVM

(For more resources related to this topic, see here.)

HipHop Virtual Machine

As we could write a whole book (or two) about HHVM, we will just give the key ideas here. HHVM is a virtual machine that translates any called PHP file into HHVM bytecode, in the same spirit as the Java or .NET virtual machines. HHVM transforms your PHP code into a lower-level language that is much faster to execute. Of course, the transformation time (compiling) costs a lot of resources; therefore, HHVM ships with a cache mechanism similar to APC. This way, the compiled PHP files are stored and reused when the original file is requested. With HHVM, you keep the flexibility and ease of writing PHP, but you now have performance close to that of C++. Hear the words of the HHVM team at Facebook:

"HHVM (aka the HipHop Virtual Machine) is a new open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time compilation approach to achieve superior performance while maintaining the flexibility that PHP developers are accustomed to. To date, HHVM (and its predecessor HPHPc) has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP 5.2 engine + APC. HHVM can be run as a standalone webserver (in other words, without the Apache webserver and the "modphp" extension). HHVM can also be used together with a FastCGI-based webserver, and work is in progress to make HHVM work smoothly with Apache."

If you think this is too good to be true, you're right! Indeed, HHVM has a major drawback. HHVM was, and still is, focused on the needs of Facebook. Therefore, you might have a bad time trying to use your custom-made PHP applications inside it. Nevertheless, this opportunity to speed up large PHP applications has been seen by talented developers who improve it, day after day, in order to support more and more frameworks.
As our interest is in Magento, I will introduce you to Daniel Sloof, a developer from the Netherlands. More interestingly, Daniel has done (and still does) amazing work at adapting HHVM for Magento. Here are the commands to install Daniel Sloof's version of HHVM for Magento:

$ sudo apt-get install git
$ git clone https://github.com/danslo/hhvm.git
$ sudo chmod +x configure_ubuntu_12.04.sh
$ sudo ./configure_ubuntu_12.04.sh
$ sudo CMAKE_PREFIX_PATH=`pwd`/.. make

If you thought that the first step was long, you will be astonished by the time required to actually build HHVM. Nevertheless, the wait is definitely worth it. The following screenshot shows how your terminal will look for the next hour or so:

Create a file named hhvm.hdf under /etc/hhvm and write the following code inside:

Server {
  Port = 80
  SourceRoot = /var/www/_MAGENTO_HOME_
}
Eval {
  Jit = true
}
Log {
  Level = Error
  UseLogFile = true
  File = /var/log/hhvm/error.log
  Access {
    * {
      File = /var/log/hhvm/access.log
      Format = %h %l %u %t \"%r\" %>s %b
    }
  }
}
VirtualHost {
  * {
    Pattern = .*
    RewriteRules {
      dirindex {
        pattern = ^/(.*)/$
        to = $1/index.php
        qsa = true
      }
    }
  }
}
StaticFile {
  FilesMatch {
    * {
      pattern = .*\.(dll|exe)
      headers {
        * = Content-Disposition: attachment
      }
    }
  }
  Extensions {
    css = text/css
    gif = image/gif
    html = text/html
    jpe = image/jpeg
    jpeg = image/jpeg
    jpg = image/jpeg
    png = image/png
    tif = image/tiff
    tiff = image/tiff
    txt = text/plain
  }
}

Now, run the following command:

$ sudo ./hhvm --mode daemon --config /etc/hhvm.hdf

The hhvm executable is under hhvm/hphp/hhvm. Is all of this worth it?
Here's the response:

ab -n 100 -c 5 http://192.168.0.105/index.php/furniture/living-room.html

Server Software:
Server Hostname:        192.168.0.105
Server Port:            80

Document Path:          /index.php/furniture/living-room.html
Document Length:        35552 bytes

Concurrency Level:      5
Time taken for tests:   4.970 seconds
Requests per second:    20.12 [#/sec] (mean)
Time per request:       248.498 [ms] (mean)
Time per request:       49.700 [ms] (mean, across all concurrent requests)
Transfer rate:          707.26 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   12.1      0      89
Processing:   107  243   55.9    243     428
Waiting:      107  242   55.9    242     427
Total:        110  245   56.7    243     428

We literally reach a whole new world here. Indeed, our Magento instance is six times faster than after all our previous optimizations and about 20 times faster than the default Magento served by Apache. The following graph shows the performances:

Our Magento instance is now flying at lightning speed, but what are the drawbacks? Is it still as stable as before? All the optimizations we did so far, are they still effective? Can we go even further? In what follows, we present a non-exhaustive list of answers: Fancy extensions and modules may (and will) trigger HHVM incompatibilities. Magento is a relatively old piece of software and combining it with a cutting-edge technology such as HHVM can have some unpredictable (and undesirable) effects. HHVM is so complex that fixing a Magento-related bug requires a lot of skill and dedication. HHVM takes care of PHP, not of cache mechanisms or the accelerator we installed before. Therefore, APC, memcached, and Varnish are still running and helping to improve our performances. If you become addicted to performances, HHVM is now supporting FastCGI through Nginx and Apache. You can find out more about that at: http://www.hhvm.com/blog/1817/fastercgi-with-hhvm.
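The summary figures that ab prints are simple derivations from the raw totals. A sketch of those relationships, using the numbers from the run above (4.970 seconds, 100 requests, concurrency 5):

```javascript
// How ab's summary lines relate to the raw totals.
function abSummary(totalSeconds, requests, concurrency) {
  return {
    // "Requests per second ... (mean)"
    requestsPerSecond: +(requests / totalSeconds).toFixed(2),
    // "Time per request ... (mean)": per request, per concurrent client
    msPerRequestMean: +((totalSeconds * 1000 * concurrency) / requests).toFixed(3),
    // "Time per request ... (mean, across all concurrent requests)"
    msPerRequestAcrossAll: +((totalSeconds * 1000) / requests).toFixed(3),
  };
}
```

The tiny difference from ab's reported 248.498 ms comes from ab using unrounded internal timings rather than the rounded 4.970 s total.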
Summary

In this article, we successfully used the HipHop Virtual Machine (HHVM) from Facebook to serve Magento. This improvement optimizes our Magento performance incredibly (20 times faster); that is, the time required initially was 110 seconds, while now it is less than 5 seconds.

Resources for Article: Further resources on this subject: Magento: Exploring Themes [article] Getting Started with Magento Development [article] Enabling your new theme in Magento [article]

Packt
03 Aug 2016
9 min read

Showing cached content first then networks

In this article by Sean Amarasinghe, the author of the book Service Worker Development Cookbook, we are going to look at the methods that enable us to control cached content by creating a performance art event viewer web app. If you are a regular visitor to a certain website, chances are that you may be loading most of the resources, like CSS and JavaScript files, from your cache, rather than from the server itself. This saves necessary bandwidth for the server, as well as requests over the network. Having control over which content we deliver from the cache and the server is a great advantage. Service workers provide us with this powerful feature by giving us programmatic control over the content. (For more resources related to this topic, see here.)

Getting ready

To get started with service workers, you will need to have the service worker experiment feature turned on in your browser settings. Service workers only run across HTTPS.

How to do it...

Follow these instructions to set up your file structure.
Alternatively, you can download the files from the following location: https://github.com/szaranger/szaranger.github.io/tree/master/service-workers/03/02/

First, we must create an index.html file as follows:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Cache First, then Network</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <section id="events">
    <h1><span class="nyc">NYC</span> Events TONIGHT</h1>
    <aside>
      <img src="hypecal.png" />
      <h2>Source</h2>
      <section>
        <h3>Network</h3>
        <input type="checkbox" name="network" id="network-disabled-checkbox">
        <label for="network">Disabled</label><br />
        <h3>Cache</h3>
        <input type="checkbox" name="cache" id="cache-disabled-checkbox">
        <label for="cache">Disabled</label><br />
      </section>
      <h2>Delay</h2>
      <section>
        <h3>Network</h3>
        <input type="text" name="network-delay" id="network-delay" value="400" /> ms
        <h3>Cache</h3>
        <input type="text" name="cache-delay" id="cache-delay" value="1000" /> ms
      </section>
      <input type="button" id="fetch-btn" value="FETCH" />
    </aside>
    <section class="data connection">
      <table>
        <tr>
          <td><strong>Network</strong></td>
          <td><output id='network-status'></output></td>
        </tr>
        <tr>
          <td><strong>Cache</strong></td>
          <td><output id='cache-status'></output></td>
        </tr>
      </table>
    </section>
    <section class="data detail">
      <output id="data"></output>
    </section>
  </section>
  <script src="index.js"></script>
</body>
</html>

Create a CSS file called style.css in the same folder as the index.html file. You can find the source code in the following location on GitHub: https://github.com/szaranger/szaranger.github.io/blob/master/service-workers/03/02/style.css

Create a JavaScript file called index.js in the same folder as the index.html file. You can find the source code in the following location on GitHub: https://github.com/szaranger/szaranger.github.io/blob/master/service-workers/03/02/index.js

Open up a browser and go to index.html. First we are requesting data from the network with the cache enabled.
Click on the Fetch button. If you click Fetch again, the data is retrieved first from the cache, and then from the network, so you see duplicate data (note that the last line is the same as the first). Now we are going to select the Disabled checkbox under the Network label, and click the Fetch button again, in order to fetch data only from the cache. Select the Disabled checkbox under the Network label, as well as the Cache label, and click the Fetch button again.

How it works...

In the index.js file, we are setting a page-specific name for the cache, as the caches are per origin based, and no other page should use the same cache name: var CACHE_NAME = 'cache-and-then-network'; If you inspect the Resources tab of the development tools, you will find the cache inside the Cache Storage tab. If we have already fetched network data, we don't want the cache fetch to complete and overwrite the data that we just got from the network. We use the networkDataReceived flag to let the cache fetch callbacks know whether a network fetch has already completed: var networkDataReceived = false; We are storing elapsed time for both network and cache in two variables: var networkFetchStartTime; var cacheFetchStartTime; The source URL, for example, points to a file location in GitHub via RawGit: var SOURCE_URL = 'https://cdn.rawgit.com/szaranger/szaranger.github.io/master/service-workers/03/02/events'; If you want to set up your own source URL, you can easily do so by creating a gist, or a repository, in GitHub, and creating a file with your data in JSON format (you don't need the .json extension). Once you've done that, copy the URL of the file, head over to https://rawgit.com, and paste the link there to obtain another link with a content type header, as shown in the following screenshot: Between the time we press the Fetch button and the completion of receiving data, we have to make sure the user doesn't change the criteria for the search, or press the Fetch button again.
To handle this situation, we disable the controls: function clear() { outlet.textContent = ''; cacheStatus.textContent = ''; networkStatus.textContent = ''; networkDataReceived = false; } function disableEdit(enable) { fetchButton.disabled = enable; cacheDelayText.disabled = enable; cacheDisabledCheckbox.disabled = enable; networkDelayText.disabled = enable; networkDisabledCheckbox.disabled = enable; if(!enable) { clear(); } } The returned data will be rendered to the screen in rows: function displayEvents(events) { events.forEach(function(event) { var tickets = event.ticket ? '<a href="' + event.ticket + '" class="tickets">Tickets</a>' : ''; outlet.innerHTML = outlet.innerHTML + '<article>' + '<span class="date">' + formatDate(event.date) + '</span>' + ' <span class="title">' + event.title + '</span>' + ' <span class="venue"> - ' + event.venue + '</span> ' + tickets + '</article>'; }); } Each item of the events array will be printed to the screen as rows. The function handleFetchComplete is the callback for both the cache and the network. If the disabled checkbox is checked, we are simulating a network error by throwing an error: var shouldNetworkError = networkDisabledCheckbox.checked, cloned; if (shouldNetworkError) { throw new Error('Network error'); } Because of the reason that request bodies can only be read once, we have to clone the response: cloned = response.clone(); We place the cloned response in the cache using cache.put as a key value pair. This helps subsequent cache fetches to find this update data: caches.open(CACHE_NAME).then(function(cache) { cache.put(SOURCE_URL, cloned); // cache.put(URL, response) }); Now we read the response in JSON format. 
Also, we make sure that any in-flight cache requests will not be overwritten by the data we have just received, using the networkDataReceived flag: response.json().then(function(data) { displayEvents(data); networkDataReceived = true; }); To prevent overwriting the data we received from the network, we make sure only to update the page in case the network request has not yet returned: result.json().then(function(data) { if (!networkDataReceived) { displayEvents(data); } }); When the user presses the fetch button, they make nearly simultaneous requests of the network and the cache for data. This happens on a page load in a real world application, instead of being the result of a user action: fetchButton.addEventListener('click', function handleClick() { ... } We start by disabling any user input while the network fetch requests are initiated: disableEdit(true); networkStatus.textContent = 'Fetching events...'; networkFetchStartTime = Date.now(); We request data with the fetch API, with a cache busting URL, as well as a no-cache option in order to support Firefox, which hasn't implemented the caching options yet: networkFetch = fetch(SOURCE_URL + '?cacheBuster=' + now, { mode: 'cors', cache: 'no-cache', headers: headers }) In order to simulate network delays, we wait before calling the network fetch callback. In situations where the callback errors out, we have to make sure that we reject the promise we received from the original fetch: return new Promise(function(resolve, reject) { setTimeout(function() { try { handleFetchComplete(response); resolve(); } catch (err) { reject(err); } }, networkDelay); }); To simulate cache delays, we wait before calling the cache fetch callback. 
If the callback errors out, we make sure that we reject the promise we got from the original call to match: return new Promise(function(resolve, reject) { setTimeout(function() { try { handleCacheFetchComplete(response); resolve(); } catch (err) { reject(err); } }, cacheDelay); }); The formatDate function is a helper function for us to convert the date format we receive in the response into a much more readable format on the screen: function formatDate(date) { var d = new Date(date), month = (d.getMonth() + 1).toString(), day = d.getDate().toString(), year = d.getFullYear(); if (month.length < 2) month = '0' + month; if (day.length < 2) day = '0' + day; return [month, day, year].join('-'); } If you consider a different date format, you can shuffle the position of the array in the return statement to your preferred format. Summary In this article, we have learned how to control cached content by creating a performance art event viewer web app. Resources for Article: Further resources on this subject: AngularJS Web Application Development Cookbook [Article] Being Offline [Article] Consuming Web Services using Microsoft Dynamics AX [Article]
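To recap the core idea of this article, the cache-then-network race can be condensed into a framework-free sketch. The fetcher callbacks are injected here so the logic can run without the browser's fetch and caches APIs; in the real page, the network fetcher wraps fetch() and the cache fetcher wraps caches.match():

```javascript
// Framework-free sketch of the cache-then-network race described above.
function cacheThenNetwork(fetchers, render) {
  let networkDataReceived = false;
  fetchers.network((data) => {
    networkDataReceived = true;
    render(data, 'network');
  });
  fetchers.cache((data) => {
    // Never let slower cached data overwrite fresher network data.
    if (!networkDataReceived) {
      render(data, 'cache');
    }
  });
}
```

If the network answers first, the cached copy is discarded; if the network is unavailable, the cached copy is still rendered, which is exactly the behavior the networkDataReceived flag enforces in index.js.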
Packt
13 Oct 2010
8 min read

Administrating the MySQL Server with phpMyAdmin

Packt
13 Oct 2010
8 min read
This article is an excerpt from Mastering phpMyAdmin 3.3.x for Effective MySQL Management, a complete guide to phpMyAdmin 3.3 written by the project leader of phpMyAdmin.

Managing users and their privileges

The Privileges subpage (visible only if we are logged in as a privileged user) contains dialogs to manage MySQL user accounts. It also contains dialogs to manage privileges on the global, database, and table levels. This subpage is hierarchical. For example, when editing a user's privileges, we can see the global privileges as well as the database-specific privileges. We can then go deeper to see the table-specific privileges for this database-user combination.

The user overview

The first page displayed when we enter the Privileges subpage is called User overview. This shows all user accounts and a summary of their global privileges, as shown in the next screenshot. From this page, we can:

Edit a user's privileges, via the Edit link for this user
Use the checkboxes to remove users, via the Remove selected users dialog
Access the Add a new User dialog

Privileges reload

At the bottom of the User overview page, the following message is displayed:

Note: phpMyAdmin gets the users' privileges directly from MySQL's privilege tables. The content of these tables may differ from the privileges the server uses, if they have been changed manually. In this case, you should reload the privileges before you continue.

Here, the text reload the privileges is clickable. The effective privileges (the ones against which the server bases its access decisions) are the privileges that are located in the server's memory.
Privilege modifications that are made from the User overview page are made both in memory and on disk, in the mysql database. Modifications made directly to the mysql database do not have immediate effect. The reload the privileges operation reads the privileges from the database and makes them effective in memory.

Adding a user

The Add a new User link opens a dialog for user account creation. First, we see the panel where we'll describe the account itself. The second part of the Add a new User dialog is where we'll specify the user's global privileges, which apply to the server as a whole.

Entering the username

The User name menu offers two choices. Firstly, we can choose Use text field and enter a username in the box, or we can choose Any user to create an anonymous user (the blank user). Let's choose Use text field and enter bill.

Assigning a host value

By default, this menu is set to Any host, with % as the host value. The Local choice means "localhost". The Use host table choice (which creates a blank value in the host field) means to look in the mysql.hosts table for database-specific privileges. Choosing Use text field allows us to enter the exact host value we want. Let's choose Local.

Setting passwords

Even though it's possible to create a user without a password (by selecting the No password option), it's best to have a password. We have to enter it twice (as we cannot see what is entered) to confirm the intended password. A secure password should have more than eight characters, and should contain a mixture of uppercase and lowercase characters, digits, and special characters. Therefore, it's recommended to have phpMyAdmin generate a password; this is possible in JavaScript-enabled browsers. In the Generate Password dialog, clicking on Generate enters a random password (in clear text) on the screen and fills the Password and Re-type input fields with the generated password. At this point, we should note the password so that we can pass it on to the user.
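Behind these dialogs, phpMyAdmin issues ordinary SQL statements against the server. A hedged sketch of what creating our bill@localhost account amounts to (the password value here is purely illustrative, not from the book):

```sql
-- Roughly what the Add a new User dialog sends for our example
-- (MySQL 5.x-era syntax; the password shown is a placeholder).
CREATE USER 'bill'@'localhost' IDENTIFIED BY 'S3cure-P4ss!';
```

Running the equivalent statement yourself in the SQL tab produces the same account that the dialog would create.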
Understanding rights for database creation

A frequent convention is to assign a user the rights to a database having the same name as the user. To accomplish this, the Database for user section offers the checkbox Create database with same name and grant all privileges. Selecting this checkbox automates the process by creating both the database (if it does not already exist) and the corresponding rights. Please note that, with this method, each user is limited to one database (user bill, database bill). Another possibility is to allow users to create databases that have the same prefix as their usernames. The other choice, Grant all privileges on wildcard name (username_%), performs this function by assigning a wildcard privilege. With this in place, user bill could create the databases bill_test, bill_2, bill_payroll, and so on; phpMyAdmin does not pre-create the databases in this case.

Assigning global privileges

Global privileges determine the user's access to all databases. Hence, these are sometimes known as "superuser privileges". A normal user should not have any of these privileges unless there is a good reason for it. Of course, if we are really creating a superuser, we will select every global privilege that he or she needs. These privileges are further divided into Data, Structure, and Administration groups. In our example, bill will not have any global privileges.

Limiting the resources used

We can limit the resources used by this user on this server (for example, the maximum queries per hour). Zero means no limit. We will not impose any resource limits on bill. The following screenshot shows the status of the screen just before hitting Go to create this user's definition (with the remaining fields being set to default):

Editing a user profile

The page used to edit a user's profile appears after a user's creation, or whenever we click on Edit for a user in the User overview page.
There are four sections on this page, each with its own Go button. Hence, each section operates independently and has a distinct purpose.

Editing privileges

The section for editing the user's privileges has the same look as the Add a new User dialog, and is used to view and change global privileges.

Assigning database-specific privileges

In this section, we define the databases to which our user has access, and his or her exact privileges on these databases. As shown in the previous screenshot, we see None because we haven't defined any privileges yet. There are two ways of defining database privileges. First, we can choose one of the existing databases from the drop-down menu. This assigns privileges only for the chosen database. We can also choose Use text field and enter a database name. We could enter a non-existent database name, so that the user can create it later (provided that we give him or her the CREATE privilege in the next panel). We can also use special characters, such as the underscore and the percent sign, for wildcards. For example, entering bill here would enable the user to create a bill database, and entering bill% would enable the user to create a database with any name that starts with bill. For our example, we will enter bill and then click on Go.

The next screen is used to set bill's privileges on the bill database, and to create table-specific privileges. To learn more about the meaning of a specific privilege, we can move the mouse over a privilege name (which is always in English), and an explanation about this privilege appears in the current language. We give the SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, and DROP privileges to bill on this database. We then click on Go. After the privileges have been assigned, the interface stays at the same place, so that we can refine these privileges further. We cannot assign table-specific privileges for the moment, as the database does not yet exist.
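The database-specific privileges assigned through the interface correspond, roughly, to GRANT statements like the following sketch (the wildcard variant matches the username_% convention mentioned earlier; treat the exact quoting as an assumption on our part):

```sql
-- Privileges for bill on the bill database, as assigned above
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
  ON `bill`.* TO 'bill'@'localhost';

-- Wildcard variant for the "username_%" convention
GRANT ALL PRIVILEGES ON `bill\_%`.* TO 'bill'@'localhost';

-- Equivalent of phpMyAdmin's "reload the privileges" link; only needed
-- after editing the mysql privilege tables directly
FLUSH PRIVILEGES;
```

Privileges granted via GRANT take effect immediately, which is why the reload link is only relevant after manual edits to the mysql tables.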
To go back to the general privileges page of bill, click on the 'bill'@'localhost' title. This brings us back to the familiar page, except for a change in one section. We see the existing privileges (which we can Edit or Revoke) on the bill database for user bill, and we can add privileges for bill on another database. We can also see that bill has no table-specific privileges on the bill database.

Changing the password

The Change password dialog is part of the Edit user page, and we can use it either to change bill's password or to remove it. Removing the password will enable bill to log in without a password. The dialog offers a choice of password hashing options, and it's recommended to keep the default of MySQL 4.1+ hashing. For more details about hashing, please visit http://dev.mysql.com/doc/refman/5.1/en/password-hashing.html.
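In SQL terms, the Change password dialog boils down to something like this sketch (MySQL 5.1-era syntax, matching the hashing discussion above; the password value is illustrative):

```sql
-- Change bill's password using MySQL 4.1+ hashing
SET PASSWORD FOR 'bill'@'localhost' = PASSWORD('N3w-S3cret!');

-- Remove the password, allowing login without one
SET PASSWORD FOR 'bill'@'localhost' = '';
```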
Angular's component architecture

Packt
07 Jul 2016
11 min read
In this article, Gion Kunz, author of the book Mastering Angular 2 Components, explains how the concept of directives from the first version of Angular changed the game in frontend UI frameworks. This was the first time that I felt that there was a simple yet powerful concept that allowed the creation of reusable UI components. Directives could communicate with DOM events or messaging services. They allowed you to follow the principle of composition, and you could nest directives and create larger directives that solely consisted of smaller directives arranged together. Actually, directives were a very nice implementation of components for the browser.

(For more resources related to this topic, see here.)

In this section, we'll look into the component-based architecture of Angular 2 and how the previous topic about general UI components fits into Angular.

Everything is a component

As an early adopter of Angular 2, while talking to other people about it, I was frequently asked what the biggest difference from the first version is. My answer to this question was always the same: everything is a component. For me, this paradigm shift was the most relevant change, one that both simplified and enriched the framework. Of course, there are a lot of other changes in Angular 2. However, as an advocate of component-based user interfaces, I've found this change to be the most interesting one. Of course, this change also came with a lot of architectural changes. Angular 2 supports the idea of looking at the user interface holistically and supporting composition with components. However, the biggest difference from its first version is that now your pages are no longer global views; they are simply components assembled from other components. If you've been following this chapter, you'll notice that this is exactly what a holistic approach to user interfaces demands: no more pages, but systems of components.
Angular 2 still uses the concept of directives, although directives are now really what the name suggests: orders for the browser to attach a given behavior to an element. Components are a special kind of directive that come with a view.

Creating a tabbed interface component

Let's introduce a new UI component in our ui folder in the project that will provide us with a tabbed interface that we can use for composition. We use what we learned about content projection in order to make this component reusable. We'll actually create two components: a Tabs component, which itself holds individual Tab components. First, let's create the component class within a new tabs/tab folder in a file called tab.js:

import {Component, Input, ViewEncapsulation, HostBinding} from '@angular/core';
import template from './tab.html!text';

@Component({
  selector: 'ngc-tab',
  host: {
    class: 'tabs__tab'
  },
  template,
  encapsulation: ViewEncapsulation.None
})
export class Tab {
  @Input() name;

  @HostBinding('class.tabs__tab--active') active = false;
}

The only state that we store in our Tab component is whether the tab is active or not. The name that is displayed on the tab will be available through an input property. We use a class property binding to make a tab visible: based on the active flag we set a class, and without it, our tabs are hidden. Let's take a look at the tab.html template file of this component:

<ng-content></ng-content>

Is this it already? Actually, yes it is! The Tab component is only responsible for the storage of its name and active state, as well as the insertion of the host element content in the content projection point. There's no additional templating needed. Now, we'll move one level up and create the Tabs component that will be responsible for grouping all the Tab components.
As we won't include Tab components directly when we want to create a tabbed interface, but will use the Tabs component instead, the Tabs component needs to forward the content that we put into its host element. Let's look at how we can achieve this. In the tabs folder, we will create a tabs.js file that contains our Tabs component code, as follows:

import {Component, ViewEncapsulation, ContentChildren} from '@angular/core';
import template from './tabs.html!text';
// We rely on the Tab component
import {Tab} from './tab/tab';

@Component({
  selector: 'ngc-tabs',
  host: {
    class: 'tabs'
  },
  template,
  encapsulation: ViewEncapsulation.None,
  directives: [Tab]
})
export class Tabs {
  // This queries the content inside <ng-content> and stores a
  // query list that will be updated if the content changes
  @ContentChildren(Tab) tabs;

  // The ngAfterContentInit lifecycle hook will be called once the
  // content inside <ng-content> was initialized
  ngAfterContentInit() {
    this.activateTab(this.tabs.first);
  }

  activateTab(tab) {
    // To activate a tab we first convert the live list to an
    // array and deactivate all tabs before we set the new
    // tab active
    this.tabs.toArray().forEach((t) => t.active = false);
    tab.active = true;
  }
}

Let's observe what's happening here. We used a new @ContentChildren annotation in order to query our inserted content for directives that match the type that we pass to the decorator. The tabs property will contain an object of the QueryList type, which is an observable list type that will be updated if the content projection changes. You need to remember that content projection is a dynamic process, as the content in the host element can actually change, for example, using the NgFor or NgIf directives. We use the AfterContentInit lifecycle hook, which we've already briefly discussed in the Custom UI elements section of Chapter 2, Ready, Set, Go! This lifecycle hook is called after Angular has completed content projection on the component.
Only then do we have the guarantee that our QueryList object will be initialized, and we can start working with child directives that were projected as content. The activateTab function will set the Tab component's active flag, deactivating any previously active tab. As the observable QueryList object is not a native array, we first need to convert it using toArray() before we start working with it. Let's now look at the template of the Tabs component that we created in a file called tabs.html in the tabs directory:

<ul class="tabs__tab-list">
  <li *ngFor="let tab of tabs">
    <button class="tabs__tab-button"
            [class.tabs__tab-button--active]="tab.active"
            (click)="activateTab(tab)">{{tab.name}}</button>
  </li>
</ul>
<div class="tabs__l-container">
  <ng-content select="ngc-tab"></ng-content>
</div>

The structure of our Tabs component is as follows. First, we render all the tab buttons in an unordered list. After the unordered list, we have a tabs container that will contain all our Tab components that are inserted using content projection and the <ng-content> element. Note that the selector that we use here is actually the selector of our Tab component. Tabs that are not active will not be visible, because we control this using CSS on our Tab component's class attribute binding (refer to the Tab component code). This is all that we need to create a flexible and well-encapsulated tabbed interface component. Now, we can go ahead and use this component in our Project component to provide a segregation of our project detail information. We will create three tabs for now, where the first one will embed our task list. We will address the content of the other two tabs in a later chapter. Let's modify our Project component template in the project.html file as a first step.
Instead of including our TaskList component directly, we now use the Tabs and Tab components to nest the task list into our tabbed interface:

<ngc-tabs>
  <ngc-tab name="Tasks">
    <ngc-task-list [tasks]="tasks"
                   (tasksUpdated)="updateTasks($event)">
    </ngc-task-list>
  </ngc-tab>
  <ngc-tab name="Comments"></ngc-tab>
  <ngc-tab name="Activities"></ngc-tab>
</ngc-tabs>

You should have noticed by now that we are actually nesting two components within this template code using content projection, as follows:

First, the Tabs component uses content projection to select all the <ngc-tab> elements. As these elements happen to be components too (our Tab component will attach to elements with this name), they will be recognized as such within the Tabs component once they are inserted.

In the <ngc-tab> element, we then nest our TaskList component. If we go back to our Tab component template, which will be attached to elements with the name ngc-tab, we have a generic projection point that inserts any content present in the host element. Our task list will effectively be passed through the Tabs component into the Tab component.

The visual efforts timeline

Although the components that we created so far to manage efforts provide a good way to edit and display effort and time durations, we can still improve this with some visual indication. In this section, we will create a visual efforts timeline using SVG.
This timeline should display the following information:

The total estimated duration as a grey background bar
The total effective duration as a green bar that overlays the total estimated duration bar
A yellow bar that shows any overtime (if the effective duration is greater than the estimated duration)

Two figures illustrate the different visual states of our efforts timeline component: the state where the estimated duration is greater than the effective duration, and the state where the effective duration exceeds the estimated duration (the overtime is displayed as a yellow bar).

Let's start fleshing out our component by creating a new EffortsTimeline component class on the lib/efforts/efforts-timeline/efforts-timeline.js path:

…
@Component({
  selector: 'ngc-efforts-timeline',
  …
})
export class EffortsTimeline {
  @Input() estimated;
  @Input() effective;
  @Input() height;

  ngOnChanges(changes) {
    this.done = 0;
    this.overtime = 0;
    if (!this.estimated && this.effective ||
        (this.estimated && this.estimated === this.effective)) {
      // If there's only effective time or if the estimated time
      // is equal to the effective time we are 100% done
      this.done = 100;
    } else if (this.estimated < this.effective) {
      // If we have more effective time than estimated we need to
      // calculate overtime and done in percentage
      this.done = this.estimated / this.effective * 100;
      this.overtime = 100 - this.done;
    } else {
      // The regular case where we have less effective time than
      // estimated
      this.done = this.effective / this.estimated * 100;
    }
  }
}

Our component has three input properties:

estimated: This is the estimated time duration in milliseconds
effective: This is the effective time duration in milliseconds
height: This is the desired height of the efforts timeline in pixels

In the OnChanges lifecycle hook, we set two component member fields, which are based on the estimated and effective time:

done: This contains the width, in percent, of the green bar that displays the effective duration without any overtime that exceeds the estimated duration
overtime: This contains the width, in percent, of the yellow bar that displays any overtime, which is any time duration that exceeds the estimated duration

Let's look at the template of the EffortsTimeline component and see how we can now use the done and overtime member fields to draw our timeline. We will create a new lib/efforts/efforts-timeline/efforts-timeline.html file:

<svg width="100%" [attr.height]="height">
  <rect [attr.height]="height"
        x="0" y="0" width="100%"
        class="efforts-timeline__remaining"></rect>
  <rect *ngIf="done" x="0" y="0"
        [attr.width]="done + '%'"
        [attr.height]="height"
        class="efforts-timeline__done"></rect>
  <rect *ngIf="overtime"
        [attr.x]="done + '%'" y="0"
        [attr.width]="overtime + '%'"
        [attr.height]="height"
        class="efforts-timeline__overtime"></rect>
</svg>

Our template is SVG-based, and it contains three rectangles, one for each of the bars that we want to display. The background bar, which will be visible if there is remaining effort, is always displayed. Above the remaining bar, we conditionally display the done and the overtime bars using the calculated widths from our component class. Now, we can go ahead and include the EffortsTimeline class in our Efforts component. This way, our users will have visual feedback when they edit the estimated or effective duration, and it provides them with a sense of overview.
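Stripped of the Angular plumbing, the done/overtime calculation from the ngOnChanges hook can be sketched as a plain function (the name computeTimeline is ours; a guard against a zero estimate is added so the sketch never divides zero by zero):

```javascript
// Framework-free sketch of the EffortsTimeline percentage calculation.
// estimated and effective are durations in milliseconds.
function computeTimeline(estimated, effective) {
  var done = 0;
  var overtime = 0;
  if ((!estimated && effective) || (estimated && estimated === effective)) {
    // Only effective time, or estimated equals effective: 100% done
    done = 100;
  } else if (estimated < effective) {
    // More effective time than estimated: split into done and overtime
    done = estimated / effective * 100;
    overtime = 100 - done;
  } else if (estimated) {
    // The regular case: less effective time than estimated
    done = effective / estimated * 100;
  }
  return { done: done, overtime: overtime };
}
```

For example, an estimate of 100 with an effective duration of 150 yields roughly two-thirds done and one-third overtime, which is exactly how the green and yellow bars divide the timeline.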
Let's look into the template of the Efforts component to see how we integrate the timeline:

…
<ngc-efforts-timeline height="10"
                      [estimated]="estimated"
                      [effective]="effective">
</ngc-efforts-timeline>

As we have the estimated and effective duration times readily available in our Efforts component, we can simply create a binding to the EffortsTimeline component's input properties. The Efforts component now displays our newly-created efforts timeline component (an overtime of six hours is visualized with the yellow bar).

Summary

In this article, we learned about the architecture of components in Angular. We also learned how to create a tabbed interface component and how to create a visual efforts timeline using SVG.

Resources for Article:

Further resources on this subject:
Angular 2.0 [article]
AngularJS Project [article]
AngularJS [article]