
How-To Tutorials - Server-Side Web Development


Getting started with Django and Django REST frameworks to build a RESTful app

Sugandha Lahoti
29 Mar 2018
12 min read
In this article, we will learn how to install Django and Django REST framework in an isolated environment. We will also look at the Django folders, files, and configurations, and how to create an app with Django. Finally, we will introduce various command-line and GUI tools that are used to interact with RESTful Web Services.

Installing Django and Django REST framework in an isolated environment

First, run the following command to install the Django web framework:

pip install django==1.11.5

The last lines of the output will indicate that the django package has been successfully installed. The process will also install the pytz package, which provides world time zone definitions. Take into account that you may also see a notice to upgrade pip. The next lines show a sample of the four last lines of the output generated by a successful pip installation:

Collecting django
Collecting pytz (from django)
Installing collected packages: pytz, django
Successfully installed django-1.11.5 pytz-2017.2

Now that we have installed the Django web framework, we can install Django REST framework. Django REST framework works on top of Django and provides us with a powerful and flexible toolkit to build RESTful Web Services. We just need to run the following command to install this package:

pip install djangorestframework==3.6.4

The last lines of the output will indicate that the djangorestframework package has been successfully installed, as shown here:

Collecting djangorestframework
Installing collected packages: djangorestframework
Successfully installed djangorestframework-3.6.4

After following the previous steps, we will have Django REST framework 3.6.4 and Django 1.11.5 installed in our virtual environment.

Creating an app with Django

Now, we will create our first app with Django and we will analyze the directory structure that Django creates. First, go to the root folder for the virtual environment, 01.

In Linux or macOS, enter the following command:

cd ~/HillarDjangoREST/01

If you prefer Command Prompt, run the following command in the Windows command line:

cd /d %USERPROFILE%\HillarDjangoREST\01

If you prefer Windows PowerShell, run the following command in Windows PowerShell:

cd $env:USERPROFILE\HillarDjangoREST\01

In Linux or macOS, run the following command to create a new Django project named restful01. The command won't produce any output:

python bin/django-admin.py startproject restful01

In Windows, in either Command Prompt or PowerShell, run the following command to create a new Django project named restful01. The command won't produce any output:

python Scripts\django-admin.py startproject restful01

The previous command creates a restful01 folder with other subfolders and Python files. Now, go to the recently created restful01 folder. Just execute the following command on any platform:

cd restful01

Then, run the following command to create a new Django app named toys within the restful01 Django project. The command won't produce any output:

python manage.py startapp toys

The previous command creates a new restful01/toys subfolder with the following files:

views.py
tests.py
models.py
apps.py
admin.py
__init__.py

In addition, the restful01/toys folder will have a migrations subfolder with an __init__.py Python script.
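At this point, the book shows a diagram of the resulting directory tree. The image isn't reproduced here, but since the layout is the standard one that Django 1.11 generates, the following sketch conveys the same structure, starting at the outer restful01 folder with its two subfolders, toys and restful01:

restful01/
    manage.py
    restful01/
        __init__.py
        settings.py
        urls.py
        wsgi.py
    toys/
        __init__.py
        admin.py
        apps.py
        migrations/
            __init__.py
        models.py
        tests.py
        views.py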
Understanding Django folders, files, and configurations

After we create our first Django project and then a Django app, there are many new folders and files. First, use your favorite editor or IDE to check the Python code in the apps.py file within the restful01/toys folder (restful01\toys in Windows). The following lines show the code for this file:

from django.apps import AppConfig


class ToysConfig(AppConfig):
    name = 'toys'

The code declares the ToysConfig class as a subclass of the django.apps.AppConfig class, which represents a Django application and its configuration. The ToysConfig class just defines the name class attribute and sets its value to 'toys'.

Now, we have to add toys.apps.ToysConfig as one of the installed apps in the restful01/settings.py file that configures settings for the restful01 Django project. We build this string by concatenating the app name, .apps., and the class name, that is, toys + .apps. + ToysConfig. In addition, we have to add the rest_framework app to make it possible for us to use Django REST framework.

The restful01/settings.py file is a Python module with module-level variables that define the configuration of Django for the restful01 project. We will make some changes to this Django settings file. Open the restful01/settings.py file and locate the lines that specify the strings list that declares the installed apps. The following code shows the first lines of the settings.py file. Note that the file has more code:

"""
Django settings for restful01 project.

Generated by 'django-admin startproject' using Django 1.11.5.

For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/

For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""

import os

# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '+uyg(tmn%eo+fpg+fcwmm&x(2x0gml8)=cs@$nijab%)y$a*xe'

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True

ALLOWED_HOSTS = []

# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

Add the following two strings to the INSTALLED_APPS strings list and save the changes to the restful01/settings.py file:

'rest_framework'
'toys.apps.ToysConfig'

The following lines show the new code that declares the INSTALLED_APPS string list with the added lines, along with comments to understand what each added string means. The code file for the sample is included in the hillar_django_restful_01 folder:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # Django REST framework
    'rest_framework',
    # Toys application
    'toys.apps.ToysConfig',
]

This way, we have added Django REST framework and the toys application to our initial Django project named restful01.
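With the two apps registered, it can be worth a quick sanity check that the project still loads before moving on to tooling. This checkpoint is our own suggestion rather than part of the excerpt; it uses Django's built-in system check, run from the restful01 folder with the virtual environment active:

python manage.py check

If everything is wired correctly, the command reports that the system check identified no issues. A typo in one of the newly added strings (for example, a misspelled 'toys.apps.ToysConfig') instead surfaces as an import error, which makes this a cheap way to catch configuration mistakes early.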
Installing tools

Now, we will leave Django for a while and we will install many tools that we will use to interact with the RESTful Web Services that we will develop throughout this book. We will use the following different kinds of tools to compose and send HTTP requests and visualize the responses throughout our book:

Command-line tools
GUI tools
Python code
Web browser
JavaScript code

You can use any other application that allows you to compose and send HTTP requests. There are many apps that run on tablets and smartphones that allow you to accomplish this task. However, we will focus our attention on the most useful tools when building RESTful Web Services with Django.

Installing Curl

We will start by installing command-line tools. One of the key advantages of command-line tools is that you can easily run HTTP requests again after you have built them for the first time, without needing to use the mouse or tap the screen. You can also easily build a script with batch requests and run them. As happens with any command-line tool, it can take more time to perform the first requests compared with GUI tools, but it becomes easier once you have performed many requests, because you can easily reuse the commands you have written in the past to compose new requests.

Curl, also known as cURL, is a very popular open source command-line tool and library that allows us to easily transfer data. We can use the curl command-line tool to easily compose and send HTTP requests and check their responses.

In Linux or macOS, you can open a Terminal and start using curl from the command line. In Windows, you have two options. You can work with curl in Command Prompt, or you can decide to install curl as part of the Cygwin package installation option and execute it from the Cygwin terminal. You can read more about the Cygwin terminal and its installation procedure at http://cygwin.com/install.html. Windows PowerShell includes a curl alias that calls the Invoke-WebRequest command; therefore, if you want to work with curl in Windows PowerShell, it is necessary to remove the curl alias.

If you want to use the curl command within Command Prompt, you just need to download and unzip the latest version of curl from the curl download page: https://curl.haxx.se/download.html. Make sure you download a version that includes SSL and SSH; the Win64 - Generic section of the download page includes the versions that we can run in Command Prompt or Windows PowerShell. After you unzip the .7z or .zip file you have downloaded, add the folder that contains curl.exe to your path. For example, if you unzip the Win64 x86_64 archive, you will find curl.exe in the bin folder. To verify the installation, run the following command; the --version option makes curl display its version and all the libraries, protocols, and features it supports:

curl --version

Installing HTTPie

Now, we will install HTTPie, a command-line HTTP client written in Python that makes it easy to send HTTP requests and uses a syntax that is easier than curl. By default, HTTPie displays colorized output and uses multiple lines to display the response details. In some cases, HTTPie makes it easier to understand the responses than the curl utility.
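To get a feel for the syntax difference, here is the same hypothetical request composed in both tools. The URL points at a Django development server and a /toys/ endpoint that we have not built yet, so these lines are only for comparison at this stage:

curl -iX GET "http://localhost:8000/toys/"
http GET ":8000/toys/"

curl prints the raw response (the -i flag includes the headers), while HTTPie colorizes the output and pretty-prints JSON bodies by default.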
However, one of the great disadvantages of HTTPie as a command-line utility is that it takes more time to load than curl; therefore, if you want to code scripts with many commands, you have to evaluate whether it makes sense to use HTTPie.

We just need to make sure we run the following command in the virtual environment we have just created and activated. This way, we will install HTTPie only for our virtual environment. Run the following command in the terminal, Command Prompt, or Windows PowerShell to install the httpie package:

pip install --upgrade httpie

The last lines of the output will indicate that the httpie package has been successfully installed:

Collecting httpie
Collecting colorama>=0.2.4 (from httpie)
Collecting requests>=2.11.0 (from httpie)
Collecting Pygments>=2.1.3 (from httpie)
Collecting idna<2.7,>=2.5 (from requests>=2.11.0->httpie)
Collecting urllib3<1.23,>=1.21.1 (from requests>=2.11.0->httpie)
Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.11.0->httpie)
Collecting certifi>=2017.4.17 (from requests>=2.11.0->httpie)
Installing collected packages: colorama, idna, urllib3, chardet, certifi, requests, Pygments, httpie
Successfully installed Pygments-2.2.0 certifi-2017.7.27.1 chardet-3.0.4 colorama-0.3.9 httpie-0.9.9 idna-2.6 requests-2.18.4 urllib3-1.22

Now, we will be able to use the http command to easily compose and send HTTP requests to our future RESTful Web Services built with Django. If you execute http without any arguments, HTTPie displays the valid options and indicates that a URL is required.

Installing the Postman REST client

So far, we have installed two terminal-based or command-line tools to compose and send HTTP requests to our Django development server: cURL and HTTPie. Now, we will start installing Graphical User Interface (GUI) tools.

Postman is a very popular API testing suite GUI tool that allows us to easily compose and send HTTP requests, among other features. Postman is available as a standalone app on Linux, macOS, and Windows. You can download the Postman app from the following URL: https://www.getpostman.com.

Installing Stoplight

Stoplight is a very useful GUI tool that focuses on helping architects and developers model complex APIs. If we need to consume our RESTful Web Service in many different programming languages, we will find Stoplight extremely helpful. Stoplight provides an HTTP request maker that allows us to compose and send requests and generate the necessary code to make them in different programming languages, such as JavaScript, Swift, C#, PHP, Node, and Go, among others. Stoplight provides a web version and is also available as a standalone app on Linux, macOS, and Windows. You can download Stoplight from the following URL: http://stoplight.io/.

Installing iCurlHTTP

We can also use apps that compose and send HTTP requests from mobile devices to work with our RESTful Web Services. For example, we can work with the iCurlHTTP app on iOS devices such as the iPad and iPhone: https://itunes.apple.com/us/app/icurlhttp/id611943891. On Android devices, we can work with the HTTP Request app: https://play.google.com/store/apps/details?id=air.http.request&hl=en.
At the time of writing, the mobile apps that allow you to compose and send HTTP requests do not provide all the features you can find in Postman or in the command-line utilities.

We learned how to set up a virtual environment with Django and Django REST framework and created an app with Django. We looked at Django folders, files, and configurations, and installed command-line and GUI tools to interact with RESTful Web Services.

This article is an excerpt from the book Django RESTful Web Services, written by Gaston C. Hillar. The book serves as an easy guide to building Python RESTful APIs and web services with Django. The code bundle for the article is hosted on GitHub.


Understanding the Basics of Gulp

Packt
19 Jun 2017
15 min read
In this article, written by Travis Maynard, author of the book Getting Started with Gulp - Second Edition, we will take a look at the basics of gulp and how it works. Understanding some of the basic principles and philosophies behind the tool and its plugin system will assist you as you begin writing your own gulpfiles. We'll start by taking a look at the engine behind gulp and then follow up by breaking down the inner workings of gulp itself. By the end of this article, you will be prepared to begin writing your own gulpfile.

(For more resources related to this topic, see here.)

Installing node.js and npm

As you learned in the introduction, node.js and npm are the engines that work behind the scenes to allow us to operate gulp and keep track of any plugins we decide to use.

Downloading and installing node.js

For Mac and Windows, the installation is quite simple. All you need to do is navigate over to http://nodejs.org and click on the big green install button. Once the installer has finished downloading, run the application and it will install both node.js and npm.

For Linux, there are a couple more steps, but don't worry; with your newly acquired command-line skills, it should be relatively simple. To install node.js and npm on Linux, you'll need to run the following three commands in Terminal:

sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

The details of these commands are outside the scope of this book, but just for reference, they add a repository to the list of available packages, update the total list of packages, and then install the application from the repository we added.

Verify the installation

To confirm that our installation was successful, try the following command in your command line:

node -v

If node.js is successfully installed, node -v will output a version number on the next line of your command line. Now, let's do the same with npm:

npm -v

Like before, if your installation was successful, npm -v should output the version number of npm on the next line. The versions displayed reflect the latest Long Term Support (LTS) release available as of this writing; they may differ from the versions you have installed, depending on when you're reading this. It's always suggested that you use the latest LTS release when possible.

The -v flag is a common flag used by most command-line applications to quickly display their version number. This is very useful for debugging version issues while using command-line applications.

Creating a package.json file

Having npm in our workflow will make installing packages incredibly easy; however, we should look ahead and establish a way to keep track of all the packages (or dependencies) that we use in our projects. Keeping track of dependencies is very important to keep your workflow consistent across development environments. Node.js uses a file named package.json to store information about your project, and npm uses this same file to manage all of the package dependencies your project requires to run properly. In any project using gulp, it is always a great practice to create this file ahead of time so that you can easily populate your dependency list as you are installing packages or plugins.
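For orientation before we generate one, the file is plain JSON; a minimal package.json might look like the following, where the name, version, and description values are illustrative placeholders of our own rather than output from the book:

{
  "name": "gulp-book",
  "version": "1.0.0",
  "description": "Sample project for learning gulp",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}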
To create the package.json file, we will need to run npm's built-in init action using the following command:

npm init

Your command line will prompt you several times, asking for basic information about the project, such as the project name, author, and the version number. You can accept the defaults for these fields by simply pressing the Enter key at each prompt. Most of this information is used primarily on the npm website if a developer decides to publish a node.js package. For our purposes, we will just use it to initialize the file so that we can properly add our dependencies as we move forward.

Installing gulp

With npm installed and our package.json file created, we are now ready to begin installing node.js packages. The first and most important package we will install is none other than gulp itself.

Locating gulp

Locating and gathering information about node.js packages is very simple, thanks to the npm registry. The npm registry is a companion website that keeps track of all the published node.js modules, including gulp and gulp plugins. You can find this registry at http://npmjs.org. Take a moment to visit the npm registry and do a quick search for gulp. The listing page for each node.js module will give you detailed information on each project, including the author, version number, and dependencies. Additionally, it also features a small snippet of command-line code that you can use to install the package, along with readme information that outlines basic usage of the package and other useful information.

Installing gulp locally

Before we install gulp, make sure you are in your project's root directory, gulp-book, using the cd and ls commands you learned earlier. If you ever need to brush up on any of the standard commands, feel free to take a moment to step back and review as we progress through the book. To install packages with npm, we will follow a similar pattern to the ones we've used previously. Since we will be covering both versions 3.x and 4.x in this book, we'll demonstrate installing both.

For installing gulp 3.x, you can use the following:

npm install --save-dev gulp

For installing gulp 4.x, you can use the following:

npm install --save-dev gulpjs/gulp#4.0

This command is quite different from the 3.x command because it installs the latest development release directly from GitHub. Since the 4.x version is still being actively developed, this is the only way to install it at the time of writing this book. Once released, you will be able to run the previous command without installing from GitHub.

To break this down, let's examine each piece of this command to better understand how npm works:

npm: This is the application we are running.
install: This is the action that we want the program to run. In this case, we are instructing npm to install something in our local folder.
--save-dev: This is a flag that tells npm to add this module to the devDependencies list in our package.json file.
gulp: This is the package we would like to install.

Additionally, npm has a --save flag that saves the module to the list of dependencies instead of devDependencies. These dependency lists are used to separate the modules that a project depends on to run and the modules a project depends on when in development.
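After the install completes, package.json gains a devDependencies entry along the following lines; the exact version range is just an example here, as it depends on the release npm resolves when you run the command:

{
  "devDependencies": {
    "gulp": "^3.9.1"
  }
}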
Since we are using gulp to assist us in development, we will always use the --save-dev flag throughout the book. So, this command will use npm to contact the npm registry, and it will install gulp to our local gulp-book directory. After using this command, you will note that a new folder named node_modules has been created. It is where node.js and npm store all of the installed packages and dependencies of your project.

Installing gulp-cli globally

For many of the packages that we install, this will be all that is needed. With gulp, we must install a companion module, gulp-cli, globally so that we can use the gulp command from anywhere in our filesystem. To install gulp-cli globally, use the following command:

npm install -g gulp-cli

In this command, not much has changed compared to the original command where we installed the gulp package locally. We've only added a -g flag to the command, which instructs npm to install the package globally. On Windows, your console window should be opened under an administrator account in order to install an npm package globally.

At first, this can be a little confusing, and for many packages it won't apply. Similar build systems actually separate these usages into two different packages that must be installed separately; one that is installed globally for command-line use and another that is installed locally in your project. Gulp was created so that both of these usages could be combined into a single package and, based on where you install it, it can operate in different ways.

Anatomy of a gulpfile

Before we can begin writing tasks, we should take a look at the anatomy and structure of a gulpfile. Examining the code of a gulpfile will allow us to better understand what is happening as we run our tasks. Gulp started with four main methods: .task(), .src(), .watch(), and .dest(). The release of version 4.x introduced additional methods such as .series() and .parallel(). In addition to the gulp API methods, each task will also make use of the node.js .pipe() method. This small list of methods is all that is needed to understand how to begin writing basic tasks. They each represent a specific purpose and will act as the building blocks of our gulpfile.

The task() method

The .task() method is the basic wrapper for which we create our tasks. Its syntax is .task(string, function). It takes two arguments: a string value representing the name of the task and a function that will contain the code you wish to execute upon running that task.

The src() method

The .src() method is our input, or how we gain access to the source files that we plan on modifying. It accepts either a single glob string or an array of glob strings as an argument. Globs are patterns that we can use to make our paths more dynamic. When using globs, we can match an entire set of files with a single string using wildcard characters, as opposed to listing them all separately. The syntax for this method is .src(string || array).

The watch() method

The .watch() method is used to specifically look for changes in our files. This will allow us to keep gulp running as we code so that we don't need to rerun gulp any time we need to process our tasks. The syntax differs between the 3.x and 4.x versions. For version 3.x, the syntax is .watch(string || array, array), with the first argument being our paths/globs to watch and the second argument being the array of task names that need to be run when those files change.
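As a quick illustration of the 3.x form, a watcher that reruns a scripts task whenever any JavaScript file changes would look like this (the glob and task name are placeholders of our own):

// gulp 3.x: second argument is an array of task names
gulp.watch('js/*.js', ['scripts']);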
For version 4.x, the syntax has changed a bit to allow for two new methods that provide more explicit control of the order in which tasks are executed. When using 4.x, instead of passing in an array as the second argument, we use either the .series() or .parallel() method, like so: .watch(string || array, gulp.series() || gulp.parallel()).

The dest() method

The .dest() method is used to set the output destination of your processed file. Most often, this will be used to output our data into a build or dist folder that will be either shared as a library or accessed by your application. The syntax for this method is .dest(string).

The pipe() method

The .pipe() method will allow us to pipe together smaller, single-purpose plugins or applications into a pipechain. This is what gives us full control of the order in which we would need to process our files. The syntax for this method is .pipe(function).

The parallel() and series() methods

The parallel and series methods were added in version 4.x as a way to easily control whether your tasks are run together all at once or in a sequence one after the other. This is important if one of your tasks requires that other tasks complete before it can be run successfully. When using these methods, the arguments will be the string names of your tasks, separated by commas. The syntax for these methods is .series(tasks) and .parallel(tasks).

Understanding these methods will take you far, as these are the core elements of building your tasks. Next, we will need to put these methods together and explain how they all interact with one another to create a gulp task.

Including modules/plugins

When writing a gulpfile, you will always start by including the modules or plugins you are going to use in your tasks. These can be both gulp plugins and node.js modules, based on what your needs are. Gulp plugins are small node.js applications built for use inside of gulp to provide a single-purpose action that can be chained together to create complex operations for your data. Node.js modules serve a broader purpose and can be used with gulp or independently. Next, we can open our gulpfile.js file and add the following code:

// Load Node Modules/Plugins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

In this code, we have included gulp and two gulp plugins: gulp-concat and gulp-uglify. As you can now see, including a plugin in your gulpfile is quite easy. After we install each module or plugin using npm, you simply use node.js' require() function and pass it the name of the module. You then assign it to a new variable so that you can use it throughout your gulpfile. This is node.js' way of handling modularity, and because a gulpfile is essentially a small node.js application, it adopts this practice as well.

Writing a task

All tasks in gulp share a common structure. Having reviewed the methods at the beginning of this section, you will already be familiar with most of it. Some tasks might end up being larger than others, but they still follow the same pattern. To better illustrate how they work, let's examine a bare skeleton of a task. This skeleton is the basic bone structure of each task we will be creating. Studying this structure will make it incredibly simple to understand how the parts of gulp work together to create a task.
An example of a sample task is as follows:

gulp.task(name, function() {
  return gulp.src(path)
    .pipe(plugin)
    .pipe(plugin)
    .pipe(gulp.dest(path));
});

In the first line, we use the new gulp variable that we created a moment ago and access the .task() method. This creates a new task in our gulpfile. As you learned earlier, the task method accepts two arguments: a task name as a string and a callback function that will contain the actions we wish to run when this task is executed.

Inside the callback function, we reference the gulp variable once more and then use the .src() method to provide the input to our task. As you learned earlier, the source method accepts a path or an array of paths to the files that we wish to process.

Next, we have a series of three .pipe() methods. In each of these pipe methods, we specify which plugin we would like to use. This grouping of pipes is what we call our pipechain. The data that we have provided gulp with in our source method will flow through our pipechain to be modified by each piped plugin that it passes through. The order of the pipe methods is entirely up to you. This gives you a great deal of control over how and when your data is modified.

You may have noticed that the final pipe is a bit different. At the end of our pipechain, we have to tell gulp to move our modified file somewhere. This is where the .dest() method comes into play. As we mentioned earlier, the destination method accepts a path that sets the destination of the processed file as it reaches the end of our pipechain. If .src() is our input, then .dest() is our output.

Reflection

To wrap up, take a moment to look at a finished gulpfile and reflect on the information that we just covered. This is the completed gulpfile that we will be creating from scratch, so don't worry if you still feel lost. This is just an opportunity to recognize the patterns and syntaxes that we have been studying so far. We will begin creating this file step by step:

// Load Node Modules/Plugins
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

// Process Styles
gulp.task('styles', function() {
    return gulp.src('css/*.css')
        .pipe(concat('all.css'))
        .pipe(gulp.dest('dist/'));
});

// Process Scripts
gulp.task('scripts', function() {
    return gulp.src('js/*.js')
        .pipe(concat('all.js'))
        .pipe(uglify())
        .pipe(gulp.dest('dist/'));
});

// Watch Files For Changes
gulp.task('watch', function() {
    gulp.watch('css/*.css', gulp.series('styles'));
    gulp.watch('js/*.js', gulp.series('scripts'));
});

// Default Task
gulp.task('default', gulp.parallel('styles', 'scripts', 'watch'));

Note that, because this gulpfile relies on the 4.x methods gulp.series() and gulp.parallel(), the watch calls also wrap their task names in gulp.series(); in gulp 4.x, gulp.watch() expects a function rather than a bare task name.

Summary

In this article, you installed node.js, learned the basics of how to use npm, and understood how and why to install gulp both locally and globally. We also covered some of the core differences between the 3.x and 4.x versions of gulp and how they will affect your gulpfiles as we move forward. To wrap up the article, we took a small glimpse into the anatomy of a gulpfile to prepare us for writing our own gulpfiles from scratch.

Resources for Article:

Further resources on this subject:

Performing Task with Gulp [article]
Making a Web Server in Node.js [article]
Developing Node.js Web Applications [article]


Front Page Customization in Moodle

Packt
23 Oct 2009
11 min read
Look and Feel: An Overview

Moodle can be fully customized in terms of layout and branding. It has to be stressed that certain aspects of changing the look and feel require some design skills. While you as an administrator will be able to make most of the relevant adjustments, it might be necessary to get a professional designer involved, especially when it comes to styling.

The two relevant components for customization are the Moodle front page and Moodle themes, though this article will focus only on the Moodle front page. Before going into further details, let's try to understand which part is responsible for which element of the look and feel of your site. Consider the front page of a Moodle site as seen after you are logged in as an administrator: it is not obvious which parts are driven by the Moodle theme and which by the front page settings. The following list sheds some light on this, showing whether each element is controlled by the front page settings, by the theme, or by something else:

Logos: theme
Logged-in information (location and font): theme
Language drop down: other
Site Administration block (position): settings
Available Courses block (position): settings
Available Courses block (content): other
Course categories and Calendar block (position): settings
Course categories and Calendar block (icons, fonts, colors): theme
Footer text: theme
Footer logo: theme
Copyright statement: theme

While this list is by no means complete, it hopefully gives you an idea that the look and feel of your Moodle site is driven by a number of different elements. In short, the settings (mostly front page settings as well as a few related parameters) dictate what content users will see before and after they log on. The theme is responsible for the design scheme or branding, that is, the header and footer as well as colors, fonts, icons, and so on used throughout the site.

Now let's move towards the core part of this article.

Customizing Your Front Page

The appearance of Moodle's front page changes after a user has logged in. The content and layout of the page before and after login can be customized to represent the identity of your organization. Before a user has logged in, the same site can look quite different: in the book's example, a Login block is shown on the left and the Course categories are displayed in the center, as opposed to the list of available courses.

Front Page Settings

To customize the front page, you either have to be logged in as a Moodle administrator or have front-page-related permissions in the Front Page context. From the Site Administration block, select Front Page | Front Page Settings. The screen showing all available parameters will be loaded, displaying your current settings, which are changeable:

Full site name: This is the name that appears in the browser's title bar. It is usually the full name of your organization, or the name of the dedicated course or qualification the site is used for.
Short name for site: This is the name that appears as the first item in the breadcrumb trail.
Front Page Description: This description of the site will be displayed on the front page via the Site Description block. It can, therefore, only be displayed in the left or right column, never in the center of the front page. The description text is also picked up by the Google search engine spider, if allowed.
Front Page: Moodle can display up to four elements in the center column of the front page when not logged in:
List of courses
List of categories
News items
Combo list (categories and courses)

The order of the elements is the same as the one chosen in the pull-down menus.
Front page items when logged in: Same as "Front Page", but used when logged in.
Include a topic section: If ticked, an additional topic section (just like the topic blocks in the center column of a topics-format course) appears on top of the front page's center column. It can contain any mix of resources or activities available in Moodle. It is very often used to provide information about the site.
News items to show: Number of news items that are displayed.
Courses per page: This is a threshold setting that is used when displaying courses within categories. If there are more courses in a category than specified, page navigation will be displayed at the top of the page. Also, when a combo list is used, course names are only displayed if the number is less than the specified threshold. For all other categories, only the number of courses is shown after the category name.
Allow visible courses within hidden categories: By default, courses in hidden categories are not shown unless this setting is applied.
Default frontpage role: If logged-in users should be allowed to participate in front page activities, a default front page role should be set. The default is None.

Arranging Front Page Blocks

To configure the left and right column areas with blocks, you have to turn on editing (using the Blocks editing on button). The menu includes blocks that are not available in courses, such as Course/Site description and Main menu. Blocks are added to the front page in exactly the same way as in courses. To change their position, use the standard arrows. The Main Menu block allows you to add any installed Moodle resource or activity inside the block. For example, using labels and links to (internal or external) websites, you are able to create a menu-like structure on your front page.

If the Include a topic section parameter has been selected in the Front Page settings, you have to edit the part and add any installed Moodle activity or resource. This topic section is usually used by organizations to add a welcome message to visitors, often accompanied by a picture or other multimedia content.

Login From a Different Website

The purpose of the Login block is for users to authenticate themselves by entering their username and password. It is possible to log into Moodle from a different website, maybe your organization's homepage, effectively avoiding the Login block. To implement this, you will have to add some HTML code on that page, as shown:

<form class="loginform" name="login" method="post" action="http://www.mysite.com/login/index.php">
  <p>Username :
    <input size="10" name="username" />
  </p>
  <p>Password :
    <input size="10" name="password" type="password" />
  </p>
  <p>
    <input name="Submit" value="Login" type="submit" />
  </p>
</form>

The form will pass the username and password to your Moodle system. You will have to replace www.mysite.com with your URL. This address has to be entered in the Alternate Login URL field at Users | Authentication | Manage authentication in the Site Administration block.

Other Front Page Items

The Moodle front page is treated as a standalone component in Moodle, and therefore has a top-level menu with a number of features that can all be accessed via the Front Page item in the Site Administration menu.
Having now looked in detail at the front page settings, let's turn to examining the other available options.

Front Page Roles

The front page has its own context in which roles can be assigned to users. This allows a separate user to develop and maintain the front page without having access to any other elements in Moodle. Since the front page is treated as a course, a Teacher role is usually sufficient for this.

Front Page Backup and Restore

The front page has its own backup and restore facilities to back up and restore all elements of the front page, including any content. The mechanism for performing backup and restore is the same as for course backups. Front page backups are stored in the backupdata folder in the Site Files area and can be accessed by anybody who is aware of the URL. It is therefore best to move the created ZIP files to a more secure location.

Front Page Questions

Since the Moodle front page is treated in the same way as a course, it also has its own question bank, which is used to store any questions used in front-page quizzes. For more information on quizzes and the question bank, go to the MoodleDocs at http://docs.moodle.org/en/Quiz.

Site Files

The files areas of all courses are separate from each other; that is, files in Moodle belong to a course and can only be accessed by users who have been granted appropriate rights. The difference between Site files and the files area of any other course is that files in Site files can be accessed without logging in. Files placed in this location are meant for the front page, but can be accessed from anywhere in the system. In fact, if the location is known, files can even be accessed from outside Moodle. Make sure that in the Site files area, you only place files that are acceptable to be seen by users who are not authenticated on your Moodle system.

Typical files to be placed in this area are any images you want to show on the front page (such as the logo of your organization) or any document that you want to be accessed (for example, the curriculum). However, it is also used for other files that are required to be accessible without access to a course, such as the Site Policy Agreement, which has to be accepted before starting Moodle. To access these publicly available Site files elsewhere (for example, as a resource within other courses), you have to copy the link location, which has the format http://mysite.com/file.php/1/file.doc.

Allow Personalization via My Moodle

By default, the same front page is displayed for all users on your Moodle system. To relax this restriction and to allow users to personalize their own front page, you have to activate the My Moodle feature via the Force users to use My Moodle setting in Appearance | My Moodle in the Site Administration block. Once enabled, Moodle creates a /my directory for each user (except administrators) at their first login, which is displayed instead of the main Moodle front page. It is a very flexible feature that is similar to a customizable dashboard, but requires some more disk space on your server. Once logged in, users will have the ability to edit their page by adding blocks to their My Moodle area. The center of the page will be populated by the main front page, for instance displaying a list of courses, which users cannot modify.

Making Blocks Sticky

There might be some blocks that you wish to "stick", that is, display on each My Moodle page, making them effectively compulsory blocks.
For example, you might want to pin the Calendar block to the top right corner of each user's My Moodle page. To do this, go to Modules | Blocks | Sticky blocks in the Site Administration block and select My Moodle from the pull-down menu. You can now add any item from the pull-down menu in the Blocks block. If the block is single instance (that is, only one occurrence is allowed per page), the block will not be available for the user to choose from. If the user has already selected a particular block, a duplicate will appear on their site, which can be edited and deleted.

To prevent users from editing their My Moodle pages, change the moodle/my:manageblocks capability in the Authenticated user role from Allow to Not set.

The sticky block feature is also available for course pages. A course creator has the ability to add and position blocks inside a course unless they have been made sticky. Select the Course page item from the same menu to configure the sticky blocks for courses.

Summary

After providing a general overview of look and feel elements in Moodle, the article covered front page customization. As mentioned earlier, the front page in Moodle is a course. This has advantages (you can do everything you can do in a course and a little bit more), but it also has certain limitations (you can only do what you can do in a course and might feel limited by this). However, some organizations are now using the Moodle front page as their homepage.


Data bindings with Knockout.js

Vijin Boricha
23 Apr 2018
7 min read
Today, we will learn about three data binding abilities of Knockout.js. Data bindings are attributes added by the framework for the purpose of data access between elements and the view scope. While observable arrays are efficient at accessing a list of objects, with a number of operations on top of displaying the list using the foreach function, Knockout.js provides three additional data binding abilities:

Control-flow bindings
Appearance bindings
Interactive bindings

Let us review these data bindings in detail in the following sections.

Control-flow bindings

As the name suggests, control-flow bindings help us access the data elements based on a certain condition. The if, ifnot, and with bindings are the control-flow bindings available in Knockout.js. In the following example, we will be using the if and with control-flow bindings. We have added a new attribute to the Employee object called age; we display the age value in green only if it is greater than 20. Similarly, we have added another markedEmployee. With the with control-flow binding, we can limit the scope of access to that specific employee object in the following paragraph. Add the following code snippet to index.html and run the program to see the if and with control-flow bindings working:

<!DOCTYPE html>
<html>
<head>
  <title>Knockout JS</title>
</head>
<body>
  <h1>Welcome to Knockout JS programming</h1>
  <table border="1">
    <tr>
      <th colspan="2" style="padding:10px;">
        <b>Employee Data - Organization :
          <span style="color:red" data-bind='text: organizationName'></span>
        </b>
      </th>
    </tr>
    <tr>
      <td style="padding:10px;">Employee First Name:</td>
      <td style="padding:10px;">
        <span data-bind='text: empFirstName'></span>
      </td>
    </tr>
    <tr>
      <td style="padding:10px;">Employee Last Name:</td>
      <td style="padding:10px;">
        <span data-bind='text: empLastName'></span>
      </td>
    </tr>
  </table>
  <p>Organization Full Name :
    <span style="color:red" data-bind='text: orgFullName'></span>
  </p>
  <!-- Observable Arrays -->
  <h2>Observable Array Example : </h2>
  <table border="1">
    <thead>
      <tr>
        <th style="padding:10px;">First Name</th>
        <th style="padding:10px;">Last Name</th>
        <th style="padding:10px;">Age</th>
      </tr>
    </thead>
    <tbody data-bind='foreach: organization'>
      <tr>
        <td style="padding:10px;" data-bind='text: firstName'></td>
        <td style="padding:10px;" data-bind='text: lastName'></td>
        <td data-bind="if: age() > 20" style="color: green;padding:10px;">
          <span data-bind='text: age'></span>
        </td>
      </tr>
    </tbody>
  </table>
  <!-- with control flow bindings -->
  <p data-bind='with: markedEmployee'>
    Employee <strong data-bind="text: firstName() + ', ' + lastName()"></strong>
    is marked with the age <strong data-bind='text: age'></strong>
  </p>
  <h2>Add New Employee to Observable Array</h2>
  First Name : <input data-bind="value: newFirstName" />
  Last Name : <input data-bind="value: newLastName" />
  Age : <input data-bind="value: newEmpAge" />
  <button data-bind='click: addEmployee'>Add Employee</button>

  <!-- JavaScript resources -->
  <script type='text/javascript' src='js/knockout-3.4.2.js'></script>
  <script type='text/javascript'>
    function Employee(firstName, lastName, age) {
      this.firstName = ko.observable(firstName);
      this.lastName = ko.observable(lastName);
      this.age = ko.observable(age);
    }

    var employeeViewModel = {
      empFirstName: "Tony",
      empLastName: "Henry",
      // Observable
      organizationName: ko.observable("Sun"),
      newFirstName: ko.observable(""),
      newLastName: ko.observable(""),
      newEmpAge: ko.observable(""),
      // With control flow object
      markedEmployee: ko.observable(new Employee("Garry", "Parks", "65")),
      // Observable Arrays
      organization: ko.observableArray([
        new Employee("John", "Kennedy", "24"),
        new Employee("Peter", "Hennes", "18"),
        new Employee("Richmond", "Smith", "54")
      ])
    };

    // The click binding resolves addEmployee against the view model,
    // so the handler is defined on the view model object itself
    employeeViewModel.addEmployee = function() {
      employeeViewModel.organization.push(new Employee(
        employeeViewModel.newFirstName(),
        employeeViewModel.newLastName(),
        employeeViewModel.newEmpAge()));
    };

    // Computed Observable
    employeeViewModel.orgFullName = ko.computed(function() {
      return employeeViewModel.organizationName() + " Limited";
    });

    ko.applyBindings(employeeViewModel);
    employeeViewModel.organizationName("Oracle");
  </script>
</body>
</html>

Run the preceding program to see the if control-flow acting on the Age field, and the with control-flow showing a marked employee record with age 65.

Appearance bindings

Appearance bindings deal with displaying data from binding elements on view components in formats such as text and HTML, and with applying styles, with the help of a set of six bindings, as follows:

Text: <value> - Sets the text value of an element. Example:

<td data-bind='text: name'></td>

HTML: <value> - Sets the HTML value of an element. Example:

// JavaScript:
function Employee(firstname, lastname, age) {
  ...
  this.formattedName = ko.computed(function() {
    return "<strong>" + this.firstname() + "</strong>";
  }, this);
}

<!-- HTML: -->
<span data-bind='html: markedEmployee().formattedName'></span>

Visible: <condition> - Shows or hides an element based on the condition. Example:

<td data-bind='visible: age() > 20' style='color: green'>
  <span data-bind='text: age'></span>
</td>

CSS: <object> - Associates an element with a CSS class. Example:

/* CSS: */
.strongEmployee { font-weight: bold; }

<!-- HTML: -->
<span data-bind='text: formattedName, css: { strongEmployee: true }'></span>

Style: <object> - Associates an inline style with the element. Example:

<span data-bind='text: age, style: { color: age() > 20 ? "green" : "red" }'></span>

Attr: <object> - Defines an attribute for the element. Example:

<p><a data-bind='attr: { href: featuredEmployee().populatelink }'>View Employee</a></p>

Interactive bindings

Interactive bindings help the user interact with the form elements, associating them with corresponding ViewModel methods or events to be triggered in the pages. Knockout.js supports the following interactive bindings:

Click: <method> - An element click invokes a ViewModel method. Example:

<button data-bind='click: addEmployee'>Submit</button>

Value: <property> - Associates a form element's value with a ViewModel attribute. Example:

<td>Age: <input data-bind='value: age' /></td>

Event: <object> - Invokes a method on a user-initiated event. Example:

<p data-bind='event: { mouseover: showEmployee, mouseout: hideEmployee }'>
  Age: <input data-bind='value: Age' />
</p>

Submit: <method> - Invokes a method on a form submit event. Example:

<form data-bind="submit: addEmployee">
  <!-- Employee form fields -->
  <button type="submit">Submit</button>
</form>

Enable: <property> - Conditionally enables form elements. Example: the last name field is enabled only after filling in the first name field.

Disable: <property> - Conditionally disables form elements. Example: the last name field is disabled after adding the first name:

<p>Last Name: <input data-bind='value: lastName, disable: firstName' /></p>

Checked: <property> - Associates a checkbox or radio element with a ViewModel attribute. Example:

<p>Gender: <input data-bind='checked: gender' type='checkbox' /></p>

Options: <array> - Defines a ViewModel array for the <select> element.
Example:

// JavaScript:
this.designations = ko.observableArray(['manager', 'administrator']);

<!-- HTML: -->
Designation: <select data-bind='options: designations'></select>

selectedOptions: <array> - Defines the active/selected elements of the <select> element. Example:

Designation: <select data-bind='options: designations, optionsText: "Select", selectedOptions: defaultDesignation'></select>

hasfocus: <property> - Associates the focus attribute with the element. Example:

First Name: <input data-bind='value: firstName, hasfocus: firstNameHasFocus' />

We learned about the data binding abilities of Knockout.js. You can learn more about external data access and hybrid mobile application development from the book Oracle JET for Developers.

Read More

Text and appearance bindings and form field bindings
Getting to know KnockoutJS Templates


Building A Movie API with Express

Packt
18 Feb 2016
22 min read
We will build a movie API that allows you to add actor and movie information to a database and connect actors with movies, and vice versa. This will give you a hands-on feel for what Express.js offers. We will cover the following topics in this article:

Folder structure and organization
Responding to CRUD operations
Object modeling with Mongoose
Generating unique IDs
Testing

(For more resources related to this topic, see here.)

Folder structure and organization

Folder structure is a very controversial topic. Though there are many clean ways to structure your project, we will use the following layout for the remainder of our article:

article
+-- app.js
+-- package.json
+-- node_modules
¦   +-- npm package folders
+-- src
¦   +-- lib
¦   +-- models
¦   +-- routes
+-- test

Let's take a look at this in detail:

app.js: It is conventional to have the main app.js file in the root directory. The app.js file is the entry point of our application and will be used to launch the server.
package.json: As with any Node.js app, we have package.json in the root folder, specifying our application name and version as well as all of our npm dependencies.
node_modules: The node_modules folder and its content are generated via npm installation and should usually be ignored in your version control of choice, because it depends on the platform the app runs on. Having said that, according to the npm FAQ, it is probably better to commit the node_modules folder as well: "Check node_modules into git for things you deploy, such as websites and apps. Do not check node_modules into git for libraries and modules intended to be reused." Refer to the following article to read more about the rationale behind this: http://www.futurealoof.com/posts/nodemodules-in-git.html.
src: The src folder contains all the logic of the application.
lib: Within the src folder, we have the lib folder, which contains the core of the application. This includes the middleware, routes, and creating the database connection.
models: The models folder contains our Mongoose models, which define the structure and logic of the models we want to manipulate and save.
routes: The routes folder contains the code for all the endpoints the API is able to serve.
test: The test folder will contain our functional tests using Mocha, as well as two other node modules, should and supertest, to make it easier to aim for 100 percent coverage.

Responding to CRUD operations

The term CRUD refers to the four basic operations one can perform on data: create, read, update, and delete. Express gives us an easy way to handle those operations by supporting the basic methods GET, POST, PUT, and DELETE:

GET: This method is used to retrieve existing data from the database. It can be used to read single or multiple rows (for SQL) or documents (for MongoDB) from the database.
POST: This method is used to write new data into the database, and it is common to include a JSON payload that fits the data model.
PUT: This method is used to update existing data in the database, and a JSON payload that fits the data model is often included for this method as well.
DELETE: This method is used to remove an existing row or document from the database.

Express 4 has changed dramatically from version 3. A lot of the core modules have been removed in order to make it even more lightweight and less dependent. Therefore, we have to explicitly require modules when needed. One helpful module is body-parser. It allows us to get a nicely formatted body when a POST or PUT HTTP request is received.
We have to add this middleware before our business logic in order to use its result later. We write the following in src/lib/parser.js:

var bodyParser = require('body-parser');

module.exports = function(app) {
  app.use(bodyParser.json());
  app.use(bodyParser.urlencoded({ extended: false }));
};

The preceding code is then used in src/lib/app.js as follows:

var express = require('express');
var app = express();
require('./parser')(app);
module.exports = app;

The following example allows you to respond to a GET request on http://host/path. Once a request hits our API, Express will run it through the necessary middleware as well as the following function:

app.get('/path/:id', function(req, res, next) {
  res.status(200).json({ hello: 'world' });
});

The first parameter is the path we want to handle with a GET function. The path can contain parameters prefixed with :. Those path parameters will then be parsed in the request object. The second parameter is the callback that will be executed when the server receives the request. This function gets populated with three parameters: req, res, and next.

The req parameter represents the HTTP request object that has been customized by Express and the middleware we added in our application. Using the path http://host/path/:id, suppose a GET request is sent to http://host/path/1?a=1&b=2. The req object would be the following:

{
  params: { id: 1 },
  query: { a: 1, b: 2 }
}

The params object is a representation of the path parameters. The query object holds the query string, that is, the values stated after ? in the URL. In a POST request, there will often be a body in our request object as well, which includes the data we wish to place in our database.

The res parameter represents the response object for that request. Some methods, such as status() or json(), are provided in order to tell Express how to respond to the client. Finally, the next() function will execute the next middleware defined in our application.

Retrieving an actor with GET

Retrieving a movie or actor from the database consists of submitting a GET request to the route /movies/:id or /actors/:id. We will need a unique ID that refers to a unique movie or actor:

app.get('/actors/:id', function(req, res, next) {
  //Find the actor object with this :id
  //Respond to the client
});

Here, the URL parameter :id will be placed in our request object. Since we call the first variable in our callback function req as before, we can access the URL parameter by calling req.params.id.

Since an actor may be in many movies and a movie may have many actors, we need a nested endpoint to reflect this as well:

app.get('/actors/:id/movies', function(req, res, next) {
  //Find all movies the actor with this :id is in
  //Respond to the client
});

If a bad GET request is submitted, or no actor with the specified ID is found, then the appropriate status code, bad request 400 or not found 404, will be returned. If the actor is found, then success request 200 will be sent back along with the actor information. On a success, the response JSON will look like this:

{
  "_id": "551322589911fefa1f656cc5",
  "id": 1,
  "name": "AxiomZen",
  "birth_year": 2012,
  "__v": 0,
  "movies": []
}

Creating a new actor with POST

In our API, creating a new movie in the database involves submitting a POST request to /movies, or to /actors for a new actor:

app.post('/actors', function(req, res, next) {
  //Save new actor
  //Respond to the client
});

In this example, the user accessing our API sends a POST request with data that would be placed into request.body.
Here, we call the first variable in our callback function req. Thus, to access the body of the request, we call req.body. The request body is sent as a JSON string; if an error occurs, a 400 (bad request) status would be sent back. Otherwise, a 201 (created) status is sent to the response object. On a successful request, the response will look like the following: { "__v": 0, "id": 1, "name": "AxiomZen", "birth_year": 2012, "_id": "551322589911fefa1f656cc5", "movies": [] } Updating an actor with PUT To update a movie or actor entry, we first create a new route and submit a PUT request to /movies/:id or /actors/:id, where the id parameter is unique to an existing movie/actor. There are two steps to an update. We first find the movie or actor by using the unique id and then we update that entry with the body of the request object, as shown in the following code: app.put('/actors/:id', function(req, res) { //Find and update the actor with this :id //Respond to the client }); In the request, we would need request.body to be a JSON object that reflects the actor fields to be updated. The request.params.id would still be a unique identifier that refers to an existing actor in the database as before. On a successful update, the response JSON looks like this: { "_id": "551322589911fefa1f656cc5", "id": 1, "name": "Axiomzen", "birth_year": 99, "__v": 0, "movies": [] } Here, the response will reflect the changes we made to the data. Removing an actor with DELETE Deleting a movie or actor is as simple as submitting a DELETE request to the same routes that were used earlier (specifying the ID). The actor with the appropriate id is found and then deleted: app.delete('/actors/:id', function(req, res) { //Remove the actor with this :id //Respond to the client }); If the actor with the unique id is found, it is then deleted and a response code of 204 is returned. If the actor cannot be found, a response code of 400 is returned. There is no response body for a DELETE request; it will simply return the status code of 204 on a successful deletion. Our final endpoints for this simple app will be as follows: //Actor endpoints app.get('/actors', actors.getAll); app.post('/actors', actors.createOne); app.get('/actors/:id', actors.getOne); app.put('/actors/:id', actors.updateOne); app.delete('/actors/:id', actors.deleteOne); app.post('/actors/:id/movies', actors.addMovie); app.delete('/actors/:id/movies/:mid', actors.deleteMovie); //Movie endpoints app.get('/movies', movies.getAll); app.post('/movies', movies.createOne); app.get('/movies/:id', movies.getOne); app.put('/movies/:id', movies.updateOne); app.delete('/movies/:id', movies.deleteOne); app.post('/movies/:id/actors', movies.addActor); app.delete('/movies/:id/actors/:aid', movies.deleteActor); In Express 4, there is an alternative way to describe your routes. Routes that share a common URL, but use a different HTTP verb, can be grouped together as follows: app.route('/actors') .get(actors.getAll) .post(actors.createOne); app.route('/actors/:id') .get(actors.getOne) .put(actors.updateOne) .delete(actors.deleteOne); app.post('/actors/:id/movies', actors.addMovie); app.delete('/actors/:id/movies/:mid', actors.deleteMovie); app.route('/movies') .get(movies.getAll) .post(movies.createOne); app.route('/movies/:id') .get(movies.getOne) .put(movies.updateOne) .delete(movies.deleteOne); app.post('/movies/:id/actors', movies.addActor); app.delete('/movies/:id/actors/:aid', movies.deleteActor); Whether you prefer it this way or not is up to you. At least now you have a choice!
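Express 4 also provides a Router object, which lets us take this grouping one step further by mounting a whole family of routes under a common prefix. The following is a minimal sketch rather than code from the app we are building; the require path assumes the src/lib layout described earlier, and the handler module is shaped like the routes/actors.js file we flesh out later in this article:

var express = require('express');
var actors = require('../routes/actors'); //assumed handler module, defined later
var app = express();

//Routers behave like mini applications
var actorRouter = express.Router();
actorRouter.route('/')
  .get(actors.getAll)
  .post(actors.createOne);
actorRouter.route('/:id')
  .get(actors.getOne)
  .put(actors.updateOne)
  .delete(actors.deleteOne);
actorRouter.post('/:id/movies', actors.addMovie);
actorRouter.delete('/:id/movies/:mid', actors.deleteMovie);

//Every route above is now served under the /actors prefix
app.use('/actors', actorRouter);

One advantage of this approach is that the prefix lives in exactly one place, so renaming /actors to, say, /people would be a one-line change.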
We have not discussed the logic of the function being run for each endpoint. We will get to that shortly. Express allows us to easily CRUD our database objects, but how do we model our objects? Object modeling with Mongoose Mongoose is an object data modeling (ODM) library that allows you to define schemas for your data collections. You can find out more about Mongoose on the project website: http://mongoosejs.com/. To connect to a MongoDB instance through Mongoose, we first need to install it with npm install --save mongoose. The --save flag automatically adds the module to your package.json with the latest version; thus, it is always recommended to install your modules with the --save flag. For modules that you only need locally (for example, Mocha), you can use the --save-dev flag. For this project, we create a new file, db.js, under /src/lib, which requires Mongoose. The local connection to the MongoDB database is made in mongoose.connect as follows: var mongoose = require('mongoose'); module.exports = function(app) { mongoose.connect('mongodb://localhost/movies', { db: { safe: true } }, function(err) { if (err) { return console.log('Mongoose - connection error:', err); } }); return mongoose; }; In our movies database, we need separate schemas for actors and movies. As an example, we will go through object modeling in our actor model /src/models/actor.js by creating an actor schema as follows: // /src/models/actor.js var mongoose = require('mongoose'); var generateId = require('./plugins/generateId'); var actorSchema = new mongoose.Schema({ id: { type: Number, required: true, index: { unique: true } }, name: { type: String, required: true }, birth_year: { type: Number, required: true }, movies: [{ type : mongoose.Schema.ObjectId, ref : 'Movie' }] }); actorSchema.plugin(generateId()); module.exports = mongoose.model('Actor', actorSchema); Each actor has a unique id, a name, and a birth year. Each field also carries validators, such as its type and whether it is required. The model is exported upon definition (module.exports) so that we can reuse it directly in the app. Alternatively, you could fetch each registered model through Mongoose using mongoose.model('Actor'), but this would feel less explicitly coupled compared to our approach of directly requiring it. Similarly, we need a movie schema as well. We define the movie schema as follows: // /src/models/movie.js var mongoose = require('mongoose'); var generateId = require('./plugins/generateId'); var movieSchema = new mongoose.Schema({ id: { type: Number, required: true, index: { unique: true } }, title: { type: String, required: true }, year: { type: Number, required: true }, actors: [{ type : mongoose.Schema.ObjectId, ref : 'Actor' }] }); movieSchema.plugin(generateId()); module.exports = mongoose.model('Movie', movieSchema); Generating unique IDs In both our movie and actor schemas, we used a plugin called generateId(). While MongoDB automatically generates an ObjectId for each document using the _id field, we want to generate our own IDs that are more human readable and hence friendlier. We would also like to give users the opportunity to select an id of their choice. However, being able to choose an id can cause conflicts. If you were to choose an id that already exists, your POST request would be rejected. We should autogenerate an ID if the user does not pass one explicitly. Without this plugin, if either an actor or a movie is created without an explicit ID passed along by the user, the server would complain since the ID is required.
We can create middleware for Mongoose that assigns an id before we persist the object as follows: // /src/models/plugins/generateId.js module.exports = function() { return function generateId(schema) { //the true flag marks this as parallel middleware, giving us both next and done schema.pre('validate', true, function(next, done) { var instance = this; var model = instance.model(instance.constructor.modelName); if (instance.id == null) { model.findOne().sort('-id').exec(function(err, maxInstance) { if (err) { return done(err); } else { var maxId = (maxInstance && maxInstance.id) || 0; instance.id = maxId + 1; done(); } }); } else { done(); } }); }; }; There are a few important notes about this code. See what we did to get the var model? This makes the plugin generic so that it can be applied to multiple Mongoose schemas. Notice that there are two callbacks available: next and done. The next variable passes the code to the next pre-validation middleware. That's something you would usually put at the bottom of the function, right after you make your asynchronous call. This is generally a good thing since one of the advantages of asynchronous calls is that you can have many things running at the same time. However, in this case, we cannot call next because it would conflict with our model definition of id being required. Thus, we just stick to using the done callback when the logic is complete. Another concern arises due to the fact that MongoDB doesn't support transactions, which means you may have to account for this function failing in some edge cases. For example, if two calls to POST /actors happen at the same time, they will both have their IDs auto incremented to the same value. Now that we have the code for our generateId() plugin, we require it in our actor and movie schemas as follows: var generateId = require('./plugins/generateId'); actorSchema.plugin(generateId()); Validating your database Each key in the Mongoose schema defines a property that is associated with a SchemaType. For example, in our actor.js schema, the actor's name key is associated with a string SchemaType. String, number, date, buffer, boolean, mixed, objectId, and array are all valid schema types. In addition to schema types, numbers have min and max validators and strings have enum and match validators. Validation occurs when a document is being saved (.save()) and will return an error object, containing type, path, and value properties, if the validation has failed. Extracting functions to reusable middleware We can use our anonymous or named functions as middleware. To do so, we would export our functions by calling module.exports in routes/actors.js and routes/movies.js. Let's take a look at our routes/actors.js file. At the top of this file, we require the Mongoose schemas we defined before: var Actor = require('../models/actor'); This allows our Actor variable to access MongoDB using Mongoose functions such as find(), create(), and update(). It will follow the schema defined in the file /models/actor. Since actors are in movies, we will also need to require the Movie schema to show this relationship, as follows: var Movie = require('../models/movie'); Now that we have our schemas, we can begin defining the logic for the functions we described in our endpoints. For example, the endpoint GET /actors/:id will retrieve the actor with the corresponding ID from our database. Let's call this function getOne().
It is defined as follows: getOne: function(req, res, next) { Actor.findOne({ id: req.params.id }) .populate('movies') .exec(function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); res.status(200).json(actor); }); }, Here, we use the Mongoose findOne() method to retrieve the actor with id: req.params.id. There are no joins in MongoDB, so we use the .populate() method to retrieve the movies the actor is in. The .populate() method will retrieve documents from a separate collection based on its ObjectId. This function will return a status 400 if something went wrong with our Mongoose driver, a status 404 if the actor with :id is not found, and finally, it will return a status 200 along with the JSON of the actor object if an actor is found. We define all the functions required for the actor endpoints in this file. The result is as follows: // /src/routes/actors.js var Actor = require('../models/actor'); var Movie = require('../models/movie'); module.exports = { getAll: function(req, res, next) { Actor.find(function(err, actors) { if (err) return res.status(400).json(err); res.status(200).json(actors); }); }, createOne: function(req, res, next) { Actor.create(req.body, function(err, actor) { if (err) return res.status(400).json(err); res.status(201).json(actor); }); }, getOne: function(req, res, next) { Actor.findOne({ id: req.params.id }) .populate('movies') .exec(function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); res.status(200).json(actor); }); }, updateOne: function(req, res, next) { Actor.findOneAndUpdate({ id: req.params.id }, req.body, function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); res.status(200).json(actor); }); }, deleteOne: function(req, res, next) { Actor.findOneAndRemove({ id: req.params.id }, function(err) { if (err) return res.status(400).json(err); res.status(204).json(); }); }, addMovie: function(req, res, next) { Actor.findOne({ id: req.params.id }, function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); Movie.findOne({ id: req.body.id }, function(err, movie) { if (err) return res.status(400).json(err); if (!movie) return res.status(404).json(); actor.movies.push(movie); actor.save(function(err) { if (err) return res.status(500).json(err); res.status(201).json(actor); }); }); }); }, deleteMovie: function(req, res, next) { Actor.findOne({ id: req.params.id }, function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); actor.movies = []; //simplified: this clears the actor's entire movies list actor.save(function(err) { if (err) return res.status(400).json(err); res.status(204).json(actor); }); }); } }; For all of our movie endpoints, we need the same functions but applied to the movie collection. After exporting these two files, we require them in app.js (/src/lib/app.js) by simply adding: var actors = require('../routes/actors'); var movies = require('../routes/movies'); These are the actors and movies variables that our endpoint definitions refer to. By exporting our functions as reusable middleware, we keep our code clean and can refer to functions in our CRUD calls in the /routes folder. Testing Mocha is used as the test framework, along with should.js and supertest. The supertest module lets you test your HTTP assertions and API endpoints. The tests are placed in the root folder /test.
Tests are completely separate from any of the source code and are written to be readable in plain English, that is, you should be able to follow along with what is being tested just by reading through them. Well-written tests with good coverage can serve as a readme for the API, since they clearly describe the behavior of the entire app. The initial setup to test our movies API is the same for both /test/actors.js and /test/movies.js: var should = require('should'); var assert = require('assert'); var request = require('supertest'); var app = require('../src/lib/app'); In /test/actors.js, we test the basic CRUD operations: creating a new actor object, retrieving, editing, and deleting the actor object. An example test for the creation of a new actor is shown as follows: describe('Actors', function() { describe('POST actor', function() { it('should create an actor', function(done) { var actor = { 'id': '1', 'name': 'AxiomZen', 'birth_year': '2012' }; request(app) .post('/actors') .send(actor) .expect(201, done); }); }); }); We can see that the tests are readable in plain English. We create a new POST request for a new actor to the database with the id of 1, name of AxiomZen, and birth_year of 2012. Then, we send the request with the .send() function. Similar tests are present for GET and DELETE requests as given in the following code: describe('GET actor', function() { it('should retrieve actor from db', function(done) { request(app) .get('/actors/1') .expect(200, done); }); }); describe('DELETE actor', function() { it('should remove an actor', function(done) { request(app) .delete('/actors/1') .expect(204, done); }); }); To test our PUT request, we will edit the name and birth_year of our first actor as follows: describe('PUT actor', function() { it('should edit an actor', function(done) { var actor = { 'name': 'ZenAxiom', 'birth_year': '2011' }; request(app) .put('/actors/1') .send(actor) .expect(200, done); }); it('should have been edited', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.name.should.eql('ZenAxiom'); res.body.birth_year.should.eql(2011); done(); }); }); }); The first part of the test modifies the actor name and birth_year keys, sends a PUT request for /actors/1 (1 is the actor's id), and then saves the new information to the database. The second part of the test checks whether the database entry for the actor with id 1 has been changed. The name and birth_year values are checked against their expected values using .should.eql(). In addition to performing CRUD actions on the actor object, we can also perform these actions on the movies we add to each actor (associated by the actor's ID). The following snippet shows a test to add a new movie to our first actor (with the id of 1): describe('POST /actors/:id/movies', function() { it('should successfully add a movie to the actor', function(done) { var movie = { 'id': '1', 'title': 'Hello World', 'year': '2013' }; request(app) .post('/actors/1/movies') .send(movie) .expect(201, done); }); it('actor should have array of movies now', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.movies.should.eql(['1']); done(); }); }); }); The first part of the test creates a new movie object with id, title, and year keys, and sends a POST request to add the movie to the movies array of the actor with id of 1. The second part of the test sends a GET request to retrieve the actor with id of 1, which should now include an array with the new movie input.
We can similarly delete the movie entries as illustrated in the actors.js test file: describe('DELETE /actors/:id/movies/:movie_id', function() { it('should successfully remove a movie from actor', function(done) { request(app) .delete('/actors/1/movies/1') .expect(204, done); }); it('actor should no longer have that movie id', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.movies.should.eql([]); done(); }); }); }); Again, this code snippet should look familiar to you. The first part tests that sending a DELETE request specifying the actor ID and movie ID will delete that movie entry. In the second part, we make sure that the entry no longer exists by submitting a GET request to view the actor's details, where no movies should be listed. In addition to ensuring that the basic CRUD operations work, we also test our schema validations. The following code tests to make sure two actors with the same ID do not exist (IDs are specified as unique): it('should not allow you to create duplicate actors', function(done) { var actor = { 'id': '1', 'name': 'AxiomZen', 'birth_year': '2012' }; request(app) .post('/actors') .send(actor) .expect(400, done); }); We should expect code 400 (bad request) if we try to create an actor who already exists in the database. A similar set of tests is present for /test/movies.js. The purpose and outcome of each test should be evident by now. Summary In this article, we created a basic API that connects to MongoDB and supports CRUD methods. You should now be able to set up an API complete with tests, for any data, not just movies and actors! We hope this article has laid a good foundation for Express and API setup. To learn more about Express.js, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mastering Web Application Development with Express (https://www.packtpub.com/web-development/mastering-web-application-development-express) Advanced Express Web Application Development (https://www.packtpub.com/web-development/advanced-express-web-application-development) Resources for Article: Further resources on this subject: Metal API: Get closer to the bare metal with Metal API [Article] Building a Basic Express Site [Article] Introducing Sails.js [Article]
Setting Up the Environment for ASP.NET MVC 6

Packt
02 Nov 2016
9 min read
In this article by Mugilan T. S. Raghupathi, the author of the book Learning ASP.NET Core MVC Programming, we explain the setup for getting started with programming in ASP.NET MVC 6. In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment or configuration problems. With respect to .NET, Visual Studio is the de facto standard IDE (Integrated Development Environment) for building web applications in .NET. In this article, you'll be learning about the following topics: Purpose of an IDE Different offerings of Visual Studio Installation of Visual Studio Community 2015 Creating your first ASP.NET MVC 6 project and project structure (For more resources related to this topic, see here.) Purpose of an IDE First of all, let us see why we need an IDE when you could simply type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things to be productive: Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the different constructs of your programming language, such as if conditions and for loops. In Visual Studio, all of your keywords are highlighted in blue. IntelliSense: IntelliSense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. For example, when you type a dot after an object, IntelliSense lists all the methods available on that object. This helps developers write code faster and with less effort. Build/Publish: It is helpful if you can build or publish the application using a single click or a single command. Visual Studio provides several options out of the box to build a separate project or the complete solution with a single click. This makes the build and deployment of your application easier. Templates: Depending on the type of the application, you might have to create different folders and files along with the boilerplate code. So, it's very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates different kinds of templates with the code for ASP.NET Web Forms, MVC, and Web API to get you up and running. Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, the IDE should highlight the issue and provide information to help you fix it. Visual Studio offerings There are different versions of Visual Studio 2015 available to satisfy the various needs of developers and organizations. Primarily, there are four versions of Visual Studio 2015: Visual Studio Community Visual Studio Professional Visual Studio Enterprise Visual Studio Test Professional System requirements Visual Studio can be installed on computers running Windows 7 Service Pack 1 and above. You can find the complete list of requirements at the following URL: https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx Visual Studio Community 2015 This is a fully featured IDE available for building desktop applications, web applications, and cloud services. It is available free of cost for individual users.
You can download Visual Studio Community from the following URL: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx Throughout this book, we will be using the Visual Studio Community version for development as it is available free of cost to individual developers. Visual Studio Professional As the name implies, Visual Studio Professional is targeted at professional developers and contains features such as CodeLens for improving your team's productivity. It also has features for greater collaboration within the team. Visual Studio Enterprise Visual Studio Enterprise is a full-blown version of Visual Studio with a complete set of features for collaboration, including Team Foundation Server, modeling, and testing. Visual Studio Test Professional Visual Studio Test Professional is primarily aimed at the testing team or the people who are involved in testing, which might include developers. In any software development methodology, whether the waterfall model or agile, developers and testers need to execute test cases for the code being developed. Installation of Visual Studio Community Follow the given steps to install Visual Studio Community 2015: Visit the following link to download Visual Studio Community 2015: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later: Run the downloaded executable file: Click on Run and the following screen will appear: There are two types of installation: default and custom. The default installation installs the most commonly used features and will cover most developer use cases. The custom installation lets you choose the components that you want installed, such as the following: Click on the Install button after selecting the installation type. Depending on your memory and processor speed, it will take 1 to 2 hours to install. Once all the components are installed, you will see the following Setup completed screen: Installation of ASP.NET 5 When we install the Visual Studio Community 2015 edition, ASP.NET 5 is not installed by default. As the ASP.NET MVC 6 application runs on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install ASP.NET 5: Get ASP.NET 5 from https://get.asp.net/ Another option is to install it from the New Project template in Visual Studio This option is a bit easier as you don't need to search and install. The following are the detailed steps: Create a new project by selecting File | New Project or using the shortcut Ctrl + Shift + N: Select ASP.NET Web Application, enter the project name, and click on OK: The following window will appear to select the template. Select the Get ASP.NET 5 RC option as shown in the following screenshot: When you click on OK in the preceding screen, the following window will appear: When you click on the Run or Save button in the preceding dialog, you will get the following screen asking for ASP.NET 5 Setup. Select the I agree to the license terms and conditions checkbox and click on the Install button: Installation of ASP.NET 5 might take a couple of hours and once it is completed you'll get the following screen: During the process of installation of ASP.NET 5 RC1 Update 1, it might ask you to close Visual Studio. If asked, please do so.
Project structure in an ASP.NET 5 application Once ASP.NET 5 RC1 is successfully installed, open Visual Studio and create a new project by selecting the ASP.NET 5 Web Application as shown in the following screenshot: A new project will be created and its structure will look like the following: File-based project Whenever you add a file or folder in your file system (inside our ASP.NET 5 project folder), the changes will be automatically reflected in your project structure. Support for full .NET and .NET Core You can see a couple of references in the preceding project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of the full-blown .NET Framework, whereas DNX Core 5.0 supports only the core functionality, which would be used if you are deploying the application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET MVC 6 application on a Linux machine will be explained in the book. The Project.json package Usually, in an ASP.NET web application, we would have assemblies as references, with the list of references kept in a C# project file. But in an ASP.NET 5 application, we have a JSON file named Project.json, which contains all the necessary configuration along with all its .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager provided by Microsoft, which makes package installation and uninstallation easier. Prior to NuGet, all dependencies had to be installed manually. The dependencies section identifies the list of dependent packages available for the application. The frameworks section informs us about the frameworks supported by the application. The scripts section identifies the scripts to be executed during the build process of the application. The include and exclude properties can be used in any section to include or exclude any item. Controllers This folder contains all of your controller files. Controllers are responsible for handling requests, communicating with the models, and generating the views for the same. Models All of your classes representing the domain data will be present in this folder. Views Views are files which contain your frontend components and are presented to the end users of the application. This folder contains all of your Razor view files. Migrations Any database-related migrations will be available in this folder. Database migrations are C# files that contain the history of any database changes made through Entity Framework (an ORM framework). This will be explained in detail in the book. The wwwroot folder This folder acts as a root folder and is the ideal container for all of your static files, such as CSS and JavaScript files. All files placed in the wwwroot folder can be accessed directly from the path without going through a controller. Other files The appsettings.json file is the config file where you can configure application-level settings. Bower, npm (Node Package Manager), and gulp (driven by gulpfile.js) are client-side technologies supported by ASP.NET 5 applications. Summary In this article, you have learnt about the offerings in Visual Studio. Step-by-step instructions were provided for the installation of the Visual Studio Community version, which is freely available to individual developers. We also discussed the new project structure of an ASP.NET 5 application and the changes when compared to previous versions.
In this book, we are going to discuss controllers and their roles and functionalities. We'll also build a controller and its associated action methods and see how they work. Resources for Article: Further resources on this subject: Designing your very own ASP.NET MVC Application [article] Debugging Your .NET Application [article] Using ASP.NET Controls in SharePoint [article]
Making a Web Server in Node.js

Packt
25 Feb 2016
38 min read
In this article, we will cover the following topics: Setting up a router Serving static files Caching content in memory for immediate delivery Optimizing performance with streaming Securing against filesystem hacking exploits (For more resources related to this topic, see here.) One of the great qualities of Node is its simplicity. Unlike PHP or ASP, there is no separation between the web server and code, nor do we have to customize large configuration files to get the behavior we want. With Node, we can create the web server, customize it, and deliver content. All this can be done at the code level. This article demonstrates how to create a web server with Node and feed content through it, while implementing security and performance enhancements to cater for various situations. If we don't have Node installed yet, we can head to http://nodejs.org and hit the INSTALL button on the homepage. This will download the relevant file to install Node on our operating system. Setting up a router In order to deliver web content, we need to make a Uniform Resource Identifier (URI) available. This recipe walks us through the creation of an HTTP server that exposes routes to the user. Getting ready First, let's create our server file. If our main purpose is to expose server functionality, it's general practice to name the file server.js (because the npm start command runs node server.js by default). We could put this new server.js file in a new folder. It's also a good idea to install and use supervisor. We use npm (the module downloading and publishing command-line application that ships with Node) to install it. On the command line, we write the following command: sudo npm -g install supervisor Essentially, sudo grants administrative privileges on Linux and Mac OS X systems. If we are using Node on Windows, we can drop the sudo part in any of our commands. The supervisor module will conveniently autorestart our server when we save our changes. To kick things off, we can start our server.js file with the supervisor module by executing the following command: supervisor server.js For more on possible arguments and the configuration of supervisor, check out https://github.com/isaacs/node-supervisor. How to do it... In order to create the server, we need the HTTP module. So let's load it and use the http.createServer method as follows: var http = require('http'); http.createServer(function (request, response) { response.writeHead(200, {'Content-Type': 'text/html'}); response.end('Woohoo!'); }).listen(8080); Now, if we save our file and access localhost:8080 on a web browser or using curl, our browser (or curl) will exclaim Woohoo! But the same will occur at localhost:8080/foo. Indeed, any path will render the same behavior. So let's build in some routing. We can use the path module to extract the basename of the path (the final part of the path) and reverse any URI encoding from the client with decodeURI as follows: var http = require('http'); var path = require('path'); http.createServer(function (request, response) { var lookup = path.basename(decodeURI(request.url)); We now need a way to define our routes. One option is to use an array of objects as follows: var pages = [ {route: '', output: 'Woohoo!'}, {route: 'about', output: 'A simple routing with Node example'}, {route: 'another page', output: function() {return 'Here\'s ' + this.route;}}, ]; Our pages array should be placed above the http.createServer call.
Within our server, we need to loop through our array and see if the lookup variable matches any of our routes. If it does, we can supply the output. We'll also implement some 404 error-related handling as follows: http.createServer(function (request, response) {   var lookup=path.basename(decodeURI(request.url));   pages.forEach(function(page) {     if (page.route === lookup) {       response.writeHead(200, {'Content-Type': 'text/html'});       response.end(typeof page.output === 'function'       ? page.output() : page.output);     }   });   if (!response.finished) {      response.writeHead(404);      response.end('Page Not Found!');   } }).listen(8080); How it works... The callback function we provide to http.createServer gives us all the functionality we need to interact with our server through the request and response objects. We use request to obtain the requested URL and then we acquire its basename with path. We also use decodeURI, without which another page route would fail as our code would try to match another%20page against our pages array and return false. Once we have our basename, we can match it in any way we want. We could send it in a database query to retrieve content, use regular expressions to effectuate partial matches, or we could match it to a filename and load its contents. We could have used a switch statement to handle routing, but our pages array has several advantages—it's easier to read, easier to extend, and can be seamlessly converted to JSON. We loop through our pages array using forEach. Node is built on Google's V8 engine, which provides us with a number of ECMAScript 5 (ES5) features. These features can't be used in all browsers as they're not yet universally implemented, but using them in Node is no problem! The forEach function is an ES5 implementation; the ES3 way is to use the less convenient for loop. While looping through each object, we check its route property. If we get a match, we write the 200 OK status and content-type headers, and then we end the response with the object's output property. The response.end method allows us to pass a parameter to it, which it writes just before finishing the response. In response.end, we have used a ternary operator (?:) to conditionally call page.output as a function or simply pass it as a string. Notice that the another page route contains a function instead of a string. The function has access to its parent object through the this variable, and allows for greater flexibility in assembling the output we want to provide. In the event that there is no match in our forEach loop, response.end would never be called and therefore the client would continue to wait for a response until it times out. To avoid this, we check the response.finished property and if it's false, we write a 404 header and end the response. The response.finished flag is affected by the forEach callback, yet it's not nested within the callback. Callback functions are mostly used for asynchronous operations, so on the surface this looks like a potential race condition; however, the forEach loop does not operate asynchronously; it blocks until all loops are complete. There's more... There are many ways to extend and alter this example. There are also some great non-core modules available that do the legwork for us. Simple multilevel routing Our routing so far only deals with a single level path. A multilevel path (for example, /about/node) will simply return a 404 error message. 
We can alter our object to reflect a subdirectory-like structure, remove path, and use request.url for our routes instead of path.basename as follows: var http = require('http'); var pages = [ {route: '/', output: 'Woohoo!'}, {route: '/about/this', output: 'Multilevel routing with Node'}, {route: '/about/node', output: 'Evented I/O for V8 JavaScript.'}, {route: '/another page', output: function () {return 'Here\'s ' + this.route; }} ]; http.createServer(function (request, response) { var lookup = decodeURI(request.url); When serving static files, request.url must be cleaned prior to fetching a given file. Check out the Securing against filesystem hacking exploits recipe in this article. Multilevel routing could be taken further; we could build and then traverse a more complex object as follows: {route: 'about', childRoutes: [ {route: 'node', output: 'Evented I/O for V8 JavaScript'}, {route: 'this', output: 'Complex Multilevel Example'} ]} After the third or fourth level, this object would become a leviathan to look at. We could instead create a helper function to define our routes that essentially pieces our object together for us. Alternatively, we could use one of the excellent noncore routing modules provided by the open source Node community. Excellent solutions already exist that provide helper methods to handle the increasing complexity of scalable multilevel routing. Parsing the querystring Two other useful core modules are url and querystring. The url.parse method accepts two parameters: first the URL string (in our case, this will be request.url) and second a Boolean parameter named parseQueryString. If the latter is set to true, url.parse lazy loads the querystring module (saving us the need to require it) to parse the query into an object. This makes it easy for us to interact with the query portion of a URL as shown in the following code: var http = require('http'); var url = require('url'); var pages = [ {id: '1', route: '', output: 'Woohoo!'}, {id: '2', route: 'about', output: 'A simple routing with Node example'}, {id: '3', route: 'another page', output: function () { return 'Here\'s ' + this.route; } }, ]; http.createServer(function (request, response) { var id = url.parse(decodeURI(request.url), true).query.id; if (id) { pages.forEach(function (page) { if (page.id === id) { response.writeHead(200, {'Content-Type': 'text/html'}); response.end(typeof page.output === 'function' ? page.output() : page.output); } }); } if (!response.finished) { response.writeHead(404); response.end('Page Not Found'); } }).listen(8080); With the added id properties, we can access our object data by, for instance, localhost:8080?id=2. The routing modules There's an up-to-date list of various routing modules for Node at https://github.com/joyent/node/wiki/modules#wiki-web-frameworks-routers. These community-made routers cater to various scenarios. It's important to research the activity and maturity of a module before taking it into a production environment. NodeZoo (http://nodezoo.com) is an excellent tool to research the state of a Node module. See also The Serving static files and Securing against filesystem hacking exploits recipes discussed in this article Serving static files If we have information stored on disk that we want to serve as web content, we can use the fs (filesystem) module to load our content and pass it through the http.createServer callback.
This is a basic conceptual starting point for serving static files; as we will learn in the following recipes, there are much more efficient solutions. Getting ready We'll need some files to serve. Let's create a directory named content, containing the following three files: index.html styles.css script.js Add the following code to the HTML file index.html: <html> <head> <title>Yay Node!</title> <link rel=stylesheet href=styles.css type=text/css> <script src=script.js type=text/javascript></script> </head> <body> <span id=yay>Yay!</span> </body> </html> Add the following code to the script.js JavaScript file: window.onload = function() { alert('Yay Node!'); }; And finally, add the following code to the CSS file styles.css: #yay {font-size:5em;background:blue;color:yellow;padding:0.5em} How to do it... As in the previous recipe, we'll be using the core modules http and path. We'll also need to access the filesystem, so we'll require fs as well. With the help of the following code, let's create the server and use the path module to check if a file exists: var http = require('http'); var path = require('path'); var fs = require('fs'); http.createServer(function (request, response) { var lookup = path.basename(decodeURI(request.url)) || 'index.html'; var f = 'content/' + lookup; fs.exists(f, function (exists) { console.log(exists ? lookup + " is there" : lookup + " doesn't exist"); }); }).listen(8080); If we haven't already done so, we can start our server.js file by running the following command: supervisor server.js Try loading localhost:8080/foo. The console will say foo doesn't exist, because it doesn't. The localhost:8080/script.js URL will tell us that script.js is there, because it is. Before we can serve a file, we should let the client know the content-type header, which we can determine from the file extension. So let's make a quick map using an object as follows: var mimeTypes = { '.js' : 'text/javascript', '.html': 'text/html', '.css' : 'text/css' }; We could extend our mimeTypes map later to support more types. Modern browsers may be able to interpret certain mime types (like text/javascript) without the server sending a content-type header, but older browsers or less common mime types will rely upon the correct content-type header being sent from the server. Remember to place mimeTypes outside of the server callback, since we don't want to initialize the same object on every client request. If the requested file exists, we can convert our file extension into a content-type header by feeding path.extname into mimeTypes and then pass our retrieved content-type to response.writeHead. If the requested file doesn't exist, we'll write out a 404 error and end the response as follows: //requires variables, mimeType object... http.createServer(function (request, response) { var lookup = path.basename(decodeURI(request.url)) || 'index.html'; var f = 'content/' + lookup; fs.exists(f, function (exists) { if (exists) { fs.readFile(f, function (err, data) { if (err) {response.writeHead(500); response.end('Server Error!'); return; } var headers = {'Content-type': mimeTypes[path.extname(lookup)]}; response.writeHead(200, headers); response.end(data); }); return; } response.writeHead(404); //no such file found! response.end(); }); }).listen(8080); At the moment, there is still no content sent to the client.
We have to get this content from our file, so we wrap the response handling in an fs.readFile method callback as follows: //http.createServer, inside fs.exists: if (exists) { fs.readFile(f, function(err, data) { var headers = {'Content-type': mimeTypes[path.extname(lookup)]}; response.writeHead(200, headers); response.end(data); }); return; } Before we finish, let's apply some error handling to our fs.readFile callback as follows: //requires variables, mimeType object... //http.createServer, path exists, inside if(exists): fs.readFile(f, function(err, data) { if (err) {response.writeHead(500); response.end('Server Error!'); return; } var headers = {'Content-type': mimeTypes[path.extname(lookup)]}; response.writeHead(200, headers); response.end(data); }); return; } Notice that return stays outside of the fs.readFile callback. We are returning from the fs.exists callback to prevent further code execution (for example, sending the 404 error). Placing a return statement in an if statement is similar to using an else branch. However, the pattern of the return statement inside the if block is encouraged instead of if else, as it eliminates a level of nesting. Nesting can be particularly prevalent in Node due to performing a lot of asynchronous tasks, which tend to use callback functions. So, now we can navigate to localhost:8080, which will serve our index.html file. The index.html file makes calls to our script.js and styles.css files, which our server also delivers with appropriate mime types. We can see the result in the following screenshot: This recipe serves to illustrate the fundamentals of serving static files. Remember, this is not an efficient solution! In a real world situation, we don't want to make an I/O call every time a request hits the server; this is very costly, especially with larger files. In the following recipes, we'll learn better ways of serving static files. How it works... Our script creates a server and declares a variable called lookup. We assign a value to lookup using the double pipe || (OR) operator. This defines a default route if path.basename is empty. Then we pass lookup to a new variable that we named f in order to prepend our content directory to the intended filename. Next, we run f through the fs.exists method and check the exists parameter in our callback to see if the file is there. If the file does exist, we read it asynchronously using fs.readFile. If there is a problem accessing the file, we write a 500 server error, end the response, and return from the fs.readFile callback. We can test the error-handling functionality by removing read permissions from index.html as follows: chmod -r index.html Doing so will cause the server to throw the 500 server error status code. To set things right again, run the following command: chmod +r index.html chmod is a Unix-type system-specific command. If we are using Windows, there's no need to set file permissions in this case. As long as we can access the file, we grab the content-type header using our handy mimeTypes mapping object, write the headers, end the response with data loaded from the file, and finally return from the function. If the requested file does not exist, we bypass all this logic, write a 404 error message, and end the response. There's more... The favicon file is something to watch out for. We will explore it in this section. The favicon gotcha When using a browser to test our server, sometimes an unexpected server hit can be observed.
This is the browser requesting the default favicon.ico icon file that servers can provide. Apart from the initial confusion of seeing additional hits, this is usually not a problem. If the favicon request does begin to interfere, we can handle it as follows: if (request.url === '/favicon.ico') { console.log('Not found: ' + f); response.end(); return; } If we wanted to be more polite to the client, we could also inform it of a 404 error by using response.writeHead(404) before issuing response.end. See also The Caching content in memory for immediate delivery recipe The Optimizing performance with streaming recipe The Securing against filesystem hacking exploits recipe Caching content in memory for immediate delivery Directly accessing storage on each client request is not ideal. For this task, we will explore how to enhance server efficiency by accessing the disk only on the first request, caching the data from file for that first request, and serving all further requests out of the process memory. Getting ready We are going to improve upon the code from the previous task, so we'll be working with server.js and in the content directory, with index.html, styles.css, and script.js. How to do it... Let's begin by looking at the script from the previous recipe, Serving static files: var http = require('http'); var path = require('path'); var fs = require('fs'); var mimeTypes = { '.js' : 'text/javascript', '.html': 'text/html', '.css' : 'text/css' }; http.createServer(function (request, response) { var lookup = path.basename(decodeURI(request.url)) || 'index.html'; var f = 'content/'+lookup; fs.exists(f, function (exists) { if (exists) { fs.readFile(f, function(err, data) { if (err) { response.writeHead(500); response.end('Server Error!'); return; } var headers = {'Content-type': mimeTypes[path.extname(lookup)]}; response.writeHead(200, headers); response.end(data); }); return; } response.writeHead(404); //no such file found! response.end('Page Not Found'); }); }).listen(8080); We need to modify this code to only read the file once, load its contents into memory, and respond to all requests for that file from memory afterwards. To keep things simple and preserve maintainability, we'll extract our cache handling and content delivery into a separate function. So above http.createServer, and below mimeTypes, we'll add the following: var cache = {}; function cacheAndDeliver(f, cb) { if (!cache[f]) { fs.readFile(f, function(err, data) { if (!err) { cache[f] = {content: data}; } cb(err, data); }); return; } console.log('loading ' + f + ' from cache'); cb(null, cache[f].content); } //http.createServer A new cache object and a new function called cacheAndDeliver have been added to store our files in memory. Our function takes the same parameters as fs.readFile so we can replace fs.readFile in the http.createServer callback while leaving the rest of the code intact as follows: //...inside http.createServer: fs.exists(f, function (exists) { if (exists) { cacheAndDeliver(f, function(err, data) { if (err) { response.writeHead(500); response.end('Server Error!'); return; } var headers = {'Content-type': mimeTypes[path.extname(f)]}; response.writeHead(200, headers); response.end(data); }); return; } //rest of path exists code (404 handling)...
When we execute our server.js file and access localhost:8080 twice consecutively, the second request causes the console to display the following output: loading content/index.html from cache loading content/styles.css from cache loading content/script.js from cache How it works... We defined a function called cacheAndDeliver, which, like fs.readFile, takes a filename and callback as parameters. This is great because we can pass the exact same callback of fs.readFile to cacheAndDeliver, padding the server out with caching logic without adding any extra complexity visually to the inside of the http.createServer callback. As it stands, the worth of abstracting our caching logic into an external function is arguable, but the more we build on the server's caching abilities, the more feasible and useful this abstraction becomes. Our cacheAndDeliver function checks to see if the requested content is already cached. If not, we call fs.readFile and load the data from disk. Once we have this data, we may as well hold onto it, so it's placed into the cache object referenced by its file path (the f variable). The next time anyone requests the file, cacheAndDeliver will see that we have the file stored in the cache object and will issue an alternative callback containing the cached data. Notice that we fill the cache[f] property with another new object containing a content property. This makes it easier to extend the caching functionality in the future as we would just have to place extra properties into our cache[f] object and supply logic that interfaces with these properties accordingly. There's more... If we were to modify the files we are serving, the changes wouldn't be reflected until we restart the server. We can do something about that. Reflecting content changes To detect whether a requested file has changed since we last cached it, we must know when the file was cached and when it was last modified. To record when the file was last cached, let's extend the cache[f] object as follows: cache[f] = {content: data, timestamp: Date.now() /* store a Unix timestamp */ }; That was easy! Now let's find out when the file was updated last. The fs.stat method returns an object as the second parameter of its callback. This object contains the same useful information as the command-line GNU (GNU's Not Unix!) coreutils stat. The fs.stat function supplies three time-related properties: last accessed (atime), last modified (mtime), and last changed (ctime). The difference between mtime and ctime is that ctime will reflect any alterations to the file, whereas mtime will only reflect alterations to the content of the file. Consequently, if we changed the permissions of a file, ctime would be updated but mtime would stay the same. We want to pay attention to permission changes as they happen so let's use the ctime property as shown in the following code: //requires and mimeType object.... var cache = {}; function cacheAndDeliver(f, cb) { fs.stat(f, function (err, stats) { if (err) { return console.log('Oh no!, Error', err); } var lastChanged = Date.parse(stats.ctime), isUpdated = (cache[f]) && lastChanged > cache[f].timestamp; if (!cache[f] || isUpdated) { fs.readFile(f, function (err, data) { console.log('loading ' + f + ' from file'); //rest of cacheAndDeliver }); //end of fs.stat } If we're using Node on Windows, we may have to substitute ctime with mtime, since ctime is not fully supported prior to Version 0.10.12.
The contents of cacheAndDeliver have been wrapped in an fs.stat callback, two variables have been added, and the if(!cache[f]) statement has been modified. We parse the ctime property of the second parameter dubbed stats using Date.parse to convert it to milliseconds since midnight, January 1st, 1970 (the Unix epoch) and assign it to our lastChanged variable. Then we check whether the requested file's last changed time is greater than when we cached the file (provided the file is indeed cached) and assign the result to our isUpdated variable. After that, it's merely a case of adding the isUpdated Boolean to the conditional if(!cache[f]) statement via the || (or) operator. If the file is newer than our cached version (or if it isn't yet cached), we load the file from disk into the cache object. See also The Optimizing performance with streaming recipe discussed in this article Optimizing performance with streaming Caching content certainly improves upon reading a file from disk for every request. However, with fs.readFile, we are reading the whole file into memory before sending it out in a response object. For better performance, we can stream a file from disk and pipe it directly to the response object, sending data straight to the network socket a piece at a time. Getting ready We are building on our code from the last example, so let's get server.js, index.html, styles.css, and script.js ready. How to do it... We will be using fs.createReadStream to initialize a stream, which can be piped to the response object. In this case, implementing fs.createReadStream within our cacheAndDeliver function isn't ideal because the event listeners of fs.createReadStream will need to interface with the request and response objects, which for the sake of simplicity would preferably be dealt with in the http.createServer callback. For brevity's sake, we will discard our cacheAndDeliver function and implement basic caching within the server callback as follows: //...snip... requires, mime types, createServer, lookup and f //  vars... fs.exists(f, function (exists) { if (exists) { var headers = {'Content-type': mimeTypes[path.extname(f)]}; if (cache[f]) { response.writeHead(200, headers); response.end(cache[f].content); return; } //...snip... rest of server code... Later on, we will fill cache[f].content while we are interfacing with the readStream object. The following code shows how we use fs.createReadStream: var s = fs.createReadStream(f); The preceding code will return a readStream object that streams the file, which is pointed at by the variable f. The readStream object emits events that we need to listen to. We can listen with the addListener method or use the shorthand on method as follows: var s = fs.createReadStream(f).on('open', function () { //do stuff when the readStream opens }); Because createReadStream returns the readStream object, we can latch our event listener straight onto it using method chaining with dot notation. Each stream is only going to open once; we don't need to keep listening to it.
Therefore, we can use the once method instead of on to automatically stop listening after the first event occurrence as follows: var s = fs.createReadStream(f).once('open', function () {   //do stuff when the readStream opens }); Before we fill out the open event callback, let's implement some error handling as follows: var s = fs.createReadStream(f).once('open', function () {   //do stuff when the readStream opens }).once('error', function (e) {   console.log(e);   response.writeHead(500);   response.end('Server Error!'); }); The key to this whole endeavor is the stream.pipe method. This is what enables us to take our file straight from disk and stream it directly to the network socket via our response object as follows: var s = fs.createReadStream(f).once('open', function () {   response.writeHead(200, headers);   this.pipe(response); }).once('error', function (e) {   console.log(e);   response.writeHead(500);   response.end('Server Error!'); }); But what about ending the response? Conveniently, stream.pipe detects when the stream has ended and calls response.end for us. There's one other event we need to listen to, for caching purposes. Within our fs.exists callback, underneath the createReadStream code block, we write the following code: fs.stat(f, function(err, stats) {   var bufferOffset = 0;   cache[f] = {content: new Buffer(stats.size)};   s.on('data', function (chunk) {     chunk.copy(cache[f].content, bufferOffset);     bufferOffset += chunk.length;   }); }); //end of createReadStream We've used the data event to capture the buffer as it's being streamed, and copied it into a buffer that we supplied to cache[f].content, using fs.stat to obtain the file size for the file's cache buffer. For this case, we're using the classic mode data event instead of the readable event coupled with stream.read() (see http://nodejs.org/api/stream.html#stream_readable_read_size_1) because it best suits our aim, which is to grab data from the stream as soon as possible. How it works... Instead of the client waiting for the server to load the entire file from disk prior to sending it to the client, we use a stream to load the file in small ordered pieces and promptly send them to the client. With larger files, this is especially useful as there is minimal delay between the file being requested and the client starting to receive the file. We did this by using fs.createReadStream to start streaming our file from disk. The fs.createReadStream method creates a readStream object, which inherits from the EventEmitter class. The EventEmitter class accomplishes the evented part pretty well. Due to this, we'll be using listeners instead of callbacks to control the flow of stream logic. We then added an open event listener using the once method since we want to stop listening to the open event once it is triggered. We respond to the open event by writing the headers and using the stream.pipe method to shuffle the incoming data straight to the client. If the client becomes overwhelmed with processing, stream.pipe applies backpressure, which means that the incoming stream is paused until the backlog of data is handled. While the response is being piped to the client, the content cache is simultaneously being filled. To achieve this, we had to create an instance of the Buffer class for our cache[f].content property. A Buffer class must be supplied with a size (or array or string), which in our case is the size of the file. 
To get the size, we used the asynchronous fs.stat method and captured the size property in the callback. The data event returns a Buffer variable as its only callback parameter. The default value of bufferSize for a stream is 64 KB; any file whose size is less than the value of the bufferSize property will only trigger one data event because the whole file will fit into the first chunk of data. But for files that are greater than the value of the bufferSize property, we have to fill our cache[f].content property one piece at a time. Changing the default readStream buffer size We can change the buffer size of our readStream object by passing an options object with a bufferSize property as the second parameter of fs.createReadStream. For instance, to double the buffer, you could use fs.createReadStream(f,{bufferSize: 128 * 1024});. We cannot simply concatenate each chunk with cache[f].content because this will coerce binary data into string format, which, though no longer in binary format, will later be interpreted as binary. Instead, we have to copy all the little binary buffer chunks into our binary cache[f].content buffer. We created a bufferOffset variable to assist us with this. Each time we add another chunk to our cache[f].content buffer, we update our new bufferOffset property by adding the length of the chunk buffer to it. When we call the Buffer.copy method on the chunk buffer, we pass bufferOffset as the second parameter, so our cache[f].content buffer is filled correctly. Moreover, operating with the Buffer class renders performance enhancements with larger files because it bypasses the V8 garbage-collection methods, which tend to fragment a large amount of data, thus slowing down Node's ability to process them. There's more... While streaming has solved the problem of waiting for files to be loaded into memory before delivering them, we are nevertheless still loading files into memory via our cache object. With larger files or a large number of files, this could have potential ramifications. Protecting against process memory overruns Streaming allows for intelligent and minimal use of memory for processing large memory items. But even with well-written code, some apps may require significant memory. There is a limited amount of heap memory. By default, V8's memory is set to 1400 MB on 64-bit systems and 700 MB on 32-bit systems. This can be altered by running node with --max-old-space-size=N, where N is the amount of megabytes (the actual maximum amount that it can be set to depends upon the OS, whether we're running on a 32-bit or 64-bit architecture—a 32-bit may peak out around 2 GB and of course the amount of physical RAM available). The --max-old-space-size method doesn't apply to buffers, since it applies to the v8 heap (memory allocated for JavaScript objects and primitives) and buffers are allocated outside of the v8 heap. If we absolutely had to be memory intensive, we could run our server on a large cloud platform, divide up the logic, and start new instances of node using the child_process class, or better still the higher level cluster module. There are other more advanced ways to increase the usable memory, including editing and recompiling the v8 code base. The http://blog.caustik.com/2012/04/11/escape-the-1-4gb-v8-heap-limit-in-node-js link has some tips along these lines. In this case, high memory usage isn't necessarily required and we can optimize our code to significantly reduce the potential for memory overruns. 
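While tuning buffer and cache behavior, it can help to watch the process's actual memory profile. This is an optional aid rather than part of the recipe; a minimal sketch using process.memoryUsage() might look as follows:

// Log resident set size and V8 heap usage every ten seconds.
// Buffers are allocated outside the V8 heap, so watch rss as well as heapUsed.
setInterval(function () {
  var mem = process.memoryUsage();
  console.log('rss: ' + (mem.rss / 1048576).toFixed(1) + ' MB, ' +
    'heapUsed: ' + (mem.heapUsed / 1048576).toFixed(1) + ' MB');
}, 10000);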
There is less benefit to caching larger files because the slight speed improvement relative to the total download time is negligible, while the cost of caching them is quite significant in ratio to our available process memory. We can also improve cache efficiency by implementing an expiration time on cache objects, which can then be used to clean the cache, consequently removing files in low demand and prioritizing high demand files for faster delivery. Let's rearrange our cache object slightly as follows: var cache = {   store: {},   maxSize : 26214400, //(bytes) 25mb } For a clearer mental model, we're making a distinction between the cache object as a functioning entity and the cache object as a store (which is a part of the broader cache entity). Our first goal is to only cache files under a certain size; we've defined cache.maxSize for this purpose. All we have to do now is insert an if condition within the fs.stat callback as follows: fs.stat(f, function (err, stats) {   if (stats.size<cache.maxSize) {     var bufferOffset = 0;     cache.store[f] = {content: new Buffer(stats.size),       timestamp: Date.now() };     s.on('data', function (data) {       data.copy(cache.store[f].content, bufferOffset);       bufferOffset += data.length;     });   } }); Notice that we also slipped in a new timestamp property into our cache.store[f] method. This is for our second goal—cleaning the cache. Let's extend cache as follows: var cache = {   store: {},   maxSize: 26214400, //(bytes) 25mb   maxAge: 5400 * 1000, //(ms) 1 and a half hours   clean: function(now) {     var that = this;     Object.keys(this.store).forEach(function (file) {       if (now > that.store[file].timestamp + that.maxAge) {         delete that.store[file];       }     });   } }; So in addition to maxSize, we've created a maxAge property and added a clean method. We call cache.clean at the bottom of the server with the help of the following code: //all of our code prior   cache.clean(Date.now()); }).listen(8080); //end of the http.createServer The cache.clean method loops through the cache.store function and checks to see if it has exceeded its specified lifetime. If it has, we remove it from the store. One further improvement and then we're done. The cache.clean method is called on each request. This means the cache.store function is going to be looped through on every server hit, which is neither necessary nor efficient. It would be better if we clean the cache, say, every two hours or so. We'll add two more properties to cache—cleanAfter to specify the time between cache cleans, and cleanedAt to determine how long it has been since the cache was last cleaned, as follows: var cache = {   store: {},   maxSize: 26214400, //(bytes) 25mb   maxAge : 5400 * 1000, //(ms) 1 and a half hours   cleanAfter: 7200 * 1000,//(ms) two hours   cleanedAt: 0, //to be set dynamically   clean: function (now) {     if (now - this.cleanAfter>this.cleanedAt) {       this.cleanedAt = now;       that = this;       Object.keys(this.store).forEach(function (file) {         if (now > that.store[file].timestamp + that.maxAge) {           delete that.store[file];         }       });     }   } }; So we wrap our cache.clean method in an if statement, which will allow a loop through cache.store only if it has been longer than two hours (or whatever cleanAfter is set to) since the last clean. 
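To convince ourselves that the clean method behaves as intended, we can exercise it in isolation. The following standalone check assumes the cache object defined above and fakes the timestamps:

// Seed the store with one stale entry (cached two hours ago) and one fresh one.
cache.store['/content/old.html'] = {
  content: new Buffer('stale'),
  timestamp: Date.now() - (2 * 60 * 60 * 1000)
};
cache.store['/content/fresh.html'] = {
  content: new Buffer('fresh'),
  timestamp: Date.now()
};

// cleanedAt starts at 0, so this first call will actually loop the store.
cache.clean(Date.now());

// maxAge is an hour and a half, so only the fresh entry should survive.
console.log(Object.keys(cache.store)); // [ '/content/fresh.html' ]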
See also

The Securing against filesystem hacking exploits recipe discussed in this article

Securing against filesystem hacking exploits

For a Node app to be insecure, there must be something an attacker can interact with for exploitation purposes. Due to Node's minimalist approach, the onus is on the programmer to ensure that their implementation doesn't expose security flaws. This recipe will help identify some security risk anti-patterns that could occur when working with the filesystem.

Getting ready

We'll be working with the same content directory as we did in the previous recipes. But we'll start a new insecure_server.js file (there's a clue in the name!) from scratch to demonstrate mistaken techniques.

How to do it...

Our previous static file recipes tend to use path.basename to acquire a route, but this ignores intermediate paths. If we accessed localhost:8080/foo/bar/styles.css, our code would take styles.css as the basename property and deliver content/styles.css to us. How about we make a subdirectory in our content folder? Call it subcontent and move our script.js and styles.css files into it. We'd have to alter our script and link tags in index.html as follows:

<link rel=stylesheet type=text/css href=subcontent/styles.css>
<script src=subcontent/script.js type=text/javascript></script>

We can use the url module to grab the entire pathname property. So let's include the url module in our new insecure_server.js file, create our HTTP server, and use pathname to get the whole requested path as follows:

var http = require('http');
var url = require('url');
var fs = require('fs');

http.createServer(function (request, response) {
  var lookup = url.parse(decodeURI(request.url)).pathname;
  lookup = (lookup === "/") ? '/index.html' : lookup;
  var f = 'content' + lookup;
  console.log(f);
  fs.readFile(f, function (err, data) {
    response.end(data);
  });
}).listen(8080);

If we navigate to localhost:8080, everything works great! We've gone multilevel, hooray! For demonstration purposes, a few things have been stripped out from the previous recipes (such as fs.exists); but even with them, this code presents the same security hazards if we type the following:

curl localhost:8080/../insecure_server.js

Now we have our server's code. An attacker could also access /etc/passwd with a few attempts at guessing its relative path as follows:

curl localhost:8080/../../../../../../../etc/passwd

If we're using Windows, we can download and install curl from http://curl.haxx.se/download.html. In order to test these attacks, we have to use curl or another equivalent because modern browsers will filter these sorts of requests.

As a solution, what if we added a unique suffix to each file we wanted to serve and made it mandatory for the suffix to exist before the server coughs it up? That way, an attacker couldn't request /etc/passwd or our insecure_server.js file because they wouldn't have the unique suffix. To try this, let's copy the content folder and call it content-pseudosafe, and rename our files to index.html-serve, script.js-serve, and styles.css-serve. Let's create a new server file and name it pseudosafe_server.js. Now all we have to do is make the -serve suffix mandatory as follows:

//requires section ...snip...
http.createServer(function (request, response) {
  var lookup = url.parse(decodeURI(request.url)).pathname;
  lookup = (lookup === "/") ? '/index.html-serve' : lookup + '-serve';
  var f = 'content-pseudosafe' + lookup;
//...snip... rest of the server code...
For feedback purposes, we'll also include some 404 handling with the help of fs.exists as follows: //requires, create server etc fs.exists(f, function (exists) {   if (!exists) {     response.writeHead(404);     response.end('Page Not Found!');     return;   } //read file etc So, let's start our pseudosafe_server.js file and try out the same exploit by executing the following command: curl -i localhost:8080/../insecure_server.js We've used the -i argument so that curl will output the headers. The result? A 404, because the file it's actually looking for is ../insecure_server.js-serve, which doesn't exist. So what's wrong with this method? Well it's inconvenient and prone to error. But more importantly, an attacker can still work around it! Try this by typing the following: curl localhost:8080/../insecure_server.js%00/index.html And voilà! There's our server code again. The solution to our problem is path.normalize, which cleans up our pathname before it gets to fs.readFile as shown in the following code: http.createServer(function (request, response) {   var lookup = url.parse(decodeURI(request.url)).pathname;   lookup = path.normalize(lookup);   lookup = (lookup === "/") ? '/index.html' : lookup;   var f = 'content' + lookup } Prior recipes haven't used path.normalize and yet they're still relatively safe. The path.basename method gives us the last part of the path, thus removing any preceding double dot paths (../) that would take an attacker higher up the directory hierarchy than should be allowed. How it works... Here we have two filesystem exploitation techniques: the relative directory traversal and poison null byte attacks. These attacks can take different forms, such as in a POST request or from an external file. They can have different effects—if we were writing to files instead of reading them, an attacker could potentially start making changes to our server. The key to security in all cases is to validate and clean any data that comes from the user. In insecure_server.js, we pass whatever the user requests to our fs.readFile method. This is foolish because it allows an attacker to take advantage of the relative path functionality in our operating system by using ../, thus gaining access to areas that should be off limits. By adding the -serve suffix, we didn't solve the problem, we put a plaster on it, which can be circumvented by the poison null byte. The key to this attack is the %00 value, which is a URL hex code for the null byte. In this case, the null byte blinds Node to the ../insecure_server.js portion, but when the same null byte is sent through to our fs.readFile method, it has to interface with the kernel. But the kernel gets blinded to the index.html part. So our code sees index.html but the read operation sees ../insecure_server.js. This is known as null byte poisoning. To protect ourselves, we could use a regex statement to remove the ../ parts of the path. We could also check for the null byte and spit out a 400 Bad Request statement. But we don't have to, because path.normalize filters out the null byte and relative parts for us. There's more... Let's further delve into how we can protect our servers when it comes to serving static files. Whitelisting If security was an extreme priority, we could adopt a strict whitelisting approach. In this approach, we would create a manual route for each file we are willing to deliver. Anything not on our whitelist would return a 404 error. 
We can place a whitelist array above http.createServer as follows: var whitelist = [   '/index.html',   '/subcontent/styles.css',   '/subcontent/script.js' ]; And inside our http.createServer callback, we'll put an if statement to check if the requested path is in the whitelist array, as follows: if (whitelist.indexOf(lookup) === -1) {   response.writeHead(404);   response.end('Page Not Found!');   return; } And that's it! We can test this by placing a file non-whitelisted.html in our content directory and then executing the following command: curl -i localhost:8080/non-whitelisted.html This will return a 404 error because non-whitelisted.html isn't on the whitelist. Node static The module's wiki page (https://github.com/joyent/node/wiki/modules#wiki-web-frameworks-static) has a list of static file server modules available for different purposes. It's a good idea to ensure that a project is mature and active before relying upon it to serve your content. The node-static module is a well-developed module with built-in caching. It's also compliant with the RFC2616 HTTP standards specification, which defines how files should be delivered over HTTP. The node-static module implements all the essentials discussed in this article and more. For the next example, we'll need the node-static module. You could install it by executing the following command: npm install node-static The following piece of code is slightly adapted from the node-static module's GitHub page at https://github.com/cloudhead/node-static: var static = require('node-static'); var fileServer = new static.Server('./content'); require('http').createServer(function (request, response) {   request.addListener('end', function () {     fileServer.serve(request, response);   }); }).listen(8080); The preceding code will interface with the node-static module to handle server-side and client-side caching, use streams to deliver content, and filter out relative requests and null bytes, among other things. Summary To learn more about Node.js and creating web servers, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Node Cookbook Second Edition (https://www.packtpub.com/web-development/node-cookbook-second-edition) Node.js Design Patterns (https://www.packtpub.com/web-development/nodejs-design-patterns) Node Web Development Second Edition (https://www.packtpub.com/web-development/node-web-development-second-edition) Resources for Article: Further resources on this subject: Working With Commands And Plugins [article] Node.js Fundamentals And Asynchronous Javascript [article] Building A Movie API With Express [article]
Understanding PHP basics

Packt
17 Feb 2016
27 min read
In this article by Antonio Lopez Zapata, the author of the book Learning PHP 7, you will see that to learn a language you need to understand not only its syntax, but also its grammatical rules, that is, when and why to use each element of the language. Luckily for you, some languages come from the same root. For example, Spanish and French are romance languages as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier.

(For more resources related to this topic, see here.)

Programming languages are quite the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to understand from scratch all the grammatical rules, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics:

PHP in web applications
Control structures
Functions

PHP in web applications

Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy paste what the official documentation says, you might as well go there and read it by yourself. Instead, let's not forget the main purpose of this book and your main goal: to write web applications with PHP. We will show you how you can apply everything you are learning as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of it, but that is just because we still haven't seen all that PHP can do.

Getting information from the user

Let's start by building a home page. In this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out? The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file:

<?php
$looking = isset($_GET['title']) || isset($_GET['author']);
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>Are you looking for a book? <?php echo (int) $looking; ?></p>
    <p>The book you are looking for is</p>
    <ul>
        <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
        <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
    </ul>
</body>
</html>

And now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page is printing some of the information that you passed on to the URL. For each request, PHP stores all the parameters that come from the query string in an array called $_GET. Each key of the array is the name of the parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee and $_GET['title'] contains To Kill a Mockingbird. On the first line of PHP code, we are assigning a Boolean value to the $looking variable. If either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag and then we start printing some HTML, but as you can see, we are actually mixing HTML with PHP code. Another interesting line is the one where we print the content of $looking: before printing it, we cast the value. Casting means forcing PHP to transform a type of value to another one.
Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true or 0 if the Boolean is false. As $looking is true since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information as in http://localhost:8000, the browser will say "Are you looking for a book? 0". Depending on the settings of your PHP configuration, you will see two notice messages complaining that you are trying to access the keys of the array that do not exist. Casting versus type juggling We already knew that when PHP needs a specific type of variable, it will try to transform it, which is called type juggling. But PHP is quite flexible, so sometimes, you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into strings. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0". HTML forms HTML forms are one of the most popular ways to collect information from users. They consist a series of fields called inputs in the HTML world and a final submit button. In HTML, the form tag contains two attributes: the action points, where the form will be submitted and method that specifies which HTTP method the form will use—GET or POST. Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore - Login</title> </head> <body> <p>Enter your details to login:</p> <form action="authenticate.php" method="post"> <label>Username</label> <input type="text" name="username" /> <label>Password</label> <input type="password" name="password" /> <input type="submit" value="Login"/> </form> </body> </html> This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.phpand the web server cannot find it. Let's create it then: <?php $submitted = !empty($_POST); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p>Form submitted? <?php echo (int) $submitted; ?></p> <p>Your login info is</p> <ul> <li><b>username</b>: <?php echo $_POST['username']; ?></li> <li><b>password</b>: <?php echo $_POST['password']; ?></li> </ul> </body> </html> As with $_GET, $_POST is an array that contains the parameters received by POST. In this piece of code, we are first asking whether that array is not empty—note the ! operator. Afterwards, we just display the information received, just as in index.php. Note that the keys of the $_POST array are the values for the name argument of each input field. Control structures So far, our files have been executed line by line. Due to that, we are getting some notices on some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue! A control structure is like a traffic diversion sign. It directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them in conditionals and loops. 
A conditional allows us to choose whether to execute a statement or not. A loop will execute a statement as many times as you need. Let's take a look at each one of them.

Conditionals

A conditional evaluates a Boolean expression, that is, something that returns a value. If the expression is true, it will execute everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works:

<?php
echo "Before the conditional.";
if (4 > 3) {
    echo "Inside the conditional.";
}
if (3 > 4) {
    echo "This will not be printed.";
}
echo "After the conditional.";

In this piece of code, we are using two conditionals. A conditional is defined by the keyword if followed by a Boolean expression in parentheses and by a block of code. If the expression is true, it will execute the block; otherwise, it will skip it. You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example:

if (2 > 3) {
    echo "Inside the conditional.";
} else {
    echo "Inside the else.";
}

This will execute the code inside else as the condition of if was not satisfied. Finally, you can also add an elseif keyword followed by another condition and block of code to continue asking PHP for more conditions. You can add as many elseif as you need after if. If you add else, it has to be the last one of the chain of conditions. Also keep in mind that as soon as PHP finds a condition that resolves to true, it will stop evaluating the rest of the conditions:

<?php
if (4 > 5) {
    echo "Not printed";
} elseif (4 > 4) {
    echo "Not printed";
} elseif (4 == 4) {
    echo "Printed.";
} elseif (4 > 2) {
    echo "Not evaluated.";
} else {
    echo "Not evaluated.";
}
if (4 == 4) {
    echo "Printed";
}

In this last example, the first condition that evaluates to true is 4 == 4. After that, PHP does not evaluate any more conditions until a new if starts. With this knowledge, let's try to clean up a bit of our application, executing statements only when needed. Copy this code to your index.php file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>
    <?php
    if (isset($_COOKIE['username'])) {
        echo "You are " . $_COOKIE['username'];
    } else {
        echo "You are not authenticated.";
    }
    ?>
    </p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?>
        <p>The book you are looking for is</p>
        <ul>
            <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
            <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
        </ul>
    <?php } else { ?>
        <p>You are not looking for a book?</p>
    <?php } ?>
</body>
</html>

In this new code, we are mixing conditionals and HTML code in two different ways. The first one opens a PHP tag and adds an if-else clause that will print whether we are authenticated or not with echo. No HTML is merged within the conditionals, which makes it clear. The second option, the second PHP block, shows an uglier solution, but this is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag; print all the HTML you need and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.

Mixing PHP and HTML

If you feel like the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you have to avoid it by all means.
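A common way to keep the two apart, sketched here with an illustrative $message variable, is to do all the decision-making at the top of the file and leave the markup section to plain echo statements:

<?php
// All logic first: compute plain values...
$message = isset($_COOKIE['username'])
    ? 'You are ' . $_COOKIE['username']
    : 'You are not authenticated.';
?>
<!-- ...so the markup below contains no branching at all -->
<p><?php echo $message; ?></p>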
Let's edit our authenticate.php file too, as it is trying to access $_POST entries that might not be there. The new content of the file would be as follows: <?php $submitted = isset($_POST['username']) && isset($_POST['password']); if ($submitted) { setcookie('username', $_POST['username']); } ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <?php if ($submitted): ?> <p>Your login info is</p> <ul> <li><b>username</b>: <?php echo $_POST['username']; ?></li> <li><b>password</b>: <?php echo $_POST['password']; ?></li> </ul> <?php else: ?> <p>You did not submitted anything.</p> <?php endif; ?> </body> </html> This code also contains conditionals, which we already know. We are setting a variable to know whether we've submitted a login or not and to set the cookies if we have. However, the highlighted lines show you a new way of including conditionals with HTML. This way, tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case. Switch-case Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes the block depending on its value. Let's see an example: <?php switch ($title) { case 'Harry Potter': echo "Nice story, a bit too long."; break; case 'Lord of the Rings': echo "A classic!"; break; default: echo "Dunno that one."; break; } The switch case takes an expression; in this case, a variable. It then defines a series of cases. When the case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds break, it will exit switch-case. In case none of the cases are suitable for the expression, if there is a default case         , PHP will execute it, but this is optional. You also need to know that breaks are mandatory if you want to exit switch-case. If you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example but without breaks: <?php $title = 'Twilight'; switch ($title) { case 'Harry Potter': echo "Nice story, a bit too long."; case 'Twilight': echo 'Uh...'; case 'Lord of the Rings': echo "A classic!"; default: echo "Dunno that one."; } If you test this code in your browser, you will see that it is printing "Uh...A classic!Dunno that one.". PHP found that the second case is valid so it executes its content. But as there are no breaks, it keeps on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it! Loops Loops are control structures that allow you to execute certain statements several times—as many times as you need. You might use them on several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements but you do not know what is in it. You want to print all its elements so you loop through all of them. There are four types of loops. Each of them has their own use cases, but in general, you can transform one type of loop into another. Let's see them closely While While is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see one example: <?php $i = 1; while ($i < 4) { echo $i . " "; $i++; } Here, we are defining a variable with the value 1. Then, we have a while clause in which the expression to evaluate is $i < 4. 
This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we are incrementing the value of $i by 1 each time, so after three iterations, the loop will end. Check out the output of that script, and you will see "1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while was evaluating whether $i < 4, the result was false.

Whiles and infinite loops

One of the most common problems with while loops is creating an infinite loop. If you do not add any code inside while that updates the variables considered in the while expression, so that it can be false at some point, PHP will never exit the loop!

For

This is the most complex of the four loops. For defines an initialization expression, an exit condition, and the end of the iteration expression. When PHP first encounters the loop, it executes what is defined as the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop. After executing everything inside the loop, it executes the end of the iteration expression. Once this is done, it will evaluate the exit condition again, going through the loop code and the end of iteration expression until it evaluates to false. As always, an example will help clarify this:

<?php
for ($i = 1; $i < 10; $i++) {
    echo $i . " ";
}

The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end of the iteration expression is $i++, which is executed at the end of each iteration. This example prints numbers from 1 to 9. Another more common usage of the for loop is with arrays:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
for ($i = 0; $i < count($names); $i++) {
    echo $names[$i] . " ";
}

In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate until the value of $i is not less than the amount of elements in the array (3). On the first iteration, $i is 0; on the second, it will be 1; and on the third, it will be 2. When $i is 3, it will not enter the loop as the exit condition evaluates to false. On each iteration, we are printing the content of the $i position of the array; hence, the result of this code will be all three names in the array.

Be careful with exit conditions

It is very common to set an exit condition that is not exactly what we need, especially with arrays. Remember that arrays start with 0 if they are a list, so an array of 3 elements will have entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code, as when $i is 3, it also satisfies the exit condition and will try to access the key 3, which does not exist.

Foreach

The last, but not least, type of loop is foreach. This loop is exclusive to arrays, and it allows you to iterate an array entirely, even if you do not know its keys. There are two options for the syntax, as you can see in these examples:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
foreach ($names as $name) {
    echo $name . " ";
}
foreach ($names as $key => $name) {
    echo $key . " -> " . $name . " ";
}

The foreach loop accepts an array; in this case, $names. It specifies a variable, which will contain the value of the entry of the array. You can see that we do not need to specify any exit condition, as PHP will know when the array has been iterated.
Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop. Foreach loops are also useful with maps, where the keys are not necessarily numeric. The order in which PHP will iterate the array will be the same order in which you used to insert the content in the array. Let's use some loops in our application. We want to show the available books in our home page. We have the list of books in an array, so we will have to iterate all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php: <?php endif; $books = [ [ 'title' => 'To Kill A Mockingbird', 'author' => 'Harper Lee', 'available' => true, 'pages' => 336, 'isbn' => 9780061120084 ], [ 'title' => '1984', 'author' => 'George Orwell', 'available' => true, 'pages' => 267, 'isbn' => 9780547249643 ], [ 'title' => 'One Hundred Years Of Solitude', 'author' => 'Gabriel Garcia Marquez', 'available' => false, 'pages' => 457, 'isbn' => 9785267006323 ], ]; ?> <ul> <?php foreach ($books as $book): ?> <li> <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?> <?php if (!$book['available']): ?> <b>Not available</b> <?php endif; ?> </li> <?php endforeach; ?> </ul> The highlighted code shows a foreach loop using the : notation, which is better when mixing it with HTML. It iterates all the $books arrays, and for each book, it will print some information as a HTML list. Also note that we have a conditional inside a loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code of your loops as simple as possible. Functions A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP so you do not have to reinvent the wheel, but you can create your own very easily. You can define functions when you identify portions of your application that have to be executed several times or just to encapsulate some functionality. Function declaration Declaring a function means to write it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value is returning. The name of the function has to follow the same rules as variable names; that is, it has to start by a letter or underscore and can contain any letter, number, or underscore. It cannot be a reserved word. Let's see a simple example: function addNumbers($a, $b) { $sum = $a + $b; return $sum; } $result = addNumbers(2, 3); Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable $sum that is the sum of both the arguments and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the highlighted line. PHP does not support overloaded functions. Overloading refers to the ability of declaring two or more functions with the same name but different arguments. As you can see, you can declare the arguments without knowing what their types are, so PHP would not be able to decide which function to use. Another important thing to note is the variable scope. We are declaring a $sum variable inside the block of code, so once the function ends, the variable will not be accessible any more. 
This means that the scope of variables declared inside the function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all since the function cannot access that variable unless we send it as an argument. Function arguments A function gets information from outside via arguments. You can define any number of arguments—including 0. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order as we declared them. A function may contain optional arguments; that is, you are not forced to provide a value for those arguments. When declaring the function, you need to provide a default value for those arguments, so in case the user does not provide a value, the function will use the default one: function addNumbers($a, $b, $printResult = false) { $sum = $a + $b; if ($printResult) { echo 'The result is ' . $sum; } return $sum; } $sum1 = addNumbers(1, 2); $sum1 = addNumbers(3, 4, false); $sum1 = addNumbers(5, 6, true); // it will print the result This new function takes two mandatory arguments and an optional one. The default value is false, and is used as a normal value inside the function. The function will print the result of the sum if the user provides true as the third argument, which happens only the third time that the function is invoked. For the first two times, $printResult is set to false. The arguments that the function receives are just copies of the values that the user provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as sending arguments by a value. Let's see an example: function modify($a) { $a = 3; } $a = 2; modify($a); var_dump($a); // prints 2 We are declaring the $a variable with the value 2, and then we are calling the modify method, sending $a. The modify method modifies the $a argument, setting its value to 3. However, this does not affect the original value of $a, which reminds to 2 as you can see in the var_dump function. If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function: function modify(&$a) { $a = 3; } Now, after invoking the modify function, $a will be always 3. Arguments by value versus by reference PHP allows you to do it, and in fact, some native functions of PHP use arguments by reference—remember the array sorting functions; they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want their provided arguments to be modified. So, try to avoid it; people will be grateful! The return statement You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file: function loginMessage() { if (isset($_COOKIE['username'])) { return "You are " . 
$_COOKIE['username']; } else { return "You are not authenticated."; } } Let's use it in your index.php file by replacing the highlighted content—note that to save some tees, I replaced most of the code that was not changed at all with //…: //... <body> <p><?php echo loginMessage(); ?></p> <?php if (isset($_GET['title']) && isset($_GET['author'])): ?> //... Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code. Type hinting and return types With the release of PHP7, the language allows developers to be more specific about what functions get and return. You can—always optionally—specify the type of argument that the function needs, for example, type hinting, and the type of result the function will return—return type. Let's first see an example: <?php declare(strict_types=1); function addNumbers(int $a, int $b, bool $printSum): int { $sum = $a + $b; if ($printSum) { echo 'The sum is ' . $sum; } return $sum; } addNumbers(1, 2, true); addNumbers(1, '2', true); // it fails when strict_types is 1 addNumbers(1, 'something', true); // it always fails This function states that the arguments need to be an integer, and Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type to its equivalent value of another type, for example, the string 2 can be used as integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive as shown in the first highlighted line. This directive has to be declared on the top of each file, where you want to enforce this behavior. The three invocations work as follows: The first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work. The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP was allowed to use type juggling, the invocation would resolve to just normal. But in this example, it will fail because of the declaration on top of the file. The third invocation will always fail as the something string cannot be transformed into a valid integer. Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates the books and prints them. The code inside the loop is kind of hard to understand as it is mixing HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, create the new functions.php file with the following content: <?php function printableTitle(array $book): string { $result = '<i>' . $book['title'] . '</i> - ' . $book['author']; if (!$book['available']) { $result .= ' <b>Not available</b>'; } return $result; } This file will contain our functions. The first one, printableTitle, takes an array representing a book and builds a string with a nice representation of the book in HTML. The code is the same as before, just encapsulated in a function. Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done: <?php require_once 'functions.php' ?> <!DOCTYPE html> <html lang="en"> //... ?> <ul> <?php foreach ($books as $book): ?> <li><?php echo printableTitle($book); ?> </li> <?php endforeach; ?> </ul> //... Well, now our loop looks way cleaner, right? 
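Before we move on, a quick standalone sketch, separate from our bookstore files, shows how strict_types changes the outcome of a call (the half function is purely illustrative):

<?php
declare(strict_types=1);

function half(int $n): float
{
    return $n / 2; // int to float widening is still allowed under strict types
}

var_dump(half(8));   // float(4)
var_dump(half('8')); // would be coerced without strict_types; throws a TypeError with it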
Also, if we need to print the title of the book somewhere else, we can reuse the function instead of duplicating code! Summary In this article, we went through all the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions and how to get information from HTTP requests among others. Resources for Article: Further resources on this subject: Getting started with Modernizr using PHP IDE[article] PHP 5 Social Networking: Implementing Public Messages[article] Working with JSON in PHP jQuery[article]
Make phone calls and send SMS messages from your website using Twilio

Packt
21 Mar 2014
9 min read
(For more resources related to this topic, see here.) Sending a message from a website Sending messages from a website has many uses; sending notifications to users is one good example. In this example, we're going to present you with a form where you can enter a phone number and message and send it to your user. This can be quickly adapted for other uses. Getting ready The complete source code for this recipe can be found in the Chapter6/Recipe1/ folder. How to do it... Ok, let's learn how to send an SMS message from a website. The user will be prompted to fill out a form that will send the SMS message to the phone number entered in the form. Download the Twilio Helper Library from https://github.com/twilio/twilio-php/zipball/master and unzip it. Upload the Services/ folder to your website. Upload config.php to your website and make sure the following variables are set: <?php $accountsid = ''; // YOUR TWILIO ACCOUNT SID $authtoken = ''; // YOUR TWILIO AUTH TOKEN $fromNumber = ''; // PHONE NUMBER CALLS WILL COME FROM ?> Upload a file called sms.php and add the following code to it: <!DOCTYPE html> <html> <head> <title>Recipe 1 – Chapter 6</title> </head> <body> <?php include('Services/Twilio.php'); include("config.php"); include("functions.php"); $client = new Services_Twilio($accountsid, $authtoken); if( isset($_POST['number']) && isset($_POST['message']) ){ $sid = send_sms($_POST['number'],$_POST['message']); echo "Message sent to {$_POST['number']}"; } ?> <form method="post"> <input type="text" name="number" placeholder="Phone Number...." /><br /> <input type="text" name="message" placeholder="Message...." /><br /> <button type="submit">Send Message</button> </form> </body> </html> Create a file called functions.php and add the following code to it: <?php function send_sms($number,$message){ global $client,$fromNumber; $sms = $client->account->sms_messages->create( $fromNumber, $number, $message ); return $sms->sid; } How it works... In steps 1 and 2, we downloaded and installed the Twilio Helper Library for PHP. This library is the heart of your Twilio-powered apps. In step 3, we uploaded config.php that contains our authentication information to talk to Twilio's API. In steps 4 and 5, we created sms.php and functions.php, which will send a message to the phone number we enter. The send_sms function is handy for initiating SMS conversations; we'll be building on this function heavily in the rest of the article. Allowing users to make calls from their call logs We're going to give your user a place to view their call log. We will display a list of incoming calls and give them the option to call back on these numbers. Getting ready The complete source code for this recipe can be found in the Chapter9/Recipe4 folder. How to do it... Now, let's build a section for our users to log in to using the following steps: Update a file called index.php with the following content: <?php session_start(); include 'Services/Twilio.php'; require("system/jolt.php"); require("system/pdo.class.php"); require("system/functions.php"); $_GET['route'] = isset($_GET['route']) ? 
'/'.$_GET['route'] : '/'; $app = new Jolt('site',false); $app->option('source', 'config.ini'); #$pdo = Db::singleton(); $mysiteURL = $app->option('site.url'); $app->condition('signed_in', function () use ($app) { $app->redirect( $app->getBaseUri().'/login',!$app->store('user')); }); $app->get('/login', function() use ($app){ $app->render( 'login', array(),'layout' ); }); $app->post('/login', function() use ($app){ $sql = "SELECT * FROM `user` WHERE `email`='{$_POST['user']}' AND `password`='{$_POST['pass']}'"; $pdo = Db::singleton(); $res = $pdo->query( $sql ); $user = $res->fetch(); if( isset($user['ID']) ){ $_SESSION['uid'] = $user['ID']; $app->store('user',$user['ID']); $app->redirect( $app->getBaseUri().'/home'); }else{ $app->redirect( $app->getBaseUri().'/login'); } }); $app->get('/signup', function() use ($app){ $app->render( 'register', array(),'layout' ); }); $app->post('/signup', function() use ($app){ $client = new Services_Twilio($app->store('twilio.accountsid'), $app->store('twilio.authtoken') ); extract($_POST); $timestamp = strtotime( $timestamp ); $subaccount = $client->accounts->create(array( "FriendlyName" => $email )); $sid = $subaccount->sid; $token = $subaccount->auth_token; $sql = "INSERT INTO 'user' SET `name`='{$name}',`email`='{$email }',`password`='{$password}',`phone_number`='{$phone_number}',`sid` ='{$sid}',`token`='{$token}',`status`=1"; $pdo = Db::singleton(); $pdo->exec($sql); $uid = $pdo->lastInsertId(); $app->store('user',$uid ); // log user in $app->redirect( $app->getBaseUri().'/phone-number'); }); $app->get('/phone-number', function() use ($app){ $app->condition('signed_in'); $user = $app->store('user'); $client = new Services_Twilio($user['sid'], $user['token']); $app->render('phone-number'); }); $app->post("search", function() use ($app){ $app->condition('signed_in'); $user = get_user( $app->store('user') ); $client = new Services_Twilio($user['sid'], $user['token']); $SearchParams = array(); $SearchParams['InPostalCode'] = !empty($_POST['postal_code']) ? trim($_POST['postal_code']) : ''; $SearchParams['NearNumber'] = !empty($_POST['near_number']) ? trim($_POST['near_number']) : ''; $SearchParams['Contains'] = !empty($_POST['contains'])? 
trim($_ POST['contains']) : '' ; try { $numbers = $client->account->available_phone_numbers->getList('US', 'Local', $SearchParams); if(empty($numbers)) { $err = urlencode("We didn't find any phone numbers by that search"); $app->redirect( $app->getBaseUri().'/phone-number?msg='.$err); exit(0); } } catch (Exception $e) { $err = urlencode("Error processing search: {$e->getMessage()}"); $app->redirect( $app->getBaseUri().'/phone-number?msg='.$err); exit(0); } $app->render('search',array('numbers'=>$numbers)); }); $app->post("buy", function() use ($app){ $app->condition('signed_in'); $user = get_user( $app->store('user') ); $client = new Services_Twilio($user['sid'], $user['token']); $PhoneNumber = $_POST['PhoneNumber']; try { $number = $client->account->incoming_phone_numbers->create(array( 'PhoneNumber' => $PhoneNumber )); $phsid = $number->sid; if ( !empty($phsid) ){ $sql = "INSERT INTO numbers (user_id,number,sid) VALUES('{$u ser['ID']}','{$PhoneNumber}','{$phsid}');"; $pdo = Db::singleton(); $pdo->exec($sql); $fid = $pdo->lastInsertId(); $ret = editNumber($phsid,array( "FriendlyName"=>$PhoneNumber, "VoiceUrl" => $mysiteURL."/voice?id=".$fid, "VoiceMethod" => "POST", ),$user['sid'], $user['token']); } } catch (Exception $e) { $err = urlencode("Error purchasing number: {$e->getMessage()}"); $app->redirect( $app->getBaseUri().'/phone-number?msg='.$err); exit(0); } $msg = urlencode("Thank you for purchasing $PhoneNumber"); header("Location: index.php?msg=$msg"); $app->redirect( $app->getBaseUri().'/home?msg='.$msg); exit(0); }); $app->route('/voice', function() use ($app){ }); $app->get('/transcribe', function() use ($app){ }); $app->get('/logout', function() use ($app){ $app->store('user',0); $app->redirect( $app->getBaseUri().'/login'); }); $app->get('/home', function() use ($app){ $app->condition('signed_in'); $uid = $app->store('user'); $user = get_user( $uid ); $client = new Services_Twilio($user['sid'], $user['token']); $app->render('dashboard',array( 'user'=>$user, 'client'=>$client )); }); $app->get('/delete', function() use ($app){ $app->condition('signed_in'); }); $app->get('/', function() use ($app){ $app->render( 'home' ); }); $app->listen(); Upload a file called dashboard.php with the following content to your views folder: <h2>My Number</h2> <?php $pdo = Db::singleton(); $sql = "SELECT * FROM `numbers` WHERE `user_ id`='{$user['ID']}'"; $res = $pdo->query( $sql ); while( $row = $res->fetch() ){ echo preg_replace("/[^0-9]/", "", $row['number']); } try { ?> <h2>My Call History</h2> <p>Here are a list of recent calls, you can click any number to call them back, we will call your registered phone number and then the caller</p> <table width=100% class="table table-hover tabled-striped"> <thead> <tr> <th>From</th> <th>To</th> <th>Start Date</th> <th>End Date</th> <th>Duration</th> </tr> </thead> <tbody> <?php foreach ($client->account->calls as $call) { # echo "<p>Call from $call->from to $call->to at $call->start_time of length $call->duration</p>"; if( !stristr($call->direction,'inbound') ) continue; $type = find_in_list($call->from); ?> <tr> <td><a href="<?=$uri?>/call?number=<?=urlencode($call->from)?>"><?=$call->from?></a></td> <td><?=$call->to?></td> <td><?=$call->start_time?></td> <td><?=$call->end_time?></td> <td><?=$call->duration?></td> </tr> <?php } ?> </tbody> </table> <?php } catch (Exception $e) { echo 'Error: ' . 
How it works...

In step 1, we updated the index.php file. In step 2, we uploaded dashboard.php to the views folder. This file checks whether we're logged in using the $app->condition('signed_in') method, which we discussed earlier, and if we are, it displays all the incoming calls our account has received. We can then push a button to call one of those numbers back and whitelist or blacklist them.

Summary

In this article, we learned how to send messages and make phone calls from your website using Twilio.

Resources for Article:

Further resources on this subject:

Make phone calls, send SMS from your website using Twilio [article]
Trunks in FreePBX 2.5 [article]
Trunks using 3CX: Part 1 [article]


WebSockets in Wildfly

Packt
30 Dec 2014
22 min read
In this article by Michał Ćmil and Michał Matłoka, authors of Java EE 7 Development with WildFly, we will cover WebSockets, one of the biggest additions in Java EE 7, and explore the new possibilities they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:

JSF polling
Java Messaging Service (JMS) messages
REST requests
Remote EJB requests

All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking if someone else has not booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore deserves a standardized solution that can be applied by developers in multiple projects without much effort.

WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You have probably already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5.

In this article, you will learn the following topics:

How WebSockets work
How to create a WebSocket endpoint in Java EE 7
How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly

(For more resources related to this topic, see here.)

An overview of WebSockets

A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection), codified by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), the peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we have used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price.
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than to create everything from scratch.

What do we get from WebSockets compared to standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, host a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated as a different one every time you make a request. You can, of course, simulate stateful behavior (so that the server recognizes your machine between different requests) using cookies, and increase performance by reusing the same connection for a short period of time for a specific client, but basically these are workarounds to overcome the limitations of the HTTP protocol.

Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it, and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them!

The current solutions that try to simulate real-time data delivery over the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things it wasn't designed for, and each has its own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. Updates are delivered to the user with smaller latency because we only need one round trip through the network to get the desired information (it is pushed by the server immediately).

All of these features make WebSockets a great addition to the Java EE platform, filling the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale as well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.

How WebSockets work

To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol other than HTTP that is accepted by both sides (the client and server).
In WildFly, this allows us to reuse the HTTP port (80/8080) for other protocols and therefore minimise the number of ports that have to be configured.

If the server understands the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can move to the data transfer phase. From now on, the communication is done using only the WebSocket protocol; it is not possible to exchange any HTTP frames over the current connection. The whole life cycle of a connection can be summarized in the following diagram:

A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:

GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]

We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the requested protocol and all the required data is passed by the client, then it responds with the following frame:

HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0

The status code of the response is 101 (Switching Protocols) and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL that is only handled by another protocol, the server can ask the client to do an upgrade request. The server uses the 426 (Upgrade Required) status code in such cases.
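If you want to see this exchange for yourself, the short sketch below (not part of the book's project) opens a raw TCP connection from PHP and replays a request like the one above, printing the server's reply. The host, port, and path are assumptions matching the frames shown, and the script stops right after the handshake instead of implementing the WebSocket data framing:

<?php
// Minimal handshake sketch: connect, send an Upgrade request, and print
// the response headers. An upgrade-capable server should answer with
// "HTTP/1.1 101 Switching Protocols".
$host = 'localhost';
$port = 8080;
$path = '/ticket-agency-websockets/tickets';

// Sec-WebSocket-Key is 16 random bytes, base64-encoded.
$key = base64_encode(random_bytes(16));

$fp = fsockopen($host, $port, $errno, $errstr, 5);
if (!$fp) {
    die("Connection failed: $errstr ($errno)\n");
}

$request = "GET $path HTTP/1.1\r\n"
         . "Host: $host:$port\r\n"
         . "Upgrade: websocket\r\n"
         . "Connection: Upgrade\r\n"
         . "Sec-WebSocket-Key: $key\r\n"
         . "Sec-WebSocket-Version: 13\r\n"
         . "\r\n";
fwrite($fp, $request);

// Read the handshake response headers up to the first blank line.
while (($line = fgets($fp)) !== false && trim($line) !== '') {
    echo $line;
}
fclose($fp);

Running it against a deployed endpoint should print the 101 Switching Protocols block, which confirms that the Upgrade mechanism works before any application code is involved.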
The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSockets will be an order of magnitude faster than REST protocols, simply because there is less data to transmit!

If you are wondering about browser support for WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see it in the following screenshot:

After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!

Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {

    @OnOpen
    public void open(Session session, EndpointConfig conf) throws IOException {
        session.getBasicRemote().sendText("Hi!");
    }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint.

It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During deployment of the application, you can spot information in the WildFly log about endpoint creation, as shown in the following command line:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path on the appropriate protocol.

The second annotation used, @OnOpen, defines the endpoint's behavior when a connection from a client is opened. It's not the only behavior-related annotation of a WebSocket endpoint. Let's look at the following list:

@OnOpen: The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
@OnMessage: This annotation is executed when a message from the client is received. In such a method, you can have a Session parameter and, for example, a String parameter, where the String parameter represents the received message.
@OnError: There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from the standard Session.
@OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.

There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that two-directional communication is easily possible with WebSockets. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message, Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and send, for example, binary messages using your own bandwidth-saving binary protocol. We will present some of these processes in the next example.

Expanding our client application

It's time to show how you can leverage the WebSocket features in real life.
We created the ticket booking application based on the REST API and the AngularJS framework. It was clearly missing one important feature: the application did not show information concerning ticket purchases made by other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it.

In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with a registry-type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:

@Singleton
public class SessionRegistry {

    private final Set<Session> sessions = new HashSet<>();

    @Lock(LockType.READ)
    public Set<Session> getAll() {
        return Collections.unmodifiableSet(sessions);
    }

    @Lock(LockType.WRITE)
    public void add(Session session) {
        sessions.add(session);
    }

    @Lock(LockType.WRITE)
    public void remove(Session session) {
        sessions.remove(session);
    }
}

We could use Collections.synchronizedSet from the standard Java libraries, but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view.

We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry, as shown:

@ServerEndpoint("/tickets")
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
    }

    private String toJson(Seat seat) {
        final JsonObject jsonObject = Json.createObjectBuilder()
                .add("id", seat.getId())
                .add("booked", seat.isBooked())
                .build();
        return jsonObject.toString();
    }
}

Our endpoint is defined at the /tickets address. We injected a SessionRegistry into our endpoint. During @OnOpen, we add sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on a CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int).

In our send method, we retrieve all sessions from SessionRegistry, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object that provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON to a String, which is sent as a text message.

Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones.
However, if we would like to do the same thing, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method.

These are all the changes required on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows:

var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");

ws.onmessage = function (message) {
    var receivedData = message.data;
    var bookedSeat = JSON.parse(receivedData);
    $scope.$apply(function () {
        for (var i = 0; i < $scope.seats.length; i++) {
            if ($scope.seats[i].id === bookedSeat.id) {
                $scope.seats[i].booked = bookedSeat.booked;
                break;
            }
        }
    });
};

The code is very simple. We just create the WebSocket object using the URL to our endpoint, and then we define the onmessage function on that object. During the function's execution, the received message is parsed from JSON into a JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if an ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions and see that when one user buys a ticket, the second user sees almost instantly that the seat's state has changed to booked.

We can enhance our application a little to inform users whether the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose:

ws.onopen = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'info',
            msg: 'Push connection from server is working'
        });
    });
};

ws.onclose = function (event) {
    $scope.$apply(function () {
        $scope.alerts.push({
            type: 'warning',
            msg: 'Error on push connection from server'
        });
    });
};

To inform users about the connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function.

Running the described code results in the notification visible in the following screenshot: However, if the server fails after opening the website, you might get an error as shown in the following screenshot:

Transforming POJOs to JSON

In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath.
The required Maven dependency is as follows:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also versions of the Encoder interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson(); // [1]
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object); // [2]
    }
}

First, we create an instance of Gson in the init method; this action will be executed when the endpoint is created [1]. Next, in the encode method, which is called every time we send an object through the endpoint, we use Gson to produce JSON from the object [2]. This is quite concise when we think about how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before it is created. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders = {JSONEncoder.class}) // [1]
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); // [2]
    }
}

The first change is made on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array [1]. Additionally, we have to pass the endpoint name using the value attribute.

Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we use the getAsyncRemote().sendObject() method [2]. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked.

After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but they may come in handy if you would like to use WebSockets for other types of clients too.
Tyrus (https://tyrus.java.net/) is a reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages.

An alternative to WebSockets

The example we presented in this article could also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to the client over HTTP. It is much simpler than WebSockets but has built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but they are not the only way to push events, so when you need to implement notifications from the server side, keep SSE in mind.

Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available, and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.

Summary

In this article, we managed to introduce a new, low-level type of communication. We presented how it works underneath and how it compares to the SOAP and REST approaches introduced earlier. We also discussed how the new approach changes the development of web applications.

Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very little code change in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of the flexibility that comes with the new version of the Java EE platform.

Resources for Article:

Further resources on this subject:

Various subsystem configurations [Article]
Running our first web application [Article]
Creating Java EE Applications [Article]

Caching in Symfony

Packt
05 Apr 2016
15 min read
In this article by Sohail Salehi, author of the book Mastering Symfony, we are going to discuss performance improvement using cache. Caching is a vast subject and needs its own book to be covered properly. However, in our Symfony project, we are interested in two types of caches only:

Application cache
Database cache

We will see what caching facilities are provided in Symfony by default and how we can use them. We are going to apply the caching techniques on some methods in our projects and watch the performance improvement. By the end of this article, you will have a firm understanding of the usage of HTTP cache headers in the application layer and caching libraries.

(For more resources related to this topic, see here.)

Definition of cache

A cache is a temporary place that stores contents that can be served faster when they are needed. Considering that we already have a permanent place on disk to store our web contents (templates, code, and database tables), a cache sounds like duplicate storage. That is exactly what caches are. They are duplicates, and we need them because, in return for consuming extra space to store the same data, they provide a very fast response to some requests. So this is a very good trade-off between storage and performance.

To give you an example of how good this deal can be, consider the following image. On the left side, we have a usual client/server request/response model; let's say the response latency is two seconds and there are only 100 users who hit the same content per hour:

On the right side, however, we have a cache layer that sits between the client and server. What it does basically is receive the same request and pass it to the server. The server sends a response to the cache and, because this response is new to the cache, it will save a copy (duplicate) of the response and then pass it back to the client. The latency is 2 + 0.2 seconds.

However, it doesn't add up, does it? The purpose of using a cache was to improve the overall performance and reduce latency. It has already added more delay to the cycle. With this result, how could it possibly be beneficial? The answer is in the following image:

Now, with the response being cached, imagine the same request comes through. (We have about 100 requests/hour for the same content, remember?) This time, the cache layer looks into its space, finds the response, and sends it back to the client, without bothering the server. The latency is 0.2 seconds.

Of course, these are only imaginary numbers and situations. However, in the simplest form, this is how a cache works. It might not be very helpful on a low-traffic website; however, when we are dealing with thousands of concurrent users on a high-traffic website, we can appreciate the value of caching.

So, according to the previous images, we can define some terminology and use it in this article as we continue. In the first image, when a client asked for that page, the response did not exist in the cache yet, so the cache layer had to store a copy of its contents for future references. This is called a Cache Miss. However, in the second image, we already had a copy of the contents stored in the cache and we benefited from it. This is called a Cache Hit.

Characteristics of a good cache

If you do a quick search, you will find that a good cache is defined as one that misses only once. In other words, a cache miss happens only if the content has not been requested before. This feature is necessary, but it is not sufficient.
To clarify the situation a little bit, let's add two more terms here. A cache can be in one of the following states: fresh (it has the same contents as the original response) or stale (it has the old response's contents, which have since changed on the server).

The important question here is: for how long should a cache be kept? We have the power to define the freshness of a cache by setting an expiration period. We will see how to do this in the coming sections. However, just because we have this power doesn't mean that we are right about the content's freshness. Consider the situation shown in the following image:

If we cache a content for a long time, a cache miss won't happen again (which satisfies the preceding definition), but the content might lose its freshness according to the dynamic resources that might change on the server. To give you an example, nobody likes to read the news of three months ago when they open the BBC website.

Now, we can modify the definition of a good cache as follows: a cache strategy is considered to be good if a cache miss for the same content happens only once, while the cached contents are still fresh.

This means that defining the cache expiry time won't be enough, and we need another strategy to keep an eye on cache freshness. This happens via a cache validation strategy. When the server sends a response, we can set the validation rules on the basis of what really matters on the server side, and this way, we can keep the contents stored in the cache fresh, as shown in the following image. We will see how to do this in Symfony soon.
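Although Symfony will manage these headers for us, it may help to see what the two models boil down to in plain PHP first. The following sketch is not from the book's project; the validators are derived from the script file itself purely for illustration. It advertises an expiration period and answers conditional requests with 304 Not Modified while the validators still match:

<?php
// Minimal sketch of the expiration and validation models in plain PHP.
$lastModified = gmdate('D, d M Y H:i:s', filemtime(__FILE__)) . ' GMT';
$etag = '"' . md5_file(__FILE__) . '"';

// Expiration model: the response may be reused for 600 seconds.
header('Cache-Control: public, max-age=600');
header('Last-Modified: ' . $lastModified);
header('ETag: ' . $etag);

// Validation model: if the client's stored validators still match,
// send 304 Not Modified and skip regenerating the body. (A real
// implementation would parse the date instead of comparing strings.)
$ifNoneMatch = $_SERVER['HTTP_IF_NONE_MATCH'] ?? null;
$ifModifiedSince = $_SERVER['HTTP_IF_MODIFIED_SINCE'] ?? null;
if ($ifNoneMatch === $etag || $ifModifiedSince === $lastModified) {
    http_response_code(304);
    exit;
}

echo 'fresh content';

A real application would compute $lastModified and $etag from whatever really matters on the server side, such as a row's update timestamp or a hash of the rendered content.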
Caches in a Symfony project

In this article, we will focus on two types of caches: the gateway cache (also called a reverse proxy cache) and the Doctrine cache. As you might have guessed, the gateway cache deals with all of the HTTP cache headers. Symfony comes with a very strong gateway cache out of the box. All you need to do is activate it in your front controller and then start defining your cache expiration and validation strategies inside your controllers.

That said, it does not mean that you are forced or restrained to use the Symfony cache only. If you prefer other reverse proxy cache libraries (for example, Varnish), you are welcome to use them. The caching configuration in Symfony is transparent, such that you don't need to change a single line inside your controllers when you change your caching library. Just modify your config.yml file and you will be good to go.

However, we all know that caching is not for the application layer and views only. Sometimes, we need to cache database-related contents as well. For our Doctrine ORM, this includes the metadata cache, query cache, and result cache. Doctrine comes with its own bundle to handle these types of caches, and it uses a wide range of libraries (APC, Memcached, Redis, and so on) to do the job. Again, we don't need to install anything to use this cache bundle. If we have Doctrine installed already, all we need to do is configure something and then all the Doctrine caching power will be at our disposal.

Putting these two caching types together, we will have a big picture of how to cache our Symfony project:

As you can see in this image, we might have a problem with the final cached page. Imagine that we have a static page that might change once a week, and on this page there are some blocks that might change on a daily or even hourly basis, as shown in the following image. The User dashboard in our project is a good example.

Thus, if we set the expiration on the gateway cache to one week, we cannot reflect all of those rapid updates in our project and task controllers. To solve this problem, we can leverage Edge Side Includes (ESI) inside Symfony. Basically, any part of the page that has been defined inside an ESI tag can tell its own cache story to the gateway cache. Thus, we can have multiple cache strategies living side by side inside a single page. With this solution, our big picture will look as follows:

Thus, we are going to use the default Symfony and Doctrine caching features for the application and model layers, and you can also use some popular third-party bundles for more advanced settings. If you completely understand the caching principles, moving to other caching bundles will be like a breeze.

Key players in the HTTP cache header

Before diving into the Symfony application cache, let's familiarize ourselves with the elements that we need to handle in our cache strategies. To do so, open https://www.wikipedia.org/ in your browser, inspect any resource with the 304 response code, and ponder the request/response headers inside the Network tab:

Among the response elements, there are four cache headers that we are interested in the most: expires and cache-control, which will be used for the expiration model, and etag and last-modified, which will be used for the validation model. Apart from these cache headers, we can have variations of the same cache (compressed/uncompressed) via the Vary header, and we can define a cache as private (accessible by a specific user) or public (accessible by everyone).

Using the Symfony reverse proxy cache

There is no complicated or lengthy procedure required to activate Symfony's gateway cache. Just open the front controller and uncomment the following lines:

// web/app.php
<?php
//...
require_once __DIR__.'/../app/AppKernel.php';
// uncomment this line
require_once __DIR__.'/../app/AppCache.php';

$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();
// and this line
$kernel = new AppCache($kernel);
// ...
?>

Now, the kernel is wrapped in the Application Cache layer, which means that any request coming from the client will pass through this layer first.

Set the expiration for the dashboard page

Log in to your project and click on the Request/Response section in the debug toolbar. Then, scroll down to Response Headers and check the contents:

As you can see, of the cache headers that we are interested in, only cache-control is sitting there, with some default values. When you don't set any value for Cache-Control, Symfony considers the page contents private, to keep them safe.

Now, let's go to the Dashboard controller and add some gateway cache settings to the indexAction() method:

// src/AppBundle/Controller/DashboardController.php
<?php

namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class DashboardController extends Controller
{
    public function indexAction()
    {
        $uId = $this->getUser()->getId();
        $util = $this->get('mava_util');
        $userProjects = $util->getUserProjects($uId);
        $currentTasks = $util->getUserTasks($uId, 'in progress');

        $response = new Response();
        $date = new \DateTime('+2 days');
        $response->setExpires($date);

        return $this->render(
            'CoreBundle:Dashboard:index.html.twig',
            array(
                'currentTasks' => $currentTasks,
                'userProjects' => $userProjects
            ),
            $response
        );
    }
}

You might have noticed that we didn't change the render() method.
Instead, we added the response settings as the third parameter of this method. This is a good solution because we can keep the current template structure, and adding new settings won't require any other changes in the code.

However, you might wonder what other options we have. We can save the whole $this->render() call in a variable and assign the response settings to it as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
// ...
$res = $this->render(
    'AppBundle:Dashboard:index.html.twig',
    array(
        'currentTasks' => $currentTasks,
        'userProjects' => $userProjects
    )
);
$res->setExpires($date);
return $res;
?>

This still looks like a lot of hard work for a simple response header setting, so let me introduce a better option. We can use the @Cache annotation as follows:

// src/AppBundle/Controller/DashboardController.php
<?php

namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Cache;

class DashboardController extends Controller
{
    /**
     * @Cache(expires="next Friday")
     */
    public function indexAction()
    {
        $uId = $this->getUser()->getId();
        $util = $this->get('mava_util');
        $userProjects = $util->getUserProjects($uId);
        $currentTasks = $util->getUserTasks($uId, 'in progress');

        return $this->render(
            'AppBundle:Dashboard:index.html.twig',
            array(
                'currentTasks' => $currentTasks,
                'userProjects' => $userProjects
            ));
    }
}

Have you noticed that the response object is completely removed from the code? With an annotation, all response headers are set internally, which helps keep the original code clean. Now that's what I call zero-fee maintenance. Let's check our response headers in Symfony's debug toolbar and see what they look like:

The good thing about @Cache annotations is that they can be nested. Imagine you have a controller full of actions. You want all of them to have a shared maximum age of half an hour, except one that is supposed to be private and should expire in five minutes. This sounds like a lot of code if you are going to use the response objects directly, but with annotations, it will be as simple as this:

<?php
//...

/**
 * @Cache(smaxage="1800", public="true")
 */
class DashboardController extends Controller
{
    public function firstAction() { //... }

    public function secondAction() { //... }

    /**
     * @Cache(expires="300", public="false")
     */
    public function lastAction() { //... }
}

The annotation defined before the controller class will apply to every single action, unless we explicitly add a new annotation for an action.

Validation strategy

In the previous example, we set the expiry period very long. This means that if a new task is assigned to the user, it won't show up in his dashboard because of the wrong caching strategy. To fix this issue, we can validate the cache before using it. There are two ways to validate:

We can check the content's date via the Last-Modified header: In this technique, we certify the freshness of a content via the time it was modified. In other words, if we keep track of the date and time of each change on a resource, then we can simply compare that date with the cache's date and find out if it is still fresh.

We can use the ETag header as a unique content signature: The other solution is to generate a unique string based on the contents and evaluate the cache's freshness based on its signature.

We are going to try both of them in the Dashboard controller and see them in action. Using the right validation header is totally dependent on the current code.
In some actions, calculating modified dates is way easier than creating a digital footprint, while in others, going through date and time functions might look costly. Of course, there are situations where generating both headers is critical. So the choice is totally dependent on the code base and what you are going to achieve.

As you can see, we have two entities in the indexAction() method and, considering the current code, generating the ETag header looks practical. So the validation header will look as follows:

// src/AppBundle/Controller/DashboardController.php
<?php
//...
class DashboardController extends Controller
{
    /**
     * @Cache(ETag="userProjects ~ finishedTasks")
     */
    public function indexAction()
    {
        //...
    }
}

The next time a request arrives, the cache layer looks into the ETag value in the controller, compares it with its own ETag, and calls the indexAction() method only if there is a difference between the two.

How to mix expiration and validation strategies

Imagine that we want to keep the cache fresh for 10 minutes and simultaneously keep an eye on any changes over user projects or finished tasks. It is obvious that tasks won't finish every 10 minutes, and it is far beyond reality to expect changes in project status during this period. So what we can do to make our caching strategy efficient is combine Expiration and Validation and apply them to the Dashboard controller as follows:

// src/CoreBundle/Controller/DashboardController.php
<?php
//...
/**
 * @Cache(expires="600")
 */
class DashboardController extends Controller
{
    /**
     * @Cache(ETag="userProjects ~ finishedTasks")
     */
    public function indexAction()
    {
        //...
    }
}

Keep in mind that Expiration has a higher priority than Validation. In other words, the cache is fresh for 10 minutes, regardless of the validation status. So when you visit your dashboard for the first time, a new cache plus a 304 response (Not Modified) is generated automatically and you will hit the cache for the next 10 minutes.

However, what happens after 10 minutes is a little different. Now, the expiration status is no longer satisfied; thus, the HTTP flow falls into the validation phase, and in case nothing has happened to the finished tasks or your project status, a new expiration period is generated and you hit the cache again. However, if there is any change in your tasks or project status, then you will hit the server to get the real response, and a new cache from the response's contents, a new expiration period, and a new ETag are generated and stored in the cache layer for future references.

Summary

In this article, you learned about the basics of gateway and Doctrine caching. We saw how to set expiration and validation strategies using HTTP headers such as Cache-Control, Expires, Last-Modified, and ETag. You learned how to set public and private access levels for a cache and use an annotation to define cache rules in the controller.

Resources for Article:

Further resources on this subject:

User Interaction and Email Automation in Symfony 1.3: Part1 [article]
The Symfony Framework – Installation and Configuration [article]
User Interaction and Email Automation in Symfony 1.3: Part2 [article]


Introduction to Moodle 3

Packt
17 Jul 2017
13 min read
In this article, Ian Wild, the author of the book Moodle 3.x Developer's Guide, introduces you to Moodle 3. For any organization considering implementing an online learning environment, Moodle is often the number one choice. Key to its success is the free, open source ethos that underpins it. Not only is the Moodle source code fully available to developers, but Moodle itself has been developed to allow for the inclusion of third-party plugins. Everything from how users access the platform, the kinds of teaching interactions that are available, through to how attendance and success can be reported (in fact, all the key Moodle functionality) can be adapted and enhanced through plugins.

(For more resources related to this topic, see here.)

What is Moodle?

There are three reasons why Moodle has become so important, and much talked about, in the world of online learning: one technical, one philosophical, and the third educational.

From a technical standpoint, Moodle (an acronym for Modular Object Oriented Dynamic Learning Environment) is highly customizable. As a Moodle developer, always remember that the 'M' in Moodle stands for modular. If you are faced with a client feature request that demands a feature Moodle doesn't support, then don't panic. The answer is simple: we create a new custom plugin to implement it. Check out the Moodle Plugins Directory (https://moodle.org/plugins/) for a comprehensive library of supported third-party plugins that Moodle developers have created and given back to the community. And this leads to the philosophical reason why Moodle dominates.

Free open source software for education

Moodle is grounded firmly in a community-based, open source philosophy (see https://en.wikipedia.org/wiki/Open-source_model). But what does this mean for developers? Fundamentally, it means that we have complete access to the source code and, within reason, unfettered access to the people who develop it. Access to the application itself is free; you don't need to pay to download it and you don't need to pay to run it. But be aware of what 'free' means in this context. Hosting and administration, for example, take time and resources and are very unlikely to be free.

As an educational tool, Moodle was developed to support social constructionism (see https://docs.moodle.org/31/en/Pedagogy). If you are not familiar with this concept, it essentially suggests that building an understanding of a concept or idea can be best achieved by interacting with a broad community. The impact on us as Moodle plugin developers is that there is a highly active group of users and developers. Before you begin developing any Moodle plugins, come and join us at https://moodle.org.

Plugin Development – Authentication

In this article, we will be developing a novel plugin that will seamlessly integrate Moodle and the WordPress content management system. Our plugin will authorize users via WordPress when they click on a link to Moodle from a WordPress page. The plugin discussed in this article has already been released to the Moodle community; check out the Moodle Plugins Directory at https://moodle.org/plugins/auth_wordpress for details. Let us start by learning what Moodle authentication is and how new user accounts are created.

Authentication

Moodle supports a range of different authentication methods out of the box, each one supported by its own plugin.
To go to the list of available plugins, from the Administration block, click on Site administration, click Plugins, then click Authentication, and finally click on Manage authentication. The list of currently installed authentication plugins is displayed:

Each plugin interfaces with an internal Application Programming Interface (API), the Access API; see the Moodle developer documentation for details here: https://docs.moodle.org/dev/Access_API

Getting logged in

There are two ways of prompting the Moodle authentication process:

Attempting to log in from the login page.
Clicking on a link to a protected resource (that is, a page or file that you can't view or download without logging in).

For an overview of the process, take a look at the developer documentation at https://docs.moodle.org/dev/Authentication_plugins#Overview_of_Moodle_authentication_process.

After checks to determine if an upgrade is required (or if we are partway through the upgrade process), there is a short fragment of code that loads the configured authentication plugins and for each one calls a special method called loginpage_hook():

$authsequence = get_enabled_auth_plugins(true); // auths, in sequence
foreach($authsequence as $authname) {
    $authplugin = get_auth_plugin($authname);
    $authplugin->loginpage_hook();
}

The loginpage_hook() function gives each authentication plugin the chance to intercept the login. Assuming that the login has not been intercepted, the process then continues with a check to ensure the supplied username conforms to the configured standard before calling authenticate_user_login(), which, if successful, returns a $user object.

OAuth Overview

The OAuth authentication mechanism provides secure delegated access. OAuth supports a number of scenarios, including:

A client requests access from a server and the server responds with either a 'confirm' or 'deny'. This is called two-legged authentication.
A client requests access from a server; the server then pops up a confirmation dialog so that the user can authorize the access, and finally responds with either a 'confirm' or 'deny'. This is called three-legged authentication.

In this article, we will be implementing three-legged authentication, which means, in practice, that:

An authentication server will only talk to configured clients.
No passwords are exchanged between server and client; only tokens are exchanged, which are meaningless on their own.
By default, users need to give permission before resources are accessed.

Having given an overview, here is the process again, described in a little more detail:

1. A new client is configured in the authentication server. The client is allocated a unique client key, along with a secret token (referred to as the client secret).
2. The client POSTs an HTTP request to the server (identifying itself using the client key and client secret) and the server responds with a temporary access token.
3. This token is used to request authorization to access protected resources from the server. In this case, 'protected resources' means the WordPress API. Access to the WordPress API will allow us to determine details of the currently logged-in user.
4. The server responds not with an HTTP response but by POSTing new permanent tokens back to the client via a callback URI (that is, the server talks to the client directly in order to ensure security).
5. The process ends with the client possessing permanent authorization tokens that can be used to access WP-API functions.
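For the curious, this is roughly what "only tokens are exchanged" looks like at the wire level. Every signed OAuth 1.0a request carries an HMAC-SHA1 signature computed from the request itself and the two secrets, so the secrets themselves never travel. The following condensed sketch (with placeholder credentials and example URLs, not the plugin's actual code) shows how such a signature is assembled per the OAuth 1.0a specification:

<?php
// Sketch of an OAuth 1.0a HMAC-SHA1 signature (placeholder credentials).
$consumerSecret = 'CLIENT_SECRET';
$tokenSecret = ''; // still empty while requesting the temporary token

$method = 'POST';
$url = 'http://wordpress.example/oauth1/request';

$params = array(
    'oauth_callback' => 'http://moodle.example/auth/wordpress/callback.php',
    'oauth_consumer_key' => 'CLIENT_KEY',
    'oauth_nonce' => bin2hex(random_bytes(8)),
    'oauth_signature_method' => 'HMAC-SHA1',
    'oauth_timestamp' => time(),
    'oauth_version' => '1.0',
);

// Signature base string: method, URL and the sorted, percent-encoded
// parameters, each of the three parts rawurlencoded.
ksort($params);
$pairs = array();
foreach ($params as $k => $v) {
    $pairs[] = rawurlencode($k) . '=' . rawurlencode($v);
}
$baseString = $method . '&' . rawurlencode($url) . '&' . rawurlencode(implode('&', $pairs));

// The signing key is the consumer secret and token secret joined by '&';
// the signature itself is never part of the base string.
$signingKey = rawurlencode($consumerSecret) . '&' . rawurlencode($tokenSecret);
$params['oauth_signature'] = base64_encode(hash_hmac('sha1', $baseString, $signingKey, true));

In practice the library we use below performs all of this for us; the sketch only illustrates why an eavesdropper who sees a signed request still cannot recover the client secret.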
Obviously, the most effective way of learning about this process is to implement it, so let's go ahead and do that now.

Installing the WordPress OAuth 1.0a server

The first step will be to add the OAuth 1.0a server plugin to WordPress. Why not the more recent OAuth 2.0 server plugin? This is because 2.0 only supports https:// and not http://. Also, internally (at least at the time of writing), WordPress will only authenticate using either OAuth 1.0a or cookies.

Log into WordPress as an administrator and, from the Dashboard, hover the mouse over the Plugins menu item and click on Installed Plugins. The Plugins page is displayed. At the top of the page, press the Add New button:

As described previously, ensure that you install version 1.0a and not 2.0:

Once installed, we need to configure a new client. From the Dashboard menu, hover the mouse over Users and you will see that a new Applications menu item has been added. Click on this to display the Registered Applications page. Click the Add New button to display the Add Application page:

The Consumer Name is the title for our client that will appear in the Applications page, and Description is a brief explanation of that client to aid with identification. The Callback is the URI that WordPress will talk to (refer to the outline of the OAuth authentication steps). As we have not yet developed the Moodle/OAuth client end, you can specify oob in Callback (this stands for 'out of band').

Once configured, WordPress will generate new OAuth credentials: a Client Key and a Client Secret:

Having installed and configured the server end, now it's time to develop the client.

Creating a new Moodle auth plugin

Before we begin, download the finished plugin from https://github.com/iandavidwild/moodle-auth_wordpress and install it in your local development Moodle instance.

The development of a new authentication plugin is described in the developer documentation at https://docs.moodle.org/dev/Authentication_plugins. As described there, let us start by copying the none plugin (the no-login authentication method) and using it as a template for our new plugin. In Eclipse, I'm going to copy the none plugin to a new authentication method called wordpress:

That done, we need to update the occurrences of auth_none to auth_wordpress. Firstly, rename /auth/wordpress/lang/en/auth_none.php to auth_wordpress.php. Then, in auth.php, we need to rename the class auth_plugin_none to auth_plugin_wordpress. As described, the Eclipse Find/Replace function is great for updating scripts:

Next, we need to update the version information in version.php. Update all the relevant names, descriptions, and dates. Finally, we can check that Moodle recognises our new plugin by navigating to the Site administration menu and clicking on Notifications. If installation is successful, our new plugin will be listed on the Available authentication plugins page:

Configuration

Let us start by considering the plugin configuration. We will need to allow a Moodle administrator to configure the following:

The URL of the WordPress installation
The client key and client secret provided by WordPress

There is very little flexibility in the design of an authentication plugin configuration page, so at this stage, rather than creating a wireframe drawing and having it agreed with the client, we can simply go ahead and write the code. The configuration page is defined in /config.html. Remember to start declaring the relevant language strings in /lang/en/auth_wordpress.php.
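The exact markup is up to you, but as a rough, hypothetical sketch (based on the config.html convention used by Moodle's bundled auth plugins), the fragment might contain one table row per setting, with input names matching the fields that process_config() expects:

<?php
// Hypothetical sketch of /auth/wordpress/config.html. Moodle includes
// this fragment inside its own settings form, so only the rows are
// needed; $config holds the currently stored values.
?>
<table cellspacing="0" cellpadding="5" border="0">
  <tr>
    <td align="right"><label for="wordpress_host"><?php print_string('wordpress_host', 'auth_wordpress'); ?></label></td>
    <td><input id="wordpress_host" name="wordpress_host" type="text" size="40"
               value="<?php p($config->wordpress_host); ?>" /></td>
  </tr>
  <tr>
    <td align="right"><label for="client_key"><?php print_string('client_key', 'auth_wordpress'); ?></label></td>
    <td><input id="client_key" name="client_key" type="text" size="40"
               value="<?php p($config->client_key); ?>" /></td>
  </tr>
  <tr>
    <td align="right"><label for="client_secret"><?php print_string('client_secret', 'auth_wordpress'); ?></label></td>
    <td><input id="client_secret" name="client_secret" type="text" size="40"
               value="<?php p($config->client_secret); ?>" /></td>
  </tr>
</table>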
Configuration settings themselves will be managed by the Moodle framework by calling our plugin's process_config() method. Here is the declaration:

/**
 * Processes and stores configuration data for this authentication plugin.
 *
 * @return bool
 */
function process_config($config) {
    // Set to defaults if undefined
    if (!isset($config->wordpress_host)) {
        $config->wordpress_host = '';
    }
    if (!isset($config->client_key)) {
        $config->client_key = '';
    }
    if (!isset($config->client_secret)) {
        $config->client_secret = '';
    }

    set_config('wordpress_host', trim($config->wordpress_host), 'auth/wordpress');
    set_config('client_key', trim($config->client_key), 'auth/wordpress');
    set_config('client_secret', trim($config->client_secret), 'auth/wordpress');

    return true;
}

Having dealt with configuration, let us now start managing the actual OAuth process.

Handling OAuth calls

Rather than go into the details of how we can send HTTP requests to WordPress, let's use a third-party library to do this work. The code I'm going to use is based on Abraham Williams' twitteroauth library (see https://github.com/abraham/twitteroauth). In Eclipse, take a look at the files OAuth.php and BasicOAuth.php for details. To use the library, you will need to add the following lines to the top of /auth/wordpress/auth.php:

require_once($CFG->dirroot . '/auth/wordpress/OAuth.php');
require_once($CFG->dirroot . '/auth/wordpress/BasicOAuth.php');

use OAuth1\BasicOauth;

Let's now start work on handling the Moodle login event.

Handling the Moodle login event

When a user clicks on a link to a protected resource, Moodle calls loginpage_hook() in each enabled authentication plugin. To handle this, let us first implement loginpage_hook(). We need to add the following lines to auth.php:

/**
 * Will get called before the login page is shown.
 *
 */
function loginpage_hook() {
    $client_key = $this->config->client_key;
    $client_secret = $this->config->client_secret;
    $wordpress_host = $this->config->wordpress_host;

    if( (strlen($wordpress_host) > 0) && (strlen($client_key) > 0) && (strlen($client_secret) > 0) ) {
        // kick off the authentication process
        $connection = new BasicOAuth($client_key, $client_secret);

        // strip the trailing slashes from the end of the host URL
        // to avoid any confusion (and to make the code easier to read)
        $wordpress_host = rtrim($wordpress_host, '/');

        $connection->host = $wordpress_host . "/wp-json";
        $connection->requestTokenURL = $wordpress_host . "/oauth1/request";

        // $CFG is brought into scope by the 'global $CFG;' line we add
        // to the very beginning of this function at the end of this article
        $callback = $CFG->wwwroot . '/auth/wordpress/callback.php';
        $tempCredentials = $connection->getRequestToken($callback);

        // Store temporary credentials in the $_SESSION
    }// if
}

This implements the first leg of the authentication process, and the variable $tempCredentials will now contain a temporary access token. We will need to store these temporary credentials and then call on the server to ask the user to authorize the connection (leg two). Add the following lines immediately after the // Store temporary credentials in the $_SESSION comment:

$_SESSION['oauth_token'] = $tempCredentials['oauth_token'];
$_SESSION['oauth_token_secret'] = $tempCredentials['oauth_token_secret'];

$connection->authorizeURL = $wordpress_host . "/oauth1/authorize";

$redirect_url = $connection->getAuthorizeURL($tempCredentials);
header('Location: ' . $redirect_url);
die;

Next, we need to implement the OAuth callback.
Create a new script called callback.php. The callback.php script will need to:

Sanity check the data being passed back from WordPress and fail gracefully if there is an issue
Get the wordpress authentication plugin instance (an instance of auth_plugin_wordpress)
Call on a handler method that will perform the authentication (which we will then need to implement)

The script is simple, short, and available here: https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/callback.php Now, in the auth.php script, we need to implement the callback_handler() method in auth_plugin_wordpress. You can check out the code on GitHub: visit https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/auth.php and scroll down to the callback_handler() method. Lastly, let us add a fragment of code to the loginpage_hook() method that allows us to turn off WordPress authentication in config.php. Add the following just after the global $CFG; declaration at the very beginning of the loginpage_hook() function:

if (isset($CFG->disablewordpressauth) && ($CFG->disablewordpressauth == true)) {
    return;
}

Summary In this article, we introduced the Moodle learning platform, investigated the open source philosophy that underpins it, and saw how Moodle's functionality can be extended and enhanced through plugins. We took a pre-existing plugin and developed from it a new WordPress authentication module, which allows a user logged into WordPress to be logged into Moodle automatically. To do so, we implemented three-legged OAuth 1.0a WordPress-to-Moodle authentication. Check out the complete code at https://github.com/iandavidwild/moodle-auth_wordpress/blob/MOODLE_31_STABLE/callback.php. More information on the plugin described in this article is available from the main Moodle website at https://moodle.org/plugins/auth_wordpress. Resources for Article: Further resources on this subject: Introduction to Moodle [article] Moodle for Online Communities [article] An Introduction to Moodle 3 and MoodleCloud [article]

Introduction to Kohana PHP Framework

Packt
28 Dec 2009
Overview Kohana PHP Framework is an open source PHP software development framework that helps PHP developers build web applications faster and more effectively by providing them with a set of built-in objects/classes. It also enforces highly organized coding standards. Like Ruby on Rails, the Kohana PHP Framework implements the well-known software engineering design pattern Model View Controller (MVC). The Model View Controller design pattern guides engineers to organize their code into three separate parts:

Models: The objects that manipulate data sources and data stores.
Views: The HTML and CSS files with inline PHP code that present the user interface and controls to the application users.
Controllers: Objects in charge of the business logic, displaying the pages (views), and routing the click actions from the views to the model and back to the views.

Kohana was originally based upon the well-documented CodeIgniter PHP framework, but it stands out due to its strict use of OOP best practices and standards. Kohana PHP is officially defined by its creators as a PHP 5 framework that uses the Model View Controller architectural pattern. It aims to be secure, lightweight, and easy to use. Why use Kohana PHP framework? PHP is a very easy-to-use programming language, which is why it is loved by the web development community. Its easy-to-use nature and gentle learning curve have attracted a lot of users. But PHP has one downside: many experienced developers don't love coding in PHP because of its limited object orientation and its often messy code. With Kohana PHP, any PHP developer, from beginner to expert, gets to write the kind of standard code we see in the Java or .NET worlds. Kohana bridges the gap between the amateur web developers and designers who love PHP because it is easy, and the experienced web developers who insist on fully object-oriented code and nothing less. Kohana enforces true software engineering and brings it closer to the PHP world, which has little or no standards. So the major reason to use the Kohana PHP Framework for your PHP work is to ensure that your code meets standards and follows best practices, which go a long way towards helping teams of developers, from beginners to experts, work together easily with the PHP programming language. The Kohana PHP Framework solves the problem of beginner and intermediate developers whose messy code drags down the overall work of the team. With Kohana's well-documented, standard practices, a developer in India can write the code and a developer in Africa can optimise it without asking a lot of questions, because everything will clearly be a model, a view, or a controller, and it will be easy to understand what each piece was meant for. Key features of Kohana PHP framework Kohana PHP has some very good features that make it stand out from other PHP frameworks. The features include: Fully Object Oriented: Object-oriented programming has been an industry standard since the introduction of C++. Many developers working with the Java programming language are comfortable with objects and classes, and they prefer the order they bring to programming. Kohana PHP, being fully object oriented, automatically brings this order and industry-standard programming to PHP; with this, you won't get developers saying you are just a PHP developer, because Kohana PHP is a standard software programming platform for enterprise and web applications.
In built templating: Kohana PHP, like most other standard web frameworks, has a very easy-to-use template engine. You take a simple HTML file, convert it into a Kohana template by renaming it to template.php, and fit in two PHP variables, $title and $content, to signify where the title of the page and the content will be displayed. That's all you need to create a Kohana PHP template. In built Internationalization: Kohana PHP strongly supports internationalization and has good documentation. With Kohana, all you need to do is create the various language files and use an internationalization object to reference the files, depending on the language, from page to page. Switching the language automatically switches the language file referenced. Robust ORM Engine: ORM stands for Object Relational Mapping. It's a programming concept in which your code directly manipulates the database tables as objects, while the SQL is generated for you behind the scenes. The ORM in Kohana is easy to use due to its reliance on convention over configuration. It's the best ORM I have seen in the PHP world and it's 100% stable. To make use of the ORM, you simply extend the ORM class in your model and that model will be able to manipulate your database without any SQL code. MVC: The framework is based on MVC, which brings a lot of order and eases code maintenance and teamwork. I can take any Kohana PHP application and easily understand what every piece of code does, because everything must flow from a controller to a view, or to a model and then to a view. And that's all there is to it. Clean and Search Engine Friendly URLs: The Kohana PHP framework has a good implementation of controllers, where every URL call or reference in a Kohana-based site refers to a controller and one of its functions, as in site/controller/function/parameters. For example, calling a blog controller to show a post with ID 1 is as simple as your-site.com/index.php/blog/showPost/1, where index.php is the front controller, blog is a controller object, and showPost a blog->showPost function that takes a parameter, the post ID (the sketch after this list shows such a controller). Libraries, helpers, and third-party classes: With Kohana PHP, you have a set of libraries and helpers for almost everything you need as a web developer. For instance, sending out an email is as easy as calling the email::send() function. There are many more helpers and libraries; reading about them on the Kohana PHP website will make you feel as if your life as a developer has been taken away and you are just a secretary. Also, with Kohana PHP, you can take any available PHP class from the internet and use it in the Kohana framework by simply dropping the file into a folder in your Kohana site known as vendor, and then instantiating the class in your code. That's all! It sounds too good to be true, but that's all you need to use that PEAR object, that Zend class, or that PHP class you have been using for years with the Kohana framework on your next project.
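To make the MVC and ORM conventions concrete, here is a minimal sketch in the Kohana 2.x style current when this article was written. The Blog controller and Post model are illustrative names (in a real project they would live in separate files under controllers/ and models/), and the view file is assumed to exist at views/blog/post.php:

<?php defined('SYSPATH') or die('No direct script access.');

// Handles URLs such as your-site.com/index.php/blog/showPost/1
class Blog_Controller extends Controller {

    public function showPost($id) {
        // ORM maps the Post_Model below to a "posts" table by convention;
        // no SQL is written anywhere in this sketch.
        $post = ORM::factory('post', (int) $id);

        $view = new View('blog/post'); // loads views/blog/post.php
        $view->title = $post->title;
        $view->content = $post->body;
        $view->render(TRUE); // TRUE sends the output straight to the browser
    }
}

// Extending ORM is all the model needs to manipulate the posts table.
class Post_Model extends ORM {
}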

An Introduction to Moodle 3 and MoodleCloud

Packt
19 Oct 2016
In this article by Silvina Paola Hillar, the author of the book Moodle Theme Development, we will introduce e-learning and virtual learning environments such as Moodle and MoodleCloud, explaining their similarities and differences. Apart from that, we will learn and understand screen resolution and aspect ratio, which is the information we need in order to develop Moodle themes. In this article, we shall learn the following topics: Understanding what e-learning is Learning about virtual learning environments Introducing Moodle and MoodleCloud Learning what Moodle and MoodleCloud are Using Moodle on different devices Sizing the screen resolution Calculating the aspect ratio Learning about sharp and soft images Learning about crisp and sharp text Understanding what anti-aliasing is Understanding what e-learning is E-learning is electronic learning, meaning that it is not traditional learning in a classroom with a teacher and students, plus the board. E-learning involves using a computer to deliver classes or a course. When delivering classes or a course, there is online interaction between the student and the teacher. There might also be some offline activities, when a student is asked to create a piece of writing or something else. Another option is that there are collaboration activities involving the interaction of several students and the teacher. When creating course content, there is the option of video conferencing as well. So there is virtual face-to-face interaction within the e-learning process. The time and the date should be set beforehand. In this way, e-learning tries to imitate traditional learning so as not to lose human contact or social interaction. The course may be full distance or not. If the course is full distance, there is online interaction only. All the resources and activities are delivered online and there might be some interaction through messages, chats, or emails between the student and the teacher. If the course is not full distance, and is delivered face to face but involving the use of computers, we are referring to blended learning. Blended learning means using e-learning within the classroom, and is a mixture of traditional learning and computers. The usage of blended learning with little children is very important, because they get the social element, which is essential at a very young age. Apart from that, they also come into contact with technology while they are learning. It is advisable to use interactive whiteboards (IWBs) at an early stage. IWBs are the right tool to choose when dealing with blended learning. IWBs are motivational gadgets, which are prominent in integrating technology into the classroom. IWBs are considered a symbol of innovation and a key element of teaching students. IWBs offer interactive projections for class demonstrations; we can usually project resources from computer software as well as from our Moodle platform. Students can interact with them by touching or writing on them, that is to say through blended learning. Apart from that, teachers can make presentations on different topics within a subject and these topics become much more interesting and captivating for students, since IWBs allow changes to be made and we can insert interactive elements into the presentation of any subject. There are several types of technology used in IWBs, such as touch technology, laser scanning, and electromagnetic writing tools.
Therefore, we have to bear in mind which to choose when we get an IWB. On the other hand, the widespread use of mobile devices nowadays has turned e-learning into mobile learning. Smartphones and tablets allow students to learn anywhere at any time. Therefore, it is important to design course material that is usable by students on such devices. Moodle is a learning platform through which we can design, build, and create e-learning environments. It is possible to create online interaction and have video conferencing sessions with students. Distance learning is another option if blended learning cannot be carried out. We can also choose Moodle Mobile. We can download the app from the App Store, Google Play, Windows Store, or Windows Phone Store. We can browse the content of courses, receive messages, contact people from the courses, upload different types of file, and view course grades, among other actions. Learning about Virtual Learning Environments A Virtual Learning Environment (VLE) is a type of virtual environment that supports both resources and learning activities; therefore, students can have both passive and active roles. There is also social interaction, which can take place through collaborative work as well as video conferencing. Students can also be actors, since they can also construct the VLE. VLEs can be used for both distance and blended learning, since they can enrich courses. Mobile learning is also possible because mobile devices have access to the Internet, allowing teachers and students to log in to their courses. VLEs are designed in such a way that they can carry out the following functions or activities:

Design, create, store, access, and use course content
Deliver or share course content
Communicate, interact, and collaborate between students and teachers
Assess and personalize the learning experience
Modularize both activities and resources
Customize the interface

We are going to deal with each of these functions and activities and see how useful they might be when designing our VLE for our class. When using Moodle, we can perform all the functions and activities mentioned here, because Moodle is a VLE. Design, create, store, access and use course content If we use the Moodle platform to create the course, we have to deal with course content. Therefore, when we add a course, we have to add its content. We can choose the weekly outline section or the topic under which we want to add the content. We click on Add an activity or resource and two options appear, resources and activities; therefore, the content can be passive or active for the student. When we create or design activities in Moodle, the options are shown in the following screenshot: Another option for creating course content is to reuse content that has already been created and used before in another VLE. In other words, we can import or export course materials, since most VLEs have specific tools designed for such purposes. This is very useful and saves time. There are a variety of ways for teachers to create course materials, due to the fact that the teacher thinks of the methodology, as well as how to meet the students' needs, when creating the course. Moodle is designed in such a way that it offers a variety of combinations that can fit any course content. Deliver or share course content Before using VLEs, we have to log in, because all the content is protected and is not open to the general public. In this way, we can protect property rights, as well as the course itself.
All participants must be enrolled in the course unless it has been opened to the public. Teachers can gain remote access in order to create and design their courses. This is quite convenient since they can build the content at home, rather than in their workplace. They need login access and they need to switch roles to course creator in order to create the content. Follow these steps to switch roles to course creator: Under Administration, click on Switch role to… | Course creator, as shown in the following screenshot: When the role has been changed, the teacher can create content that students can access. Once logged in, students have access to the already created content, either activities or resources. The content is available over the Internet or the institution's intranet connection. Students can access the content anywhere if any of these connections are available. If MoodleCloud is being used, there must be an Internet connection, otherwise it is impossible for both students and teachers to log in. Communicate, interact, and collaborate among students and teachers Communication, interaction, and collaborative working are key factors in social interaction and learning through interchanging ideas. VLEs let us create course content activities that support them, because this kind of interaction is elemental for our class. There is no need to be an isolated learner, because learners have the ability to communicate between themselves and with the teachers. Moodle offers the possibility of video conferencing through the Big Blue Button. In order to install the Big Blue Button plugin in Moodle, visit the following link: https://moodle.org/plugins/browse.php?list=set&id=2. This is shown in the following screenshot: If you are using MoodleCloud, the Big Blue Button is enabled by default, so when we click on Add an activity or resource it appears in the list of activities, as shown in the following screenshot: Assess and personalize the learning experience Moodle allows the teacher to follow the progress of students so that they can assess and grade their work, as long as they complete the activities. Resources cannot be graded, since they are passive content for students, but teachers can also check when a participant last accessed the site. Badges are another element used to personalize the learning experience. We can create badges for students when they complete an activity or a course; they are homework rewards. Badges are quite good at motivating young learners. Modularize both activities and resources Moodle offers the ability to build personalized activities and resources. There are several ways to present both, with all the options Moodle offers. Activities can be molded according to the methodology the teacher uses. In Moodle 3, there are new question types within the Quiz activity. The question types are as follows:

Select missing words
Drag and drop into text
Drag and drop onto image
Drag and drop markers

The question types are shown after we choose Quiz in the Add an activity or resource menu, in the weekly outline section or topic that we have chosen. The types of question are shown in the following screenshot: Customize the interface Moodle allows us to customize the interface in order to develop the look and feel that we require; we can add a logo for the school or institution that the Moodle site belongs to. We can also add another theme relevant to the subject or course that we have created.
The main purpose in customizing the interface is to avoid all subjects and courses looking the same. Later in the article, we will learn how to customize the interface. Learning Moodle and MoodleCloud Modular Object-Oriented Dynamic Learning Environment (Moodle) is a learning platform designed in such a way that we can create VLEs. Moodle can be downloaded, installed, and run on any web server that supports Hypertext Preprocessor (PHP) and an SQL database, and it can run on several operating systems. We can download Moodle 3.0.3 from the following URL: https://download.moodle.org/. This URL is shown in the following screenshot: MoodleCloud, on the other hand, does not need to be downloaded since, as its name suggests, it is hosted in the cloud. Therefore, we can get our own Moodle site with MoodleCloud within minutes and for free. It is Moodle's hosting platform, designed and run by the people who make Moodle. In order to get a MoodleCloud site, we need to go to the following URL: https://moodle.com/cloud/. This is shown in the following screenshot: MoodleCloud was created in order to cater for users with fewer requirements and small budgets. In order to create an account, you need to add your cell phone number to receive an SMS, which must be input when creating your site. As it is free, there are some limitations to MoodleCloud, unless we contact Moodle Partners and pay for an expanded version of it. The limitations are as follows:

No more than 50 users
200 MB disk space
Core themes and plugins only
One site per phone number
Big Blue Button sessions are limited to 6 people, with no recordings
There are advertisements

When creating a Moodle site, we want to change the look and functionality of the site or individual course. We may also need to customize themes for Moodle, in order to give the course the desired look. Therefore, this article will explain the basic concepts that we have to bear in mind when dealing with themes, due to the fact that themes are shown on different devices. In the past, Moodle ran only on desktops or laptops, but nowadays Moodle can run on many different devices, such as smartphones, tablets, iPads, and smart TVs, and the list goes on. Using Moodle on different devices Moodle can be used on different devices, at different times, in different places. Therefore, there are factors that we need to be aware of when designing courses and themes. The rest of this article explores several aspects and concepts we need to understand in order to take them into account when we design our courses and build our themes. Devices change in many ways, not only in size but also in the way they display our Moodle course. Moodle courses can be used on anything from a tiny device that fits into the palm of a hand to a huge IWB or smart TV, and plenty of other devices in between. Therefore, such differences have to be taken into account when choosing images, text, and other components of our course. We are going to deal with sizing the screen resolution, calculating the aspect ratio, types of images such as sharp and soft, and crisp and sharp text. Finally, but importantly, the anti-aliasing method is explained. Sizing the screen resolution The number of pixels the display has horizontally and vertically, together with the color depth (the number of bits representing the color of each pixel), makes up the screen resolution. The higher the screen resolution, the higher the productivity we get.
In the past, the screen resolution of a display was important since it determined the amount of information displayed on the screen. The lower the resolution, the fewer items would fit on the screen; the higher the resolution, the more items would fit on the screen. The resolution varies according to the hardware in each device. Nowadays, screen resolution is about a pleasant visual experience, since we would rather see more quality than more stuff on the screen. That is the reason why the screen resolution matters. There might be different display sizes where the screen resolutions are the same, that is to say, the total number of pixels is the same. If we compare a laptop (13'' screen with a resolution of 1280 x 800) and a desktop (with a 17'' monitor with the same 1280 x 800 resolution), although the monitor is larger, the number of pixels is the same; the only difference is that we will be able to see everything bigger on the monitor. Therefore, instead of seeing more stuff, we see higher quality. Screen resolution chart

Code      Width   Height   Ratio     Description
QVGA      320     240      4:3       Quarter Video Graphics Array
FHD       1920    1080     ~16:9     Full High Definition
HVGA      640     240      8:3       Half Video Graphics Array
HD        1360    768      ~16:9     High Definition
HD        1366    768      ~16:9     High Definition
HD+       1600    900      ~16:9     High Definition plus
VGA       640     480      4:3       Video Graphics Array
SVGA      800     600      4:3       Super Video Graphics Array
XGA       1024    768      4:3       Extended Graphics Array
XGA+      1152    768      3:2       Extended Graphics Array plus
XGA+      1152    864      4:3       Extended Graphics Array plus
SXGA      1280    1024     5:4       Super Extended Graphics Array
SXGA+     1400    1050     4:3       Super Extended Graphics Array plus
UXGA      1600    1200     4:3       Ultra Extended Graphics Array
QXGA      2048    1536     4:3       Quad Extended Graphics Array
WXGA      1280    768      5:3       Wide Extended Graphics Array
WXGA      1280    720      ~16:9     Wide Extended Graphics Array
WXGA      1280    800      16:10     Wide Extended Graphics Array
WXGA      1366    768      ~16:9     Wide Extended Graphics Array
WXGA+     1280    854      3:2       Wide Extended Graphics Array plus
WXGA+     1440    900      16:10     Wide Extended Graphics Array plus
WXGA+     1440    960      3:2       Wide Extended Graphics Array plus
WQHD      2560    1440     ~16:9     Wide Quad High Definition
WQXGA     2560    1600     16:10     Wide Quad Extended Graphics Array
WSVGA     1024    600      ~17:10    Wide Super Video Graphics Array
WSXGA     1600    900      ~16:9     Wide Super Extended Graphics Array
WSXGA     1600    1024     16:10     Wide Super Extended Graphics Array
WSXGA+    1680    1050     16:10     Wide Super Extended Graphics Array plus
WUXGA     1920    1200     16:10     Wide Ultra Extended Graphics Array
WQXGA     2560    1600     16:10     Wide Quad Extended Graphics Array
WQUXGA    3840    2400     16:10     Wide Quad Ultra Extended Graphics Array
4K UHD    3840    2160     16:9      Ultra High Definition
4K UHD    1536    864      16:9      Ultra High Definition

Considering that 3840 x 2160 displays (also known as 4K, QFHD, Ultra HD, UHD, or 2160p) are already available for laptops and monitors, a pleasant visual experience with high DPI displays can be a good long-term investment for your desktop applications. The DPI setting for the monitor causes another common problem: a change in the effective resolution. Consider a 13.3" display that offers a 3200 x 1800 resolution and is configured with an OS DPI of 240. The high DPI setting makes the system use both larger fonts and UI elements; therefore, the elements consume more pixels to render than the same elements displayed at a resolution configured with an OS DPI of 96. The effective resolution of a display that provides 3200 x 1800 pixels configured at 240 DPI is 1280 x 720; the short helper below illustrates the calculation.
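Since Moodle itself is written in PHP, here is a small illustrative helper (not part of Moodle or of any library) that performs this effective-resolution calculation:

<?php
// Effective resolution = physical pixels / scale factor, where the
// scale factor is the OS DPI divided by the 96 DPI baseline.
function effective_resolution($width, $height, $dpi) {
    $scale = $dpi / 96;
    return array((int) round($width / $scale), (int) round($height / $scale));
}

list($w, $h) = effective_resolution(3200, 1800, 240);
echo $w . ' x ' . $h; // prints "1280 x 720"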
The effective resolution can become a big problem because an application that requires a minimum resolution of the old standard 1024 x 768 pixels with an OS DPI of 96 would have problems with a 3200 x 1800-pixel display configured at 240 DPI, and it wouldn't be possible to display all the necessary UI elements. It may sound crazy, but the effective vertical resolution is 720 pixels, lower than the 768 vertical pixels required by the application to display all the UI elements without problems. The formula to calculate the effective resolution is simple: divide the physical pixels by the scale factor (OS DPI / 96). For example, the following formula calculates the horizontal effective resolution of my previous example: 3200 / (240 / 96) = 3200 / 2.5 = 1280; and the following formula calculates the vertical effective resolution: 1800 / (240 / 96) = 1800 / 2.5 = 720. The effective resolution would be 1600 x 900 pixels if the same physical resolution were configured at 192 DPI. Effective horizontal resolution: 3200 / (192 / 96) = 3200 / 2 = 1600; and vertical effective resolution: 1800 / (192 / 96) = 1800 / 2 = 900. Calculating the aspect ratio The aspect ratio is the proportional relationship between the width and the height of an image. It is used to describe the shape of a computer screen or a TV. The aspect ratio of a standard-definition (SD) screen is 4:3, that is to say, a relatively square rectangle. The aspect ratio is often expressed in W:H format, where W stands for width and H stands for height. 4:3 means four units wide to three units high. With regards to high-definition TVs (HDTVs), they have a 16:9 ratio, which is a wider rectangle. Why do we calculate the aspect ratio? The answer is that the ratio has to be well defined because every frame, digital video, canvas, image, or responsive design is rectangular, and a well-defined ratio is what lets those shapes fit into different and distinct devices. Learning about sharp and soft images Images can be either sharp or soft. Sharp is the opposite of soft. A soft image has less pronounced details, while a sharp image has more contrast between pixels. The more pixels the image has, the sharper it is. We can soften an image, in which case it loses information, but we cannot sharpen one; in other words, we can't add more information to an image. In order to compare sharp and soft images, we can visit the following website, where we can convert bitmaps to vector graphics. We can convert bitmap images such as .png, .jpeg, or .gif into a .svg in order to get an anti-aliased image. We can do this in a single step. We work with an online tool to vectorize the bitmap using http://vectormagic.com/home. There are plenty of features to take into account when vectorizing. We can design a bitmap using an image editor and upload the bitmap image from the clipboard, or upload the file from our computer. Once the image is uploaded to the application, we can start working. Another possibility is to use the sample images on the website, which we are going to use in order to see the anti-aliasing effect. We convert bitmap images, which are made up of pixels, into vector images, which are made up of shapes. The shapes are mathematical descriptions of images and do not become pixelated when scaling up. Vector graphics can handle scaling without any problems. Vector images are the preferred type to work with in graphic design on paper or clothes.
Go to http://vectormagic.com/home and click on Examples, as shown in the following screenshot: After clicking on Examples, the bitmap appears on the left and the vectorized image on the right. The bitmap is blurred and soft; the SVG has an anti-aliasing effect, therefore the image is sharp. The result is shown in the following screenshot: Learning about crisp and sharp text There are sharp and soft images, and there is also crisp and sharp text, so it is now time to look at text. What is the main difference between these two? When we say that text is crisp, we mean that there is more anti-aliasing, in other words it has more grey pixels around the black text. The difference is shown when we zoom in to 400%. On the other hand, sharp mode is superior for small fonts because it makes each letter stronger. There are four options in Photoshop to deal with text: sharp, crisp, strong, and smooth. Sharp and crisp have already been mentioned in the previous paragraphs. Strong is notorious for adding unnecessary weight to letter forms, and smooth looks closest to the untinted anti-aliasing, and it remains similar to the original. Understanding what anti-aliasing is Anti-aliasing is the technique used to minimize distortion artifacts. It applies intermediate colors in order to smooth out the saw-tooth, or pixelated, lines. Therefore, we need to take care with lower resolutions so that the saw-tooth effect does not appear when we make the graphic bigger. Test your knowledge Before we delve deeper into more content, let's test your knowledge about all the information that we have dealt with in this article:

1. Moodle is a learning platform with which…
   We can design, build, and create e-learning environments.
   We can learn.
   We can download content for students.
2. BigBlueButtonBN…
   Is a way to log in to Moodle.
   Lets you create links to real-time online classrooms from within Moodle.
   Works only in MoodleCloud.
3. MoodleCloud…
   Is not open source.
   Does not allow more than 50 users.
   Works only for universities.
4. The number of pixels the display of the device has horizontally and vertically, and the color depth measuring the number of bits representing the color of each pixel, make up…
   Screen resolution.
   Aspect ratio.
   Size of device.
5. Anti-aliasing can be applied to…
   Only text.
   Only images.
   Both images and text.

Summary In this article, we have covered most of what needs to be known about e-learning, VLEs, and Moodle and MoodleCloud. There is a slight difference between Moodle and MoodleCloud, especially if you don't have access to a Moodle course in the institution where you are working and want to design a Moodle course. Moodle is used on different devices and there are several aspects to take into account when designing a course and building a Moodle theme. We have dealt with screen resolution, aspect ratio, types of images and text, and anti-aliasing effects. Resources for Article: Further resources on this subject: Listening Activities in Moodle 1.9: Part 2 [article] Gamification with Moodle LMS [article] Adding Graded Activities [article]

Relational Databases with SQLAlchemy

Packt
02 Nov 2015
In this article by Matthew Copperwaite, author of the book Learning Flask Framework, he talks about how relational databases are the bedrock upon which almost every modern web application is built. Learning to think about your application in terms of tables and relationships is one of the keys to a clean, well-designed project. We will be using SQLAlchemy, a powerful object relational mapper that allows us to abstract away the complexities of multiple database engines, to work with the database directly from within Python. In this article, we shall:

Present a brief overview of the benefits of using a relational database
Introduce SQLAlchemy, the Python SQL Toolkit and Object Relational Mapper
Configure our Flask application to use SQLAlchemy
Write a model class to represent blog entries
Learn how to save and retrieve blog entries from the database
Perform queries: sorting, filtering, and aggregation
Create schema migrations using Alembic

Why use a relational database? Our application's database is much more than a simple record of things that we need to save for future retrieval. If all we needed to do was save and retrieve data, we could easily use flat text files. The fact is, though, that we want to be able to perform interesting queries on our data. What's more, we want to do this efficiently and without reinventing the wheel. While non-relational databases (sometimes known as NoSQL databases) are very popular and have their place in the world of the web, relational databases long ago solved the common problems of filtering, sorting, aggregating, and joining tabular data. Relational databases allow us to define sets of data in a structured way that maintains the consistency of our data. Using relational databases also gives us, the developers, the freedom to focus on the parts of our app that matter. In addition to efficiently performing ad hoc queries, a relational database server will also do the following:

Ensure that our data conforms to the rules set forth in the schema
Allow multiple people to access the database concurrently, while at the same time guaranteeing the consistency of the underlying data
Ensure that data, once saved, is not lost even in the event of an application crash

Relational databases and SQL, the programming language used with relational databases, are topics worthy of an entire book. Because this book is devoted to teaching you how to build apps with Flask, I will show you how to use a tool that has been widely adopted by the Python community for working with databases, namely, SQLAlchemy. SQLAlchemy abstracts away many of the complications of writing SQL queries, but there is no substitute for a deep understanding of SQL and the relational model. For that reason, if you are new to SQL, I would recommend that you check out the colorful book Learn SQL the Hard Way by Zed Shaw, available online for free at http://sql.learncodethehardway.org/. Introducing SQLAlchemy SQLAlchemy is an extremely powerful library for working with relational databases in Python. Instead of writing SQL queries by hand, we can use normal Python objects to represent database tables and execute queries. There are a number of benefits to this approach, which are listed as follows: Your application can be developed entirely in Python. Subtle differences between database engines are abstracted away.
This allows you to do things like use a lightweight database such as SQLite for local development and testing, then switch to a database designed for high loads (such as PostgreSQL) in production. Database errors are less common because there are now two layers between your application and the database server: the Python interpreter itself (which will catch the obvious syntax errors), and SQLAlchemy, which has well-defined APIs and its own layer of error-checking. Your database code may become more efficient, thanks to SQLAlchemy's unit-of-work model, which helps reduce unnecessary round-trips to the database. SQLAlchemy also has facilities for efficiently pre-fetching related objects, known as eager loading. Object Relational Mapping (ORM) makes your code more maintainable, an aspiration known as don't repeat yourself (DRY). Suppose you add a column to a model. With SQLAlchemy it will be available whenever you use that model. If, on the other hand, you had hand-written SQL queries strewn throughout your app, you would need to update each query, one at a time, to ensure that you were including the new column. SQLAlchemy can help you avoid SQL injection vulnerabilities. Excellent library support: There are a multitude of useful libraries that can work directly with your SQLAlchemy models to provide things like maintenance interfaces and RESTful APIs. I hope you're excited after reading this list. If all the items in this list don't make sense to you right now, don't worry. Now that we have discussed some of the benefits of using SQLAlchemy, let's install it and start coding. If you'd like to learn more about SQLAlchemy, there is an article devoted entirely to its design in The Architecture of Open-Source Applications, available online for free at http://aosabook.org/en/sqlalchemy.html. Installing SQLAlchemy We will use pip to install SQLAlchemy into the blog app's virtualenv. To activate your virtualenv, change directories and source the activate script as follows:

$ cd ~/projects/blog
$ source bin/activate
(blog) $ pip install sqlalchemy
Downloading/unpacking sqlalchemy
…
Successfully installed sqlalchemy
Cleaning up...

You can check if your installation succeeded by opening a Python interpreter and checking the SQLAlchemy version; note that your exact version number is likely to differ.

$ python
>>> import sqlalchemy
>>> sqlalchemy.__version__
'0.9.0b2'

Using SQLAlchemy in our Flask app SQLAlchemy works very well with Flask on its own, but the author of Flask has released a special Flask extension named Flask-SQLAlchemy that provides helpers for many common tasks, and can save us from having to re-invent the wheel later on. Let's use pip to install this extension:

(blog) $ pip install flask-sqlalchemy
…
Successfully installed flask-sqlalchemy

Flask provides a standard interface for developers who are interested in building extensions. As the framework has grown in popularity, the number of high-quality extensions has increased. If you'd like to take a look at some of the more popular extensions, there is a curated list available on the Flask project website at http://flask.pocoo.org/extensions/. Choosing a database engine SQLAlchemy supports a multitude of popular database dialects, including SQLite, MySQL, and PostgreSQL. Depending on the database you would like to use, you may need to install an additional Python package containing a database driver. Listed next are several popular databases supported by SQLAlchemy and the corresponding pip-installable drivers.
Some databases have multiple driver options, so I have listed the most popular one first.

Database               Driver Package(s)
SQLite                 Not needed, part of the Python standard library since version 2.5
MySQL                  MySQL-python, PyMySQL (pure Python), OurSQL
PostgreSQL             psycopg2
Firebird               fdb
Microsoft SQL Server   pymssql, PyODBC
Oracle                 cx-Oracle

SQLite comes as standard with Python and does not require a separate server process, so it is perfect for getting up and running quickly. For simplicity in the examples that follow, I will demonstrate how to configure the blog app for use with SQLite. If you have a different database in mind that you would like to use for the blog project, feel free to use pip to install the necessary driver package at this time. Connecting to the database Using your favorite text editor, open the config.py module for our blog project (~/projects/blog/app/config.py). We are going to add an SQLAlchemy-specific setting to instruct Flask-SQLAlchemy how to connect to our database. The new setting is the SQLALCHEMY_DATABASE_URI line in the following:

class Configuration(object):
    APPLICATION_DIR = current_directory
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///%s/blog.db' % APPLICATION_DIR

The SQLALCHEMY_DATABASE_URI is composed of the following parts: dialect+driver://username:password@host:port/database Because SQLite databases are stored in local files, the only information we need to provide is the path to the database file. On the other hand, if you wanted to connect to PostgreSQL running locally, your URI might look something like this: postgresql://postgres:secretpassword@localhost:5432/blog_db If you're having trouble connecting to your database, try consulting the SQLAlchemy documentation on database URIs: http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html Now that we've specified how to connect to the database, let's create the object responsible for actually managing our database connections. This object is provided by the Flask-SQLAlchemy extension and is conveniently named SQLAlchemy. Open app.py and make the following additions:

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)

These changes instruct our Flask app, and in turn SQLAlchemy, how to communicate with our application's database. The next step will be to create a table for storing blog entries and, to do so, we will create our first model. Creating the Entry model A model is the data representation of a table of data that we want to store in the database. These models have attributes called columns that represent the data items in the data. So, if we were creating a Person model, we might have columns for storing the first and last name, date of birth, home address, hair color, and so on. Since we are interested in creating a model to represent blog entries, we will have columns for things like the title and body content. Note that we don't say a People model or Entries model: models are singular even though they commonly represent many different objects. With SQLAlchemy, creating a model is as easy as defining a class and specifying a number of attributes assigned to that class. Let's start with a very basic model for our blog entries.
Create a new file named models.py in the blog project's app/ directory and enter the following code:

import datetime, re

from app import db


def slugify(s):
    return re.sub('[^\w]+', '-', s).lower()


class Entry(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    modified_timestamp = db.Column(
        db.DateTime,
        default=datetime.datetime.now,
        onupdate=datetime.datetime.now)

    def __init__(self, *args, **kwargs):
        super(Entry, self).__init__(*args, **kwargs)  # Call parent constructor.
        self.generate_slug()

    def generate_slug(self):
        self.slug = ''
        if self.title:
            self.slug = slugify(self.title)

    def __repr__(self):
        return '<Entry: %s>' % self.title

There is a lot going on, so let's start with the imports and work our way down. We begin by importing the standard library datetime and re modules. We will be using datetime to get the current date and time, and re to do some string manipulation. The next import statement brings in the db object that we created in app.py. As you recall, the db object is an instance of the SQLAlchemy class, which is a part of the Flask-SQLAlchemy extension. The db object provides access to the classes that we need to construct our Entry model, which is just a few lines ahead. Before the Entry model, we define a helper function, slugify, which we will use to give our blog entries some nice URLs. The slugify function takes a string like A post about Flask and uses a regular expression to turn it into a string that is human-readable in a URL, and so returns a-post-about-flask. Next is the Entry model. Our Entry model is a normal class that extends db.Model. By extending db.Model, our Entry class will inherit a variety of helpers which we'll use to query the database. The attributes of the Entry model are a simple mapping of the names and data that we wish to store in the database and are listed as follows: id: This is the primary key for our database table. This value is set for us automatically by the database when we create a new blog entry, usually an auto-incrementing number for each new entry. While we will not explicitly set this value, a primary key comes in handy when you want to refer one model to another. title: The title for a blog entry, stored as a String column with a maximum length of 100. slug: The URL-friendly representation of the title, stored as a String column with a maximum length of 100. This column also specifies unique=True, so that no two entries can share the same slug. body: The actual content of the post, stored in a Text column. This differs from the String type of the Title and Slug as you can store as much text as you like in this field. created_timestamp: The time a blog entry was created, stored in a DateTime column. We instruct SQLAlchemy to automatically populate this column with the current time by default when an entry is first saved. modified_timestamp: The time a blog entry was last updated. SQLAlchemy will automatically update this column with the current time whenever we save an entry. For short strings such as titles or names of things, the String column is appropriate, but when the text may be especially long it is better to use a Text column, as we did for the entry body. We've overridden the constructor for the class (__init__) so that, when a new model is created, it automatically sets the slug for us based on the title. The last piece is the __repr__ method, which is used to generate a helpful representation of instances of our Entry class. The specific meaning of __repr__ is not important, but it allows you to reference the object that the program is working with when debugging. A final bit of code needs to be added to main.py, the entry-point to our application, to ensure that the models are imported. Add the highlighted changes to main.py as follows:

from app import app, db
import models
import views

if __name__ == '__main__':
    app.run()
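As a quick sanity check, you can try the slugify helper from the interactive shell; the session below is illustrative:

>>> from models import slugify
>>> slugify('A post about Flask')
'a-post-about-flask'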
The last piece is the __repr__ method which is used to generate a helpful representation of instances of our Entry class. The specific meaning of __repr__ is not important but allows you to reference the object that the program is working with, when debugging. A final bit of code needs to be added to main.py, the entry-point to our application, to ensure that the models are imported. Add the highlighted changes to main.py as follows: from app import app, db import models import views if __name__ == '__main__': app.run() Creating the Entry table In order to start working with the Entry model, we first need to create a table for it in our database. Luckily, Flask-SQLAlchemy comes with a nice helper for doing just this. Create a new sub-folder named scripts in the blog project's app directory. Then create a file named create_db.py: (blog) $ cd app/ (blog) $ mkdir scripts (blog) $ touch scripts/create_db.py Add the following code to the create_db.py module. This function will automatically look at all the code that we have written and create a new table in our database for the Entry model based on our models: from main import db if __name__ == '__main__': db.create_all() Execute the script from inside the app/ directory. Make sure the virtualenv is active. If everything goes successfully, you should see no output. (blog) $ python create_db.py (blog) $ If you encounter errors while creating the database tables, make sure you are in the app directory, with the virtualenv activated, when you run the script. Next, ensure that there are no typos in your SQLALCHEMY_DATABASE_URI setting. Working with the Entry model Let's experiment with our new Entry model by saving a few blog entries. We will be doing this from the Python interactive shell. At this stage let's install IPython, a sophisticated shell with features like tab-completion (that the default Python shell lacks): (blog) $ pip install ipython Now check if we are in the app directory and let's start the shell and create a couple of entries as follows: (blog) $ ipython In []: from models import * # First things first, import our Entry model and db object. In []: db # What is db? Out[]: <SQLAlchemy engine='sqlite:////home/charles/projects/blog/app/blog.db'> If you are familiar with the normal Python shell but not IPython, things may look a little different at first. The main thing to be aware of is that In[] refers to the code you type in, and Out[] is the output of the commands you put in to the shell. IPython has a neat feature that allows you to print detailed information about an object. This is done by typing in the object's name followed by a question-mark (?). Introspecting the Entry model provides a bit of information, including the argument signature and the string representing that object (known as the docstring) of the constructor: In []: Entry? # What is Entry and how do we create it? Type: _BoundDeclarativeMeta String Form:<class 'models.Entry'> File: /home/charles/projects/blog/app/models.py Docstring: <no docstring> Constructor information: Definition:Entry(self, *args, **kwargs) We can create Entry objects by passing column values in as the keyword-arguments. In the preceding example, it uses **kwargs; this is a shortcut for taking a dict object and using it as the values for defining the object, as shown next: In []: first_entry = Entry(title='First entry', body='This is the body of my first entry.') In order to save our first entry, we will to add it to the database session. 
The session is simply an object that represents our actions on the database. Even after adding it to the session, it will not be saved to the database yet. In order to save the entry to the database, we need to commit our session:

In []: db.session.add(first_entry)
In []: first_entry.id is None  # No primary key, the entry has not been saved.
Out[]: True
In []: db.session.commit()
In []: first_entry.id
Out[]: 1
In []: first_entry.created_timestamp
Out[]: datetime.datetime(2014, 1, 25, 9, 49, 53, 1337)

As you can see from the preceding code examples, once we commit the session, a unique id will be assigned to our first entry and the created_timestamp will be set to the current time. Congratulations, you've created your first blog entry! Try adding a few more on your own. You can add multiple entry objects to the same session before committing, so give that a try as well. At any point while you are experimenting, feel free to delete the blog.db file and re-run the create_db.py script to start over with a fresh database. Making changes to an existing entry In order to make changes to an existing Entry, simply make your edits and then commit. Let's retrieve our Entry using the id that was returned to us earlier, make some changes, and commit it. SQLAlchemy will know that it needs to be updated. Here is how you might make edits to the first entry:

In []: first_entry = Entry.query.get(1)
In []: first_entry.body = 'This is the first entry, and I have made some edits.'
In []: db.session.commit()

And just like that your changes are saved. Deleting an entry Deleting an entry is just as easy as creating one. Instead of calling db.session.add, we will call db.session.delete and pass in the Entry instance that we wish to remove:

In []: bad_entry = Entry(title='bad entry', body='This is a lousy entry.')
In []: db.session.add(bad_entry)
In []: db.session.commit()  # Save the bad entry to the database.
In []: db.session.delete(bad_entry)
In []: db.session.commit()  # The bad entry is now deleted from the database.

Retrieving blog entries While creating, updating, and deleting are fairly straightforward operations, the real fun starts when we look at ways to retrieve our entries. We'll start with the basics, and then work our way up to more interesting queries. We will use a special attribute on our model class to make queries: Entry.query. This attribute exposes a variety of APIs for working with the collection of entries in the database. Let's simply retrieve a list of all the entries in the Entry table:

In []: entries = Entry.query.all()
In []: entries  # What are our entries?
Out[]: [<Entry u'First entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>, <Entry u'Fourth entry'>]

As you can see, in this example, the query returns a list of Entry instances that we created. When no explicit ordering is specified, the entries are returned to us in an arbitrary order chosen by the database.
Let's specify that we want the entries returned to us in alphabetical order by title:

In []: Entry.query.order_by(Entry.title.asc()).all()
Out[]: [<Entry u'First entry'>, <Entry u'Fourth entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>]

Shown next is how you would list your entries in reverse-chronological order, based on when they were last updated:

In []: newest_to_oldest = Entry.query.order_by(Entry.modified_timestamp.desc()).all()
Out[]: [<Entry: Fourth entry>, <Entry: Third entry>, <Entry: Second entry>, <Entry: First entry>]

Filtering the list of entries It is very useful to be able to retrieve the entire collection of blog entries, but what if we want to filter the list? We could always retrieve the entire collection and then filter it in Python using a loop, but that would be very inefficient. Instead, we will rely on the database to do the filtering for us, and simply specify the conditions for which entries should be returned. In the following example, we will specify that we want to filter by entries where the title equals 'First entry'.

In []: Entry.query.filter(Entry.title == 'First entry').all()
Out[]: [<Entry u'First entry'>]

If this seems somewhat magical to you, it's because it really is! SQLAlchemy uses operator overloading to convert expressions like <Model>.<column> == <some value> into an abstracted object called BinaryExpression. When you are ready to execute your query, these data structures are then translated into SQL. A BinaryExpression is simply an object that represents the logical comparison and is produced by overriding the standard methods that are typically called on an object when comparing values in Python. In order to retrieve a single entry, you have two options, .first() and .one(). Their differences and similarities are summarized in the following table:

Number of matching rows   first() behavior                              one() behavior
1                         Return the object.                            Return the object.
0                         Return None.                                  Raise sqlalchemy.orm.exc.NoResultFound
2+                        Return the first object (based on either      Raise sqlalchemy.orm.exc.MultipleResultsFound
                          explicit ordering or the ordering chosen
                          by the database).

Let's try the same query as before, but instead of calling .all(), we will call .first() to retrieve a single Entry instance:

In []: Entry.query.filter(Entry.title == 'First entry').first()
Out[]: <Entry u'First entry'>

Notice how previously .all() returned a list containing the object, whereas .first() returned just the object itself. Special lookups In the previous example we tested for equality, but there are many other types of lookups possible. In the following table, I have listed some that you may find useful. A complete list can be found in the SQLAlchemy documentation.

Entry.title == 'The title'
    Entries where the title is "The title", case-sensitive.
Entry.title != 'The title'
    Entries where the title is not "The title".
Entry.created_timestamp < datetime.date(2014, 1, 25)
    Entries created before January 25, 2014. For less than or equal, use <=.
Entry.created_timestamp > datetime.date(2014, 1, 25)
    Entries created after January 25, 2014. For greater than or equal, use >=.
Entry.body.contains('Python')
    Entries where the body contains the word "Python", case-sensitive.
Entry.title.endswith('Python')
    Entries where the title ends with the string "Python", case-sensitive. Note that this will also match titles that end with the word "CPython", for example.
Entry.title.startswith('Python')
    Entries where the title starts with the string "Python", case-sensitive.
    Note that this will also match titles like "Pythonistas".
Entry.body.ilike('%python%')
    Entries where the body contains the word "python" anywhere in the text, case-insensitive. The "%" character is a wild-card.
Entry.title.in_(['Title one', 'Title two'])
    Entries where the title is in the given list, either 'Title one' or 'Title two'.

Combining expressions The expressions listed in the preceding table can be combined using bitwise operators to produce arbitrarily complex expressions. Let's say we want to retrieve all blog entries that have the word Python or Flask in the title. To accomplish this, we will create two contains expressions, then combine them using Python's bitwise OR operator, which is a single pipe | character, unlike many other languages that use a double pipe || character:

Entry.query.filter(Entry.title.contains('Python') | Entry.title.contains('Flask'))

Using bitwise operators, we can come up with some pretty complex expressions. Try to figure out what the following example is asking for:

Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)

As you probably guessed, this query returns all entries where the title contains either Python or Flask, and which were created within the last 30 days. We are using Python's bitwise OR and AND operators to combine the sub-expressions. For any query you produce, you can view the generated SQL by printing the query as follows:

In []: query = Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)
In []: print str(query)
SELECT entry.id AS entry_id, ...
FROM entry
WHERE (
    (entry.title LIKE '%%' || :title_1 || '%%')
    OR (entry.title LIKE '%%' || :title_2 || '%%')
) AND entry.created_timestamp > :created_timestamp_1

Negation There is one more piece to discuss, which is negation. If we wanted to get a list of all blog entries which did not contain Python or Flask in the title, how would we do that? SQLAlchemy provides two ways to create these types of expressions, using either Python's unary negation operator (~) or by calling db.not_(). This is how you would construct this query with SQLAlchemy: Using unary negation:

In []: Entry.query.filter(~(Entry.title.contains('Python') | Entry.title.contains('Flask')))

Using db.not_():

In []: Entry.query.filter(db.not_(Entry.title.contains('Python') | Entry.title.contains('Flask')))

Operator precedence Not all operations are considered equal by the Python interpreter. This is like in math class, where we learned that expressions like 2 + 3 * 4 are equal to 14 and not 20, because the multiplication operation occurs first. In Python, bitwise operators all have a higher precedence than things like equality tests, so this means that, when you are building your query expression, you have to pay attention to the parentheses. Let's look at some example Python expressions and see the corresponding results:

(Entry.title == 'Python' | Entry.title == 'Flask')
    Wrong! SQLAlchemy throws an error because the first thing to be evaluated is actually 'Python' | Entry.title!
(Entry.title == 'Python') | (Entry.title == 'Flask')
    Right. Returns entries where the title is either "Python" or "Flask".
~Entry.title == 'Python'
    Wrong! SQLAlchemy will turn this into a valid SQL query, but the results will not be meaningful.
~(Entry.title == 'Python')
    Right.
Making changes to the schema

The final topic we will discuss in this article is how to make modifications to an existing model definition. From the project specification, we know we would like to be able to save drafts of our blog entries. Right now we don't have any way to tell whether an entry is a draft or not, so we will need to add a column that lets us store the status of our entry. Unfortunately, while db.create_all() works perfectly for creating tables, it will not automatically modify an existing table; to do this, we need to use migrations.

Adding Flask-Migrate to our project

We will use Flask-Migrate to help us automatically update our database whenever we change the schema. In the blog virtualenv, install Flask-Migrate using pip:

(blog) $ pip install flask-migrate

The author of SQLAlchemy has a project called Alembic; Flask-Migrate makes use of this and integrates it with Flask directly, making things easier.

Next, we will add a Migrate helper to our app. We will also create a script manager for our app. The script manager allows us to execute special commands within the context of our app, directly from the command line. We will be using the script manager to execute the migrate command. Open app.py and make the following additions:

from flask import Flask
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.script import Manager
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

manager = Manager(app)
manager.add_command('db', MigrateCommand)

In order to use the manager, we will add a new file named manage.py alongside app.py. Add the following code to manage.py:

from app import manager
from main import *

if __name__ == '__main__':
    manager.run()

This looks very similar to main.py, the key difference being that, instead of calling app.run(), we are calling manager.run(). Django has a similar, although auto-generated, manage.py file that serves the same function.

Creating the initial migration

Before we can start changing our schema, we need to create a record of its current state. To do this, run the following commands from inside your blog's app directory. The first command, db init, will create a migrations directory inside the app folder, which will track the changes we make to our schema. The second command, db migrate, will create a snapshot of our current schema so that future changes can be compared to it.

(blog) $ python manage.py db init
Creating directory /home/charles/projects/blog/app/migrations ... done
...
(blog) $ python manage.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
Generating /home/charles/projects/blog/app/migrations/versions/535133f91f00_.py ... done

Finally, we will run db upgrade to apply the migration, which will indicate to the migration system that everything is up-to-date:

(blog) $ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 535133f91f00, empty message
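If you ever want to check where your database stands, Flask-Migrate also wraps Alembic's inspection commands behind the same db command we registered with the manager. Depending on your Flask-Migrate version, the following should print the revision your database is currently on and the list of known revisions:

(blog) $ python manage.py db current
(blog) $ python manage.py db history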
Adding a status column

Now that we have a snapshot of our current schema, we can start making changes. We will be adding a new column named status, which will store an integer value corresponding to a particular status. Although there are only two statuses at the moment (PUBLIC and DRAFT), using an integer instead of a Boolean gives us the option to easily add more statuses in the future. Open models.py and make the following additions to the Entry model:

class Entry(db.Model):
    STATUS_PUBLIC = 0
    STATUS_DRAFT = 1

    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    status = db.Column(db.SmallInteger, default=STATUS_PUBLIC)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    ...

From the command line, we will once again run db migrate to generate the migration script. You can see from the command's output that it found our new column:

(blog) $ python manage.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected added column 'entry.status'
Generating /home/charles/projects/blog/app/migrations/versions/2c8e81936cad_.py ... done

Because we already have blog entries in the database, we need to make a small modification to the auto-generated migration to ensure the statuses for the existing entries are initialized to the proper value. To do this, open up the migration file (mine is migrations/versions/2c8e81936cad_.py) and find the following line:

op.add_column('entry', sa.Column('status', sa.SmallInteger(), nullable=True))

Replacing nullable=True with server_default='0' tells the migration script not to set the column to null by default, but instead to use 0:

op.add_column('entry', sa.Column('status', sa.SmallInteger(), server_default='0'))

Finally, run db upgrade to apply the migration and create the status column:

(blog) $ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade 535133f91f00 -> 2c8e81936cad, empty message

Congratulations, your Entry model now has a status field!
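With the new column in place, we can put it to work right away. Here is a quick sketch (assuming the model definition above and an application context, as in the earlier interactive sessions) that filters entries by their status, using the constants we defined on the model:

# Only the published entries.
published = Entry.query.filter(Entry.status == Entry.STATUS_PUBLIC).all()

# Only the drafts.
drafts = Entry.query.filter(Entry.status == Entry.STATUS_DRAFT).all()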
Summary

By now you should be familiar with using SQLAlchemy to work with a relational database. We covered the benefits of using a relational database and an ORM, configured a Flask application to connect to a relational database, and created SQLAlchemy models. All of this allowed us to create relationships between our data and perform queries. To top it off, we also used a migration tool to handle future database schema changes.

Next, we will set aside the interactive interpreter and start creating views to display blog entries in the web browser. We will put all our SQLAlchemy knowledge to work by creating interesting lists of blog entries, as well as a simple search feature. We will build a set of templates to make the blogging site visually appealing, and learn how to use the Jinja2 templating language to eliminate repetitive HTML coding.