
How-To Tutorials - Web Development

1802 Articles

Getting Organized with NPM and Bower

Packt
06 Oct 2016
13 min read
In this article by Philip Klauzinski and John Moore, the authors of the book Mastering JavaScript Single Page Application Development, we will learn about the basics of NPM and Bower. JavaScript was the bane of the web development industry during the early days of the browser-rendered Internet. Now, it powers hugely impactful libraries such as jQuery, and JavaScript-rendered content (as opposed to server-side-rendered content) is even indexed by many search engines. What was once largely considered an annoying language used primarily to generate popup windows and alert boxes has now become, arguably, the most popular programming language in the world.

Not only is JavaScript now more prevalent than ever in frontend architecture, but it has become a server-side language as well, thanks to the Node.js runtime. We have also seen the proliferation of document-oriented databases, such as MongoDB, which store and return JSON data. With JavaScript present throughout the development stack, the door is now open for JavaScript developers to become full-stack developers without the need to learn a traditional server-side language. Given the right tools and know-how, any JavaScript developer can create single page applications (SPAs) composed entirely of the language they know best, and they can do so using an architecture such as MEAN (MongoDB, Express, AngularJS, and Node.js).

Organization is key to the development of any complex single page application. If you don't get organized from the beginning, you are sure to introduce an inordinate number of regressions to your app. The Node.js ecosystem will help you do this with a full suite of indispensable and open source tools, two of which we will discuss here. In this article, you will learn about:

- Node Package Manager (NPM)
- The Bower frontend package manager

What is Node Package Manager?

Within any full-stack JavaScript environment, Node Package Manager (NPM) will be your go-to tool for setting up your development environment and managing server-side libraries. NPM can be used within both global and isolated environment contexts. We will first explore the use of NPM globally.

Installing Node.js and NPM

NPM is a component of Node.js, so before you can use it, you must install Node.js. You can find installers for both Mac and Windows at nodejs.org. Once you have Node.js installed, using NPM is incredibly easy and is done from the command-line interface (CLI). Start by ensuring you have the latest version of NPM installed, as it is updated more often than Node.js itself:

$ npm install -g npm

When using NPM, the -g option will apply your changes to your global environment. In this case, you want your version of NPM to apply globally. As stated previously, NPM can be used to manage packages both globally and within isolated environments. Therefore, we want essential development tools to be installed globally so that you can use them in multiple projects on the same system.

On Mac and some Unix-based systems, you may have to run the npm command as the superuser (prefix the command with sudo) in order to install packages globally, depending on how NPM was installed. If you run into this issue and wish to remove the need to prefix npm with sudo, see docs.npmjs.com/getting-started/fixing-npm-permissions.

Configuring your package.json file

For any project you develop, you will keep a local package.json file to manage your Node.js dependencies.
This file should be stored at the root of your project directory, and it will only pertain to that isolated environment. This allows you to have multiple Node.js projects with different dependency chains on the same system. When beginning a new project, you can automate the creation of the package.json file from the command line:

$ npm init

Running npm init will take you through a series of JSON property names to define through command-line prompts, including your app's name, version number, description, and more. The name and version properties are required, and your Node.js package will not install without them being defined. Several of the properties will have a default value given within parentheses in the prompt so that you may simply hit Enter to continue. Other properties will allow you to hit Enter with a blank entry, in which case they will either not be saved to the package.json file or be saved with a blank value:

name: (my-app)
version: (1.0.0)
description:
entry point: (index.js)

The entry point prompt will be defined as the main property in package.json and is not necessary unless you are developing a Node.js application. In our case, we can forgo this field. The npm init command may in fact force you to save the main property, so you will have to edit package.json afterward to remove it; however, that field will have no effect on your web app.

You may also choose to create the package.json file manually using a text editor, if you know the appropriate structure to employ. Whichever method you choose, your initial version of the package.json file should look similar to the following example:

{
  "name": "my-app",
  "version": "1.0.0",
  "author": "Philip Klauzinski",
  "license": "MIT",
  "description": "My JavaScript single page application."
}

If you want your project to be private and want to ensure that it does not accidentally get published to the NPM registry, you may want to add the private property to your package.json file and set it to true. Additionally, you may remove some properties that only apply to a registered package:

{
  "name": "my-app",
  "author": "Philip Klauzinski",
  "description": "My JavaScript single page application.",
  "private": true
}

Once you have your package.json file set up the way you like it, you can begin installing Node.js packages locally for your app. This is where the importance of dependencies begins to surface.

NPM dependencies

There are three types of dependencies that can be defined for any Node.js project in your package.json file: dependencies, devDependencies, and peerDependencies. For the purpose of building a web-based SPA, you will only need to use the devDependencies declaration. devDependencies are those that are required for developing your application, but not required in its production environment or for simply running it. If other developers want to contribute to your Node.js application, they will need to run npm install from the command line to set up the proper development environment. For information on the other types of dependencies, see docs.npmjs.com.

When adding devDependencies to your package.json file, the command line again comes to the rescue. Let's use the installation of Browserify as an example:

$ npm install browserify --save-dev

This will install Browserify locally and save it, along with its version range, to the devDependencies object in your package.json file.
Once installed, your package.json file should look similar to the following example:

{
  "name": "my-app",
  "version": "1.0.0",
  "author": "Philip Klauzinski",
  "license": "MIT",
  "devDependencies": {
    "browserify": "^12.0.1"
  }
}

The devDependencies object will store each package as a key-value pair, in which the key is the package name and the value is the version number or version range. Node.js uses semantic versioning, where the three digits of the version number represent MAJOR.MINOR.PATCH. For more information on semantic version formatting, see semver.org.

Updating your development dependencies

You will notice that the version number of the installed package is preceded by a caret (^) symbol by default. This means that package updates will only allow patch and minor updates for versions above 1.0.0. This is meant to prevent major version changes from breaking your dependency chain when updating your packages to the latest versions. To update your devDependencies and save the new version numbers, enter the following from the command line:

$ npm update --save-dev

Alternatively, you can use the -D option as a shortcut for --save-dev:

$ npm update -D

To update all globally installed NPM packages to their latest versions, run npm update with the -g option:

$ npm update -g

For more information on semantic versioning within NPM, see docs.npmjs.com/misc/semver. Now that you have NPM set up and you know how to install your development dependencies, you can move on to installing Bower.

Bower

Bower is a package manager for frontend web assets and libraries. You will use it to maintain your frontend stack and control version chains for libraries such as jQuery, AngularJS, and any other components necessary to your app's web interface.

Installing Bower

Bower is also a Node.js package, so you will install it using NPM, much like you did with the Browserify example installation in the previous section, but this time you will be installing the package globally. This will allow you to run bower from the command line anywhere on your system without having to install it locally for each project:

$ npm install -g bower

You can alternatively install Bower locally as a development dependency so that you may maintain different versions of it for different projects on the same system, but this is generally not necessary:

$ npm install bower --save-dev

Next, check that Bower is properly installed by querying the version from the command line:

$ bower -v

Bower also requires the Git version control system (VCS) to be installed on your system in order to work with packages. This is because Bower communicates directly with GitHub for package management data. If you do not have Git installed on your system, you can find instructions for Linux, Mac, and Windows at git-scm.com.

Configuring your bower.json file

The process of setting up your bower.json file is comparable to that of the package.json file for NPM. It uses the same JSON format, has both dependencies and devDependencies, and can also be created automatically:

$ bower init

Once you type bower init from the command line, you will be prompted to define several properties, with some defaults given within parentheses:

? name: my-app
? version: 0.0.0
? description: My app description.
? main file: index.html
? what types of modules does this package expose? globals
? keywords: my, app, keywords
? authors: Philip Klauzinski
? license: MIT
? homepage: http://gui.ninja
? set currently installed components as dependencies? No
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

These questions may vary depending on the version of Bower you install. Most properties in the bower.json file are not necessary unless you are publishing your project to the Bower registry, as indicated in the final prompt. You will most likely want to mark your package as private unless you plan to register it and allow others to download it as a Bower package.

Once you have created the bower.json file, you can open it in a text editor and change or remove any properties you wish. It should look something like the following example:

{
  "name": "my-app",
  "version": "0.0.0",
  "authors": [
    "Philip Klauzinski"
  ],
  "description": "My app description.",
  "main": "index.html",
  "moduleType": [
    "globals"
  ],
  "keywords": [
    "my",
    "app",
    "keywords"
  ],
  "license": "MIT",
  "homepage": "http://gui.ninja",
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "private": true
}

If you wish to keep your project private, you can reduce your bower.json file to two properties before continuing:

{
  "name": "my-app",
  "private": true
}

Once you have the initial version of your bower.json file set up the way you like it, you can begin installing components for your app.

Bower components location and the .bowerrc file

Bower will install components into a directory named bower_components by default. This directory will be located directly under the root of your project. If you wish to install your Bower components under a different directory name, you must create a local system file named .bowerrc and define the custom directory name there:

{
  "directory": "path/to/my_components"
}

An object with only a single directory property is all that is necessary to define a custom location for your Bower components. There are many other properties that can be configured within a .bowerrc file. For more information on configuring Bower, see bower.io/docs/config/.

Bower dependencies

Bower also allows you to define both the dependencies and devDependencies objects, like NPM. The distinction with Bower, however, is that the dependencies object will contain the components necessary for running your app, while the devDependencies object is reserved for components that you might use for testing, transpiling, or anything that does not need to be included in your frontend stack.

Bower packages are managed using the bower command from the CLI. This is a user command, so it does not require superuser (sudo) permissions. Let's begin by installing jQuery as a frontend dependency for your app:

$ bower install jquery --save

The --save option on the command line will save the package and version number to the dependencies object in bower.json. Alternatively, you can use the -S option as a shortcut for --save:

$ bower install jquery -S

Next, let's install the Mocha JavaScript testing framework as a development dependency:

$ bower install mocha --save-dev

In this case, we will use --save-dev on the command line to save the package to the devDependencies object instead. Your bower.json file should now look similar to the following example:

{
  "name": "my-app",
  "private": true,
  "dependencies": {
    "jquery": "~2.1.4"
  },
  "devDependencies": {
    "mocha": "~2.3.4"
  }
}

Alternatively, you can use the -D option as a shortcut for --save-dev:

$ bower install mocha -D

You will notice that the package version numbers are preceded by the tilde (~) symbol by default, in contrast to the caret (^) symbol, as is the case with NPM. The tilde serves as a more stringent guard against package version updates. With a MAJOR.MINOR.PATCH version number, running bower update will only update to the latest patch version. If a version number is composed of only the major and minor versions, bower update will update the package to the latest minor version.
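If the difference between the tilde and caret ranges is hard to keep straight, you can test version ranges yourself with the semver package, which implements the same range semantics NPM uses. This is a quick illustrative sketch, not from the book, and it assumes you have run npm install semver locally:

```javascript
// Check which concrete versions satisfy tilde and caret ranges.
const semver = require('semver');

console.log(semver.satisfies('2.1.9', '~2.1.4')); // true: patch updates allowed
console.log(semver.satisfies('2.2.0', '~2.1.4')); // false: minor updates blocked
console.log(semver.satisfies('2.2.0', '^2.1.4')); // true: minor updates allowed
console.log(semver.satisfies('3.0.0', '^2.1.4')); // false: major updates blocked
```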
Searching the Bower registry

All registered Bower components are indexed and searchable through the command line. If you don't know the exact package name of a component you wish to install, you can perform a search to retrieve a list of matching names. Most components will have a list of keywords within their bower.json file so that you can more easily find the package without knowing the exact name. For example, you may want to install PhantomJS for headless browser testing:

$ bower search phantomjs

The list returned will include any package with phantomjs in the package name or within its keywords list:

phantom git://github.com/ariya/phantomjs.git
dt-phantomjs git://github.com/keesey/dt-phantomjs
qunit-phantomjs-runner git://github.com/jonkemp/...
parse-cookie-phantomjs git://github.com/sindresorhus/...
highcharts-phantomjs git://github.com/pesla/highcharts-phantomjs.git
mocha-phantomjs git://github.com/metaskills/mocha-phantomjs.git
purescript-phantomjs git://github.com/cxfreeio/purescript-phantomjs.git

You can see from the returned list that the correct package name for PhantomJS is in fact phantom, not phantomjs. Now that you know the correct name, you can proceed to install the package:

$ bower install phantom --save-dev

Now you have Bower installed and know how to manage your frontend web components and development tools, but how do you integrate them into your SPA? This is where Grunt comes in.

Summary

Now that you have learned to set up an optimal development environment with NPM and supply it with frontend dependencies using Bower, it's time to start learning more about building a real app.


Before You Begin

Packt
13 Jul 2016
14 min read
This article by Ashley Chiasson, the author of the book Mastering Articulate Storyline, provides an introduction to the purpose of the book and to best practices related to e-learning product development. In this article, we will cover the following topics:

- Pushing Articulate Storyline to the limit
- Best practices
- How to be mindful of reusability
- Methods for organizing your project
- The differences between storyboarding and rapid development
- Ways of streamlining your development

Pushing Articulate Storyline to the limit

The purpose of this book is really to get you comfortable with pushing Articulate Storyline to its limits. Doing this may also broaden your imagination, allowing you to push your creativity to its limits. There are so many things you can do within Storyline, and a lot of those features, interactions, or functions are overlooked because they just aren't used all that often. Oftentimes, the basic functionality overshadows the more advanced functions because it is easier, it often addresses the need, and it takes less time to learn. That's understandable, but this book is going to open your mind to many more things possible within this tool. You'll get excited, frustrated, excited again, and probably frustrated a few more times, but with all of the practical activities for you to follow along with (and/or reverse engineer), you'll be mastering Articulate Storyline and pushing it to its limits in no time! If you don't quite get one of the concepts explained, don't worry. You'll always have access to this book and the activity downloads as a handy reference or refresher.

Best practices

Before you get too far into your development, it's important to take some steps to streamline your approach by establishing best practices—doing this will help you become more organized and efficient. Everyone has their own process, so this is by no means a prescribed format for the proper way of doing things. These are just some recommendations, from personal experience, that have proven effective for an e-learning developer. Please note that these best practices are not necessarily Storyline-related, but are best practices to consider ahead of development within any e-learning project.

Your best practices will likely be project-specific, depending on how your clients or your organization's internal processes work. Sometimes you'll be provided with a storyboard ahead of development, and sometimes you'll be expected to develop rapidly. Sometimes you'll be provided with all multimedia ahead of development, and sometimes you'll be provided with multimedia after an alpha review. You may want to do a content dump at the beginning of your development process, or you may want to work through each slide from start to finish before moving on. Through experience and observation of what other developers are doing, you will learn how to define and adapt your best practices.

When a new project comes along, it's always a good idea to employ some form of organization. There are many great reasons for this, some of which include being mindful of reusability, maintaining and organizing project and file structure, and streamlining your development process. This article aims to provide you with as much information as necessary to ensure that you are effectively organizing your projects for enhanced efficiency and an understanding of why these methods should always be considered best practices.
How to be mindful of reusability

When I think about reusability in e-learning, I think about objects and content that can be reused in a variety of contexts. Developers often run into this when working on large projects or in industries that involve trade-specific content. When working on multiple projects within one sector, you may come across assets used previously in one course (for example, a 3D model of an aircraft) that may be reused in another course with the same content base. Being able to reuse content and/or assets can come in handy, as it can save you resources in the long run. Reusing previously established assets (if permitted to do so, of course) reduces the amount of development time various departments and/or individuals need to spend.

Best practices for reusability might include creating your own content repository and defining a file naming convention that will make it easy for you to quickly find what you're looking for. If you're extra savvy, you can create a metadata-coded database, but that might require more effort than you have available. While it does take extra time to either come up with a file naming convention or apply metadata tagging to all assets within your repository, the goal is to make your life easier in the long run. Much like the dreaded administrative tasks required of small business owners, it's not the most sought-after task, but it's a necessary one, especially if you truly want to optimize efficiency!

Within Articulate Storyline, you may want to maintain a repository of themes and interactions, as you can use elements of these assets for future development and they can save you a lot of time. Most projects, in the early stages, require an initial prototype for the client to sign off on the general look and feel. In this prototyping phase, having a repository of themes and interactions can really make the process a lot smoother, because you can call on previous work to easily facilitate the elemental design of a new project. Storyline allows you to import content from many sources (for example, PowerPoint, Articulate Engage, Articulate Quizmaker, and more), so you aren't limited to reusing just Storyline interactions and/or themes. Simply structure your repository in an organized manner and you will be able to easily locate the files and file types that you're looking to use at a later date.

Another great Storyline feature when it comes to reusability is Question Banks! Most courses contain questions, knowledge checks, assessments, or whatever you want to call them, but all too seldom do people think about compiling these questions in one neat area for reuse later on. Instead, people often add new question slides, add the question, and go on their merry development way. If you're one of those people, you need to STOP. Your life will be entirely changed by the concept of question banks—if not entirely, at least a little bit, or at least the part of your life that dabbles in development will be changed in some small way. Question banks allow you to create a bank of questions (who would have thought?) and call on these questions at any time for placement within your story—reusability at its finest, at least in Storyline.

Methods for organizing your project

Organizing your project is a necessary evil. Surely there is someone out there who loves this process, but for others, who just want to develop all day and all night, there may be a smaller emphasis placed on organization.
However, you can take some simple steps to organize your project that can be reused for future projects. Within Storyline, the organizational emphasis of this article will be placed on using Story View and optimizing the use of scenes. These are two elements of Storyline that, depending on the size of your project, can make a world of difference when it comes to making sense of all the content you've authored and making its structure more palatable.

Using the Story View

Story View is such a great feature of Storyline! It provides you with a bird's-eye view of your project, or story, and essentially shows you a visual blueprint of all scenes and slides. This is particularly helpful in projects that involve a lot of branching. Instead of seeing the individual parts, you're seeing the parts as they represent the whole—Gestalt psychologists would be proud! You can also use Story View to plan out the movement of existing scenes or slides if content isn't lining up quite the way you want it to.

Optimizing scene use

Scenes play a very big role in maintaining organization within your story. They serve to group slides into smaller segments of the entire story and are typically defined using logical breaks. However, it's entirely up to you how you decide to group your slides. If the story you're working on consists of multiple topics or modules, each topic or module would logically become a new scene. Visually, scenes work in tandem with Story View: while you're in Story View, you can clearly see the various scenes and move things around appropriately. Functionally, scenes serve to create submenus in the main Storyline menu, but you can change this if you don't want to see each scene delineated in the menu.

From an organization and control perspective, scenes can help you reel in unwieldy and overwhelming content. This particularly comes in handy with large courses, where you can easily lose your place when trying to track down a specific slide of a scene in a sea of 150 slides. In this sense, scenes allow you to chunk content into more manageable segments within your story and will likely allow you to save on development and revision time. Using scenes will also help when it comes to previewing your story. Instead of having to wait for 150 slides to load each time you preview, you can choose to preview a scene and will only have to wait for the slides in that scene to load—perhaps 15 slides of the entire course instead of 150. Scenes really are a magical thing!

Asset management

Asset management is just what it sounds like—managing your assets. Your assets may come in many forms, for example, media assets (your draft and/or completed images/video/audio), customer-furnished assets (files provided by the client, which could be raw images/video/audio/PowerPoint/Word documents, and so on), or content output (outputs from whichever authoring tool you're using). If you've worked on large projects, you will likely relate to how unwieldy these assets can become if you don't have a system in place for keeping everything organized. This is where the management element comes into play.

Structuring your folders

Setting up a consistent folder structure is really important when it comes to managing your assets. Structuring your folders may seem like a daunting administrative task, but once you determine a structure that works well for you and your projects, you can copy the structure for each project.
So yes, there is a little bit of up-front effort, but the headache it will save you in the long run when it comes to tracking down assets for reuse is worth it! Again, this folder structure is in no way prescribed, but it is a recommendation, and one that has worked well. It may look overwhelming, but it's really not that bad. There are likely more elements accounted for here than you may need for your project, but all the main elements are included, and you can customize the structure as you see fit. This is how the folder structure breaks down:

Project Folder
- 100 Project Management: depending on how large the project is, this folder may have subfolders, for example:
  - Meeting Minutes
  - Action Tracking
  - Risk Management
  - Contracts
  - Invoices
- 200 Development: this folder typically contains subfolders related to development, for example:
  - Client-Furnished Information (CFI)
  - Scripts and Storyboards (with Scripts, Audio Narration, and Storyboards subfolders)
  - Media (with Video, Audio - split into Draft Audio and Final Audio - and Images subfolders)
  - Flash Output
  - Quality Assurance
- 300 Client: this folder will include anything sent to the client for review, for example:
  - Delivered
  - Review Comments
  - Final

Within these folders, there may be other subfolders, but this is the general structure that has proven effective for me. When it comes to filenames, you may wish to follow a file naming convention dictated by the client, or follow an internal file naming convention that indicates the project, type of media, asset number, and version number, for example, PROJECT_A_001_01. If there are multiple courses for one project, you may also want to add an arbitrary course number to keep tabs on which asset belongs to which course. Once a file naming convention has been determined, these filenames are managed within a spreadsheet, housed within the main 200 > Media folder.

The basic goal of this recommended folder structure is to organize your course assets and break them into three groups to further help with organization. If this folder structure sounds like it might be functional for your purposes, go ahead and download a ready-made version of the folder structure.
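Because the structure is the same for every project, its creation is easy to script. Here is a small, hypothetical Node.js sketch (not from the book) that scaffolds a trimmed-down version of the structure described above; extend the folder list to match your own conventions:

```javascript
// scaffold.js - create the recommended project folder structure.
// Run with: node scaffold.js "My Project"
const fs = require('fs');
const path = require('path');

const folders = [
  '100 Project Management/Meeting Minutes',
  '100 Project Management/Action Tracking',
  '100 Project Management/Contracts',
  '200 Development/Client-Furnished Information (CFI)',
  '200 Development/Scripts and Storyboards/Scripts',
  '200 Development/Scripts and Storyboards/Storyboards',
  '200 Development/Media/Video',
  '200 Development/Media/Audio/Draft Audio',
  '200 Development/Media/Audio/Final Audio',
  '200 Development/Media/Images',
  '300 Client/Delivered',
  '300 Client/Review Comments',
  '300 Client/Final'
];

const root = process.argv[2] || 'New Project';
folders.forEach((dir) => {
  // recursive: true creates intermediate folders as needed (Node 10+).
  fs.mkdirSync(path.join(root, dir), { recursive: true });
});
console.log('Created ' + folders.length + ' folders under "' + root + '"');
```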
Storyboarding and rapid prototyping

Storyboarding and rapid prototyping will likely make their way into your development glossary, if they haven't already, so they're important concepts to discuss when it comes to streamlining your development. Through experience, you'll learn how each of these concepts can help you become more efficient, and this section will discuss some benefits and detriments of both.

Storyboarding is a process wherein the sequence of an e-learning project is laid out visually or textually. This process allows instructional designers to lay out the e-learning project to indicate screens, topics, teaching points, onscreen text, and media descriptions. Storyboards are not limited to just those elements, and there are many variations, but the previously mentioned elements are the ones most commonly represented. Other elements may include the audio narration script, assessment items, high-level learning objectives, filenames, source/reference images, or screenshots illustrating the anticipated media asset or screen to be developed.

The good thing about storyboarding is that it allows you to organize the content and provides documentation that may be reviewed prior to entry into an authoring environment. Storyboarding gives subject matter experts a great opportunity to iron out textual content and ensure accuracy, and it can help developers by reducing small text changes once in the authoring environment. These small changes are just that, small, but they add up quickly and can throw a wrench into your well-oiled, efficient development machine.

Storyboarding also has its downsides. It is an extra step in the development process and may be perceived, by potential clients, as an additional and unnecessary expense. Because storyboards do not depict the final product, reviewers may have difficulty reviewing the content, as they cannot contextualize it without seeing the final product. This can be especially true when it comes to reviewing a storyboard involving complex branching scenarios.

Rapid prototyping, on the other hand, involves working within the authoring environment, in this case Articulate Storyline, to develop your e-learning project slide by slide. This may occur when developing an initial prototype, but it may also occur throughout the lifecycle of the project as a means of eliminating the storyboarding step from the development process. With rapid prototyping, reviewers have the added context of visuals and functionality. They are able to review a proposed version of the end product, and as such, their review comments may become more streamlined and their review may take less time to conduct. However, reviewers may also get overloaded by visual stimuli, which may hamper their ability to review for content accuracy. Additionally, rapid prototyping may become less rapid when it comes to revising complex interactions.

In both situations, there are clear advantages and disadvantages, so a best practice is to determine an appropriate way ahead with regard to development and understand which process may best suit the project you are authoring.

Streamlining your development

Storyline provides you with many ways to streamline your development. A sampling of topics discussed includes the following:

- Setting up auto-save
- Setting up defaults
- Keyboard shortcuts
- Dockable panels
- Using the format painter
- Using the eyedropper
- Cue points
- Duplicating objects
- Naming objects

Summary

This article introduced you to the concept of pushing Articulate Storyline 2 to its limits, provided you with some tips and tricks when it comes to best practices and being mindful of reusability, identified a functional folder structure and explained the importance that organization will play in your Storyline development, explained the difference between storyboarding and rapid prototyping, and gave you a taste of some topics that may help you streamline your development process. You are now armed with all of my best advice for staying productive and organized, and you should be ready to start a new Storyline project!


Quizzes and Interactions in Camtasia Studio

Packt
21 Aug 2014
12 min read
This article by David B. Demyan, the author of the book eLearning with Camtasia Studio, covers the different types of interactions, describes how interactions are created and how they function, and introduces the quiz feature. In this article, we will cover the following specific topics:

- The types of interactions available in Camtasia Studio
- Video player requirements
- Creating simple action hotspots
- Using the quiz feature

Why include learner interactions?

Interactions in e-learning support cognitive learning, the application of behavioral psychology to teaching. Students learn a lot when they perform an action based on the information they are presented. Without exhausting the volumes written about this subject, your own background has probably prepared you for creating effective materials that support cognitive learning. To boil it down for our purposes, you present information in chunks and ask learners to demonstrate whether they have received the signal. In the classroom, this is immortalized as a teacher presenting a lecture and asking questions, a basic educational model. In another scenario, it might be an instructor showing a student how to perform a mechanical task and then asking the student to repeat the same task.

We know from experience that learners struggle with concepts if you present too much information too rapidly without checking to see whether they understand it. In e-learning, the most effective ways to prevent confusion involve chunking information into small, digestible bites and mapping them into an overall program that allows the learner to progress in a logical fashion, all the while interacting and demonstrating comprehension. Interaction is vital to keep your students awake and aware. Interaction, or two-way communication, can take your e-learning video to the next level: a true cognitive learning experience.

Interaction types

While Camtasia Studio does not pretend to be a full-featured interactive authoring tool, it does contain some features that allow you to build interactions and quizzes. This section defines the features that let learners take action while viewing an e-learning video when you ask them for an interaction. There are three types of interactions available in Camtasia Studio:

- Simple action hotspots
- Branching hotspots
- Quizzes

You are probably already thinking of ways these techniques can help support cognitive learning.

Simple action hotspots

Hotspots are click areas. You indicate where the hotspot is using a visual cue, such as a callout. Camtasia allows you to designate the area covered by the callout as a hotspot and define the action to take when it is clicked. An example is to take the learner to another time in the video when the hotspot is clicked. Another click could take the learner back to the original place in the video.

Quizzes

Quizzes are simple questions you can insert in the video, created and implemented to conform to your testing strategy. The question types available are as follows:

- Multiple choice
- Fill in the blanks
- Short answers
- True/false

Video player requirements

Before we learn how to create interactions in Camtasia Studio, you should know about some special video player requirements. A simple video file playing on a computer cannot be interactive by itself. A video created and produced in Camtasia Studio, without some additional program elements, cannot react when you click on it except for what the video player tells it to do. For example, the default player for YouTube videos stops and starts the video when you click anywhere in the video space.

Click interactions in videos created with Camtasia are able to recognize where clicks occur and which actions to take. You provide the click instructions when you set up the interaction. These instructions are required, for example, to intercept the clicking action, determine where exactly the click occurred, and link that spot with a command and destination. These click instructions may be any combination of HyperText Markup Language (HTML), HTML5, JavaScript, and Flash ActionScript. Camtasia takes care of creating the coding behind the scenes, associated with the video player being used. In the case of videos produced with Camtasia Studio, to implement any form of interactivity, you need to select the default Smart Player output options when producing the video.
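To make the idea concrete, here is a minimal, hand-rolled JavaScript sketch of what pause-and-replay wiring looks like over a plain HTML5 video. Camtasia's Smart Player generates the equivalent plumbing for you; the element IDs and time codes here are hypothetical:

```javascript
// Pause the video at a decision point and offer Replay/Continue hotspots.
const video = document.getElementById('lesson-video');
const replayBtn = document.getElementById('replay-hotspot');
const continueBtn = document.getElementById('continue-hotspot');
const pauseAt = 120;     // seconds: where the video pauses
const replayFrom = 103;  // seconds: where the replayed section begins
let prompted = false;

video.addEventListener('timeupdate', () => {
  if (!prompted && video.currentTime >= pauseAt) {
    prompted = true;
    video.pause();
    replayBtn.hidden = false;
    continueBtn.hidden = false;
  }
});

replayBtn.addEventListener('click', () => {
  replayBtn.hidden = true;
  continueBtn.hidden = true;
  prompted = false; // offer the choice again after the replay
  video.currentTime = replayFrom;
  video.play();
});

continueBtn.addEventListener('click', () => {
  replayBtn.hidden = true;
  continueBtn.hidden = true;
  video.play();
});
```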
Creating simple hotspots

The most basic interaction is clicking a hotspot layered over the video. You can create an interactive hotspot for many purposes, including the following:

- Taking learners to a specific marker or frame within the video, as determined on the timeline
- Allowing learners to replay a section of the video
- Directing learners to a website or document to view reference material
- Showing a pop-up with additional information, such as a phone number or web link

Try it – creating a hotspot

If you are building the exercise project featured in this book, let's use it to create an interactive hotspot. The task in this exercise is to pause the video and add a Replay button to allow viewers to review a task. After the replay, a prompt will be added to resume the video from where it was paused.

Inserting the Replay/Continue buttons

The first step is to insert a Replay button to allow viewers to review what they just saw, or to continue without reviewing. This involves adding two hotspot buttons on the timeline, which can be done by performing the following steps:

1. Open your exercise project in Camtasia Studio, or one of your own projects where you can practice.
2. Position the play head right after the part where text is shown being pasted into the CuePrompter window.
3. From the Properties area, select Callouts from the task tabs above the timeline.
4. In the Shape area, select Filled Rounded Rectangle (at the upper-right corner of the drop-down selection). A shape is added to the timeline.
5. Set the Fade in and Fade out durations to about half a second.
6. Select the Effects dropdown and choose Style. Choose the 3D Edge style; it looks like a raised button.
7. Set any other formatting so the button looks the way you want in the preview window.
8. In the Text area, type your button text. For the sample project, enter Replay Copy & Paste.
9. Select the button in the preview window and make a copy of it. You can use Ctrl + C to copy and Ctrl + V to paste the button.
10. In the second copy of the button, select the text and retype it as Continue. The two buttons should be stacked on the timeline.
11. Select the Continue button in the preview window and drag it to the right-hand side, at the same height and distance from the edge. The final placement of the buttons is shown in the sample project.
12. Save the project.

Adding a hotspot to the Continue button

The buttons are currently inactive images on the timeline. Viewers could click them in the produced video, but nothing would happen. To make them active, enable the Hotspot properties for each button.
To add a hotspot to the Continue button, perform the following steps:

1. With the Continue button selected, select the Make hotspot checkbox in the Callouts panel.
2. Click on the Hotspot Properties... button to set properties for the callout button.
3. Under Actions, make sure to select Click to continue.
4. Click on OK.

The Continue button now has an active hotspot assigned to it. When published, the video will pause when the button appears. When the viewer clicks on Continue, the video will resume playing. You can test the video and the operation of the interactive buttons as described later in this article.

Adding a hotspot to the Replay button

Now, let's move on to create an action for the Replay Copy & Paste button:

1. Select the Replay Copy & Paste button in the preview window.
2. Select the Make hotspot checkbox in the Callouts panel.
3. Click on the Hotspot properties... button.
4. Under Actions, select Go to frame at time.
5. Enter the time code for the spot on the timeline where you want to start the replay. In the sample video, this is around 0:01:43;00, just before text is copied in the script.
6. Click on OK.
7. Save the project.

The Replay Copy & Paste button now has an active hotspot assigned to it. Later, when published, the video will pause when the button appears. When viewers click on Replay Copy & Paste, the video will be repositioned at the time you entered and begin playing from there.

Using the quiz feature

A quiz added to a video sets it apart. The addition of knowledge checks and quizzes to assess your learners' understanding of the material puts the video into the true e-learning category. By definition, a knowledge check is a way for students to check their understanding without worrying about scoring. Typically, feedback is given to the student so that they better understand the material, the question, and their answer. The feedback can be terse, such as correct and incorrect, or it can be verbose, informing the student whether the answer is correct and perhaps giving additional information, a hint, or even the correct answers, depending on your strategy in creating the knowledge check. A quiz can take the same form as a knowledge check, but a record of the student's answer is created and reported to an LMS or via an e-mail report. Feedback to the student is optional, again depending on your testing strategy.

In Camtasia Studio, you can insert a quiz question or set of questions anywhere on the timeline you deem appropriate. This is done with the Quizzing task tab.

Try it – inserting a quiz

In this exercise, you will select a spot on the timeline to insert a quiz, enable the Quizzing feature, and write some appropriate questions following the sample project, Using CuePrompter.

Creating a quiz

Place your quiz after you have covered a block of information. The sample project, Using CuePrompter, is a very short task-based tutorial showing some basic steps. Assume for now that you are teaching a course on CuePrompter and need to assess students' knowledge. A good place for a quiz is after the commands to scroll forward, speed up, slow down, and scroll in reverse. Let's give it a try with multiple choice and true/false questions:

1. Position the play head at the appropriate part of the timeline. In the sample video, the end of the scrolling command description is at about 3 minutes 12 seconds.
2. Select Quizzing in the task tabs. If you do not see the Quizzing tab above the timeline, select the More tab to reveal it.
3. Click on the Add quiz button to begin adding questions.
A marker appears on the timeline where your quiz will appear during the video. Continue with the following steps:

4. In the Quiz panel, add a quiz name. In the sample project, the quiz is entitled CuePrompter Commands.
5. Scroll down to Question type. Make sure Multiple Choice is selected from the dropdown.
6. In the Question box, type the question text. In the sample project, the first question is: With text in the prompter ready to go, the keyboard control to start scrolling forward is _________________.
7. In the Answers box, double-click on the checkbox text that says Default Answer Text. Retype the answer Control-F.
8. In the next checkbox text that says <Type an answer choice here>, double-click on it and add the second possible answer, Spacebar. Check the box next to it to indicate that it is the correct answer.
9. Add two more choices: Alt-Insert and Tab.
10. Click on Add question.
11. From the Question type dropdown, select True/False.
12. In the Question box, type: You can stop CuePrompter with the End key.
13. In Answers, select False.
14. For the final question, click on Add question again.
15. From the Question type dropdown, select Multiple Choice.
16. In the Question box, type: Which keyboard command tells CuePrompter to reverse?
17. Enter the four possible answers: Left arrow, Right arrow, Down arrow, and Up arrow.
18. Select Down arrow as the correct answer.
19. Save the project.

Now you have entered three questions and answer choices, while indicating the choice that will be scored correct if selected. Next, preview the quiz to check its format and function.

Previewing the quiz

Camtasia Studio allows you to preview quizzes for correct formatting, wording, and scoring. Continue to follow along in the exercise project and perform the following steps:

1. Leave checkmarks in the Score quiz and Viewer can see answers after submitting boxes.
2. Click on the Preview button. A web page opens in your Internet browser showing the first question.
3. Select an answer and click on Next. The second quiz question is displayed.
4. Select an answer and click on Next. The third quiz question is displayed.
5. Select an answer and click on Submit Answers. As this is the final question, there is no Next. Since we left the Score quiz and Viewer can see answers after submitting options selected, the learner receives a score prompt.
6. Click on View Answers to review the answers you gave. Correct responses are shown with a green checkmark and incorrect ones are shown with a red X mark. If you do not want your learners to see the answers, remove the checkmark from Viewer can see answers after submitting.
7. Exit the browser to discontinue previewing the quiz.
8. Save the project.

This completes the Try it exercise for inserting and previewing a quiz in your video e-learning project.

Summary

In this article, we learned about the different types of interactions, video player requirements, creating simple action hotspots, and inserting and previewing a quiz.


What is Flux?

Packt
27 Apr 2016
27 min read
In this article by Adam Boduch, the author of the book Flux Architecture, we cover the basic idea of Flux. Flux is supposed to be this great new way of building complex user interfaces that scale well. At least that's the general messaging around Flux, if you're only skimming the Internet literature. But how do we define this great new way of building user interfaces? What makes it superior to other, more established frontend architectures? The aim of this article is to cut through the sales bullet points and explicitly spell out what Flux is, and what it isn't, by looking at the patterns that Flux provides. And since Flux isn't a software package in the traditional sense, we'll go over the conceptual problems that we're trying to solve with Flux. Finally, we'll close the article by walking through the core components found in any Flux architecture, and we'll install the Flux npm package and write a hello world Flux application right away. Let's get started.

Flux is a set of patterns

We should probably get the harsh reality out of the way first—Flux is not a software package. It's a set of architectural patterns for us to follow. While this might sound disappointing to some, don't despair—there are good reasons for not implementing yet another framework. Throughout the course of this book, we'll see the value of Flux existing as a set of patterns instead of a de facto implementation. For now, we'll go over some of the high-level architectural patterns put in place by Flux.

Data entry points

With traditional approaches to building frontend architectures, we don't put much thought into how data enters the system. We might entertain the idea of data entry points, but not in any detail. For example, with MVC (Model View Controller) architectures, the controller is supposed to control the flow of data. And for the most part, it does exactly that. On the other hand, the controller is really just about controlling what happens after it already has the data. How does the controller get data in the first place?

Consider a typical MVC diagram. At first glance, there's nothing wrong with the picture: the data flow, represented by arrows between the models, views, and controllers, is easy to follow. But where does the data originate? For example, the view can create new data and pass it to the controller in response to a user event. A controller can create new data and pass it to another controller, depending on the composition of our controller hierarchy. What about the controller in question—can it create data itself and then use it? In a diagram such as this one, these questions don't have much virtue. But if we're trying to scale an architecture to have hundreds of these components, the points at which data enters the system become very important. Since Flux is used to build architectures that scale, it considers data entry points an important architectural pattern.

Managing state

State is one of those realities we need to cope with in frontend development. Unfortunately, we can't compose our entire application of pure functions with no side effects, for two reasons. First, our code needs to interact with the DOM interface in one way or another. This is how the user sees changes in the UI. Second, we don't store all our application data in the DOM (at least we shouldn't do this). As time passes and the user interacts with the application, this data will change.

There's no cut-and-dried approach to managing state in a web application, but there are several ways to limit the amount of state changes that can happen and enforce how they happen. For example, pure functions don't change the state of anything; they can only create new data.
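Here's a minimal sketch of what this looks like in JavaScript; the function and data names are illustrative, not from the book:

```javascript
// A pure function: it never mutates its input, it only creates new data.
function addTodo(todos, text) {
  return todos.concat({ text: text, done: false });
}

var before = [{ text: 'write article', done: true }];
var after = addTodo(before, 'review edits');

console.log(before.length); // 1 - the original array is untouched
console.log(after.length);  // 2 - a new value was produced instead
```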
As you can see, there are no side effects with pure functions, because no data changes state as a result of calling them. So why is this a desirable trait, if state changes are inevitable? The idea is to enforce where state changes happen. For example, perhaps we only allow certain types of components to change the state of our application data. This way, we can rule out several sources as the cause of a state change. Flux is big on controlling where state changes happen. Later on in the article, we'll see how Flux stores manage state changes. What's important about how Flux manages state is that it's handled at an architectural layer. Contrast this with an approach that lays out a set of rules saying which component types are allowed to mutate application data—things get confusing. With Flux, there's less room for guessing where state changes take place.

Keeping updates synchronous

Complementary to data entry points is the notion of update synchronicity. That is, in addition to managing where the state changes originate from, we have to manage the ordering of these changes relative to other things. If the data entry points are the what of our data, then synchronously applying state changes across all the data in our system is the when.

Let's think about why this matters for a moment. In a system where data is updated asynchronously, we have to account for race conditions. Race conditions can be problematic because one piece of data can depend on another, and if they're updated in the wrong order, we see cascading problems from one component to another. When something is asynchronous, we have no control over when that something changes state. So all we can do is wait for the asynchronous updates to happen, and then go through our data and make sure all of our data dependencies are satisfied. Without tools that automatically handle these dependencies for us, we end up writing a lot of state-checking code. Flux addresses this problem by ensuring that the updates that take place across our data stores are synchronous, making this kind of race-condition scenario impossible.

Information architecture

It's easy to forget that we work in information technology and that we should be building technology around information. In recent times, however, we seem to have moved in the other direction, where we're forced to think about implementation before we think about information. More often than not, the data exposed by the sources used by our application doesn't have what the user needs. It's up to our JavaScript to turn this raw data into something consumable by the user. This is our information architecture.

Does this mean that Flux is used to design information architectures as opposed to software architectures? This isn't the case at all. In fact, Flux components are realized as true software components that perform actual computations. The trick is that Flux patterns enable us to think about information architecture as a first-class design consideration.
Rather than having to sift through all sorts of components and their implementation concerns, we can make sure that we're getting the right information to the user. Once our information architecture takes shape, the larger architecture of our application follows as a natural extension to the information we're trying to communicate to our users. Producing information from data is the difficult part. We have to distill many sources of data into not only information, but information that's also of value to the user. Getting this wrong is a huge risk for any project. When we get it right, we can then move on to the specific application components, like the state of a button widget, and so on.

Flux architectures keep data transformations confined to their stores. A store is an information factory—raw data goes in and new information comes out. Stores control how data enters the system, the synchronicity of state changes, and they define how the state changes. When we go into more depth on stores as we progress through the book, we'll see how they're the pillars of our information architecture.
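As a rough sketch of the store-as-information-factory idea, here is a tiny, illustrative store built on Node's EventEmitter; the store, action, and field names are hypothetical, not from the book:

```javascript
// All state changes for this slice of the application happen here.
const EventEmitter = require('events');

class OrderStore extends EventEmitter {
  constructor() {
    super();
    this.state = { orders: [], total: 0 };
  }

  handleAction(action) {
    if (action.type === 'ADD_ORDER') {
      // Raw data goes in; new information (a running total) comes out.
      const orders = this.state.orders.concat(action.order);
      const total = orders.reduce((sum, order) => sum + order.price, 0);
      this.state = { orders: orders, total: total };
      this.emit('change', this.state);
    }
  }
}

const store = new OrderStore();
store.on('change', (state) => console.log('total is now', state.total));
store.handleAction({ type: 'ADD_ORDER', order: { price: 25 } }); // total is now 25
```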
In this section, we'll look at some of the conceptual problems that Flux addresses from an architectural perspective. These include unidirectional data flow, traceability, consistency, component layering, and loosely coupled components. Each of these conceptual problems poses a degree of risk to our software, in particular, the ability to scale it. Flux helps us get out in front of these issues as we're building the software.

Data flow direction
We're creating an information architecture to support the feature-rich application that will ultimately sit on top of this architecture. Data flows into the system, and will eventually reach an endpoint, terminating the flow. It's what happens in between the entry point and the termination point that determines the data flow within a Flux architecture. This is illustrated here:

Data flow is a useful abstraction, because it's easy to visualize data as it enters the system and moves from one point to another. Eventually, the flow stops. But before it does, several side effects happen along the way. It's that middle block in the preceding diagram that's concerning, because we don't know exactly how the data flow reached the end. Let's say that our architecture doesn't pose any restrictions on data flow. Any component is allowed to pass data to any other component, regardless of where that component lives. Let's try to visualize this setup:

As you can see, our system has clearly defined entry and exit points for our data. This is good because it means that we can confidently say that the data flows through our system. The problem with this picture is with how the data flows between the components of the system. There's no direction, or rather, it's multidirectional. This isn't a good thing. Flux is a unidirectional data flow architecture. This means that the preceding component layout isn't possible. The question is—why does this matter? At times, it might seem convenient to be able to pass data around in any direction, that is, from any component to any other component. This in and of itself isn't the issue—passing data alone doesn't break our architecture. However, when data moves around our system in more than one direction, there's more opportunity for components to fall out of sync with one another. This simply means that if data doesn't always move in the same direction, there's always the possibility of ordering bugs. Flux enforces the direction of data flows, and thus eliminates the possibility of components updating themselves in an order that breaks the system. No matter what data has just entered the system, it'll always flow through the system in the same order as any other data, as illustrated here:

Predictable root cause
With data entering our system and flowing through our components in one direction, we can more easily trace any effect to its cause. In contrast, when a component sends data to any other component residing in any architectural layer, it's a lot more difficult to figure out how the data reached its destination. Why does this matter? Debuggers are sophisticated enough that we can easily traverse any level of complexity during runtime. The problem with this notion is that it presumes we only need to trace what's happening in our code for the purposes of debugging. Flux architectures have inherently predictable data flows. This is important for a number of design activities and not just debugging. Programmers working on Flux applications will begin to intuitively sense what's going to happen.
Anticipation is key, because it lets us avoid design dead-ends before we hit them. When the cause and effect are easy to tease out, we can spend more time focusing on building application features—the things the customers care about.

Consistent notifications
The direction in which we pass data from component to component in Flux architectures should be consistent. In terms of consistency, we also need to think about the mechanism used to move data around our system. For example, publish/subscribe (pub/sub) is a popular mechanism used for inter-component communication. What's neat about this approach is that our components can communicate with one another, and yet, we're able to maintain a level of decoupling. In fact, this is fairly common in frontend development because component communication is largely driven by user events. These events can be thought of as fire-and-forget. Any other components that want to respond to these events in some way need to take it upon themselves to subscribe to the particular event. While pub/sub does have some nice properties, it also poses architectural challenges, in particular, scaling complexities. For example, let's say that we've just added several new components for a new feature. Well, in which order do these components receive update messages relative to pre-existing components? Do they get notified after all the pre-existing components? Should they come first? This presents a data dependency scaling issue. The other challenge with pub/sub is that the events that get published are often fine-grained to the point where we'll want to subscribe and later unsubscribe from the notifications. This leads to consistency challenges because trying to code lifecycle changes when there's a large number of components in the system is difficult and presents opportunities for missed events. The idea with Flux is to sidestep the issue by maintaining a static inter-component messaging infrastructure that issues notifications to every component. In other words, programmers don't get to pick and choose the events their components will subscribe to. Instead, they have to figure out which of the events that are dispatched to them are relevant, ignoring the rest. Here's a visualization of how Flux dispatches events to components:

The Flux dispatcher sends the event to every component; there's no getting around this. Instead of trying to fiddle with the messaging infrastructure, which is difficult to scale, we implement logic within the component to determine whether or not the message is of interest. It's also within the component that we can declare dependencies on other components, which helps influence the ordering of messages.

Simple architectural layers
Layers can be a great way to organize an architecture of components. For one thing, it's an obvious way to categorize the various components that make up our application. For another thing, layers serve as a means to put constraints around communication paths. This latter point is especially relevant to Flux architectures since it's imperative that data flow in one direction. It's much easier to apply constraints to layers than it is to individual components. Here is an illustration of Flux layers:

This diagram isn't intended to capture the entire data flow of a Flux architecture, just how data flows between the main three layers. It also doesn't give any detail about what's in the layers.
Don't worry: the next section gives introductory explanations of the types of Flux components, and the communication that happens between the layers is the focus of this entire book. As you can see, the data flows from one layer to the next, in one direction. Flux only has a few layers, and as our applications scale in terms of component counts, the layer count remains fixed. This puts a cap on the complexity involved with adding new features to an already large application. In addition to constraining the layer count and the data flow direction, Flux architectures are strict about which layers are actually allowed to communicate with one another. For example, the action layer could communicate with the view layer, and we would still be moving in one direction. We would still have the layers that Flux expects. However, skipping a layer like this is prohibited. By ensuring that each layer only communicates with the layer directly beneath it, we can rule out bugs introduced by doing something out-of-order.

Loosely coupled rendering
One decision made by the Flux designers that stands out is that Flux architectures don't care how UI elements are rendered. That is to say, the view layer is loosely coupled to the rest of the architecture. There are good reasons for this. Flux is an information architecture first, and a software architecture second. We start with the former and graduate toward the latter. The challenge with view technology is that it can exert a negative influence on the rest of the architecture. For example, a given view technology has a particular way of interacting with the DOM. Then, if we've already decided on this technology, we'll end up letting it influence the way our information architecture is structured. This isn't necessarily a bad thing, but it can lead to us making concessions about the information we ultimately display to our users. What we should really be thinking about is the information itself and how this information changes over time. What actions are involved that bring about these changes? How is one piece of data dependent on another piece of data? Flux naturally removes itself from the browser technology constraints of the day so that we can focus on the information first. It's easy to plug views into our information architecture as it evolves into a software product.

Flux components
In this section, we'll begin our journey into the concepts of Flux. These concepts are the essential ingredients used in formulating a Flux architecture. While there are no detailed specifications for how these components should be implemented, they nevertheless lay the foundation of our implementation. This is a high-level introduction to the components we'll be implementing throughout this book.

Action
Actions are the verbs of the system. In fact, it's helpful if we derive the name of an action directly from a sentence. These sentences are typically statements of functionality; something we want the application to do. Here are some examples:

Fetch the session
Navigate to the settings page
Filter the user list
Toggle the visibility of the details section

These are simple capabilities of the application, and when we implement them as part of a Flux architecture, actions are the starting point. These human-readable action statements often require other new components elsewhere in the system, but the first step is always an action. So, what exactly is a Flux action? At its simplest, an action is nothing more than a string—a name that helps identify the purpose of the action.
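For the four statements listed above, the simplest possible actions could be nothing more than name strings. This is a hypothetical sketch; the constant names are my own, not from the original:

// Action names derived directly from statements of functionality.
var FETCH_SESSION = 'FETCH_SESSION';
var NAVIGATE_SETTINGS = 'NAVIGATE_SETTINGS';
var FILTER_USER_LIST = 'FILTER_USER_LIST';
var TOGGLE_DETAILS_VISIBILITY = 'TOGGLE_DETAILS_VISIBILITY';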
More typically, actions consist of a name and a payload. Don't worry about the payload specifics just yet—as far as actions are concerned, they're just opaque pieces of data being delivered into the system. Put differently, actions are like mail parcels. The entry point into our Flux system doesn't care about the internals of the parcel, only that they get to where they need to go. Here's an illustration of actions entering a Flux system:

This diagram might give the impression that actions are external to Flux when, in fact, they're an integral part of the system. The reason this perspective is valuable is that it forces us to think about actions as the only means to deliver new data into the system.

Golden Flux Rule: If it's not an action, it can't happen.

Dispatcher
The dispatcher in a Flux architecture is responsible for distributing actions to the store components (we'll talk about stores next). A dispatcher is actually kind of like a broker—if actions want to deliver new data to a store, they have to talk to the broker, so it can figure out the best way to deliver them. Think about a message broker in a system like RabbitMQ. It's the central hub where everything is sent before it's actually delivered. Here is a diagram depicting a Flux dispatcher receiving actions and dispatching them to stores:

In a Flux application, there's only one dispatcher. It can be thought of more as a pseudo layer than an explicit layer. We know the dispatcher is there, but it's not essential to this level of abstraction. What we're concerned about at an architectural level is making sure that when a given action is dispatched, we know that it's going to make its way to every store in the system. Having said that, the dispatcher's role is critical to how Flux works. It's the place where store callback functions are registered. And it's how data dependencies are handled. Stores tell the dispatcher about other stores that they depend on, and it's up to the dispatcher to make sure these dependencies are properly handled.

Golden Flux Rule: The dispatcher is the ultimate arbiter of data dependencies.

Store
Stores are where state is kept in a Flux application. Typically, this means the application data that's sent to the frontend from the API. However, Flux stores take this a step further and explicitly model the state of the entire application. For now, just know that stores are where state that matters can be found. Other Flux components don't have state—they have implicit state at the code level, but we're not interested in this from an architectural point of view. Actions are the delivery mechanism for new data entering the system. The term new data doesn't imply that we're simply appending it to some collection in a store. All data entering the system is new in the sense that it hasn't been dispatched as an action yet—it could in fact result in a store changing state. Let's look at a visualization of an action that results in a store changing state:

The key aspect of how stores change state is that there's no external logic that determines whether a state change should happen. It's the store, and only the store, that makes this decision and then carries out the state transformation. This is all tightly encapsulated within the store. This means that when we need to reason about a particular piece of information, we need not look any further than the stores. They're their own boss—they're self-employed.

Golden Flux Rule: Stores are where state lives, and only stores themselves can change this state.
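To make these rules concrete, here's a minimal store sketch built on the dispatcher from the "flux" NPM package, which we install later in this article. The store shape, action name, and change-event wiring are a simplification of my own, not a canonical implementation:

import * as flux from 'flux';
import { EventEmitter } from 'events';

const dispatcher = new flux.Dispatcher();

// All of this store's state lives here; nothing outside can change it directly.
const userStore = Object.assign(new EventEmitter(), {
  state: { users: [], filter: '' },
  token: null
});

// Registering with the dispatcher means the store hears every action.
userStore.token = dispatcher.register((action) => {
  switch (action.type) {
    case 'FILTER_USER_LIST':
      // Only the store decides how, and whether, its state changes.
      userStore.state.filter = action.payload;
      userStore.emit('change', userStore.state);
      break;
    default:
      break; // Actions that aren't relevant are simply ignored.
  }
});

A store that depends on this one could call dispatcher.waitFor([userStore.token]) inside its own registered callback; this is how the dispatcher arbitrates data dependencies between stores.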
View
The last Flux component we're going to look at in this section is the view, and it technically isn't even a part of Flux. At the same time, views are obviously a critical part of our application. Views are almost universally understood as the part of our architecture that's responsible for displaying data to the user—it's the last stop as data flows through our information architecture. For example, in MVC architectures, views take model data and display it. In this sense, views in a Flux-based application aren't all that different from MVC views. Where they differ markedly is with regard to handling events. Let's take a look at the following diagram:

Here we can see the contrasting responsibilities of a Flux view, compared with a view component found in your typical MVC architecture. The two view types have similar types of data flowing into them—application data used to render the component and events (often user input). What's different between the two types of views is what flows out of them. The typical view doesn't really have any constraints in how its event handler functions communicate with other components. For example, in response to a user clicking a button, the view could directly invoke behavior on a controller, change the state of a model, or it might query the state of another view. On the other hand, the Flux view can only dispatch new actions. This keeps our single entry point into the system intact and consistent with other mechanisms that want to change the state of our store data. In other words, an API response updates state in the exact same way as a user clicking a button does. Given that views should be restricted in terms of how data flows out of them (besides DOM updates) in a Flux architecture, you would think that views should be an actual Flux component. This would make sense insofar as making actions the only possible option for views. However, there's also no reason we can't enforce this ourselves, with the benefit being that Flux remains entirely focused on creating information architectures. Keep in mind, however, that Flux is still in its infancy. There are, no doubt, going to be external influences as more people start adopting Flux. Maybe Flux will have something to say about views in the future. Until then, views exist outside of Flux but are constrained by the unidirectional nature of Flux.

Golden Flux Rule: The only way data flows out of a view is by dispatching an action.

Installing the Flux package
We'll get some of our boilerplate code setup tasks out of the way too, since we'll be using a similar setup throughout the book. We'll skip going over Node + NPM installation since it's sufficiently covered in great detail all over the Internet. We'll assume Node is installed and ready to go from this point forward. The first NPM package we'll need installed is Webpack. This is an advanced module bundler that's well suited for modern JavaScript applications, including Flux-based applications. We'll want to install this package globally so that the webpack command gets installed on our system:

npm install webpack -g

With Webpack in place, we can build each of the code examples that ship with this book. However, our project does require a couple of local NPM packages, and these can be installed as follows:

npm install flux babel-core babel-loader babel-preset-es2015 --save-dev

The --save-dev option adds these development dependencies to our package.json file, if one exists.
This is just to get started—it isn't necessary to manually install these packages to run the code examples in this book. The examples you've downloaded already come with a package.json, so to install the local dependencies, simply run the following from within the same directory as the package.json file:

npm install

Now the webpack command can be used to build the example. Alternatively, if you plan on playing with the code, which is obviously encouraged, try running webpack --watch. This latter form of the command will monitor the files used in the build for changes, and run the build whenever they change. This is indeed a simple hello world to get us off to a running start, in preparation for the remainder of the book. We've taken care of all the boilerplate setup tasks by installing Webpack and its supporting modules. Let's take a look at the code now. We'll start by looking at the markup that's used.

<!doctype html>
<html>
  <head>
    <title>Hello Flux</title>
    <script src="main-bundle.js" defer></script>
  </head>
  <body></body>
</html>

Not a lot to it, is there? There isn't even content within the body tag. The important part is the main-bundle.js script—this is the code that's built for us by Webpack. Let's take a look at this code now:

// Imports the "flux" module.
import * as flux from 'flux';

// Creates a new dispatcher instance. "Dispatcher" is
// the only useful construct found in the "flux" module.
const dispatcher = new flux.Dispatcher();

// Registers a callback function, invoked every time
// an action is dispatched.
dispatcher.register((e) => {
  var p;

  // Determines how to respond to the action. In this case,
  // we're simply creating new content using the "payload"
  // property. The "type" property determines how we create
  // the content.
  switch (e.type) {
    case 'hello':
      p = document.createElement('p');
      p.textContent = e.payload;
      document.body.appendChild(p);
      break;
    case 'world':
      p = document.createElement('p');
      p.textContent = `${e.payload}!`;
      p.style.fontWeight = 'bold';
      document.body.appendChild(p);
      break;
    default:
      break;
  }
});

// Dispatches a "hello" action.
dispatcher.dispatch({
  type: 'hello',
  payload: 'Hello'
});

// Dispatches a "world" action.
dispatcher.dispatch({
  type: 'world',
  payload: 'World'
});

As you can see, there's not much to this hello world Flux application. In fact, the only Flux-specific component this code creates is a dispatcher. It then dispatches a couple of actions, and the handler function that's registered with the dispatcher processes them. Don't worry that there are no stores or views in this example. The idea is that we've got the basic Flux NPM package installed and ready to go.

Summary
This article introduced you to Flux. Specifically, we looked at both what Flux is and what it isn't. Flux is a set of architectural patterns that, when applied to our JavaScript application, help with getting the data flow aspect of our architecture right. Flux isn't yet another framework used for solving specific implementation challenges, be it browser quirks or performance gains—there's a multitude of tools already available for these purposes. Perhaps the most important defining aspect of Flux is the conceptual problems it solves—things like unidirectional data flow. This is a major reason that there's no de facto Flux implementation. We wrapped the article up by walking through the setup of our build components used throughout the book.
To test that the packages are all in place, we created a very basic hello world Flux application.
Building a Simple Address Book Application with jQuery and PHP
Packt
19 Feb 2010
14 min read
Let's get started. The application folder will be made up of five files:

addressbook.css
addressbook.html
addressbook.php
addressbook.js
jquery.js

addressbook.css will contain the CSS for the interface styling, addressbook.html will contain the HTML source, addressbook.js will contain the JavaScript code, and addressbook.php will contain the server-side code that stores the contacts to the database, deletes contacts, provides updates, and fetches the list of contacts.

Let's look through the HTML
We include the scripts and the CSS file in the head tag of the addressbook.html file.

<title>sample address book</title>
<link rel="stylesheet" type="text/css" href="addressbook.css">
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="addressbook.js"></script>

The code above includes the CSS for styling the application, the jQuery library for cross-browser JavaScript and easy DOM access, and addressbook.js, which contains the functions that translate user actions into JavaScript and AJAX calls.

The body tag should contain this:

<div id="Layer1">
  <h1>Simple Address Book</h1>
  <div id="addContact">
    <a href="#add-contact" id="add-contact-btn">Add Contact</a>
    <table id="add-contact-form">
      <tr>
        <td>Names:</td><td><input type="text" name="names" id="names" /></td>
      </tr>
      <tr>
        <td>Phone Number:</td><td><input type="text" name="phone" id="phone" /></td>
      </tr>
      <tr>
        <td>&nbsp;</td><td>
          <a href="#save-contact" id="save-contact-btn">Save Contact</a>
          <a href="#cancel" id="cancel-btn">Cancel</a>
        </td>
      </tr>
    </table>
  </div>
  <div id="notice">
    notice box
  </div>
  <div id="list-title">My Contact List</div>
  <ul id="contacts-lists">
    <li>mambe nanje [+23777545907] - <a href="#delete-id" class="deletebtn" contactid='1'>delete contact</a></li>
    <li>mambe nanje [+23777545907] - <a href="#delete-id" class="deletebtn" contactid='2'>delete contact</a></li>
    <li>mambe nanje [+23777545907] - <a href="#delete-id" class="deletebtn" contactid='3'>delete contact</a></li>
  </ul>
</div>

The above code creates an HTML form that provides input fields for inserting new address book entries, along with a button that reveals the form via JavaScript. It also creates a notification div and displays the contact list with a delete link on each entry.
With the above markup in place, the following CSS (addressbook.css) styles the interface:

/* CSS Document */
body {
  background-color: #000000;
}
#Layer1 {
  margin: auto;
  width: 484px;
  height: 308px;
  z-index: 1;
}
#add-contact-form {
  color: #FF9900;
  font-weight: bold;
  font-family: Verdana, Arial, Helvetica, sans-serif;
  background-color: #333333;
  margin-top: 5px;
  padding: 10px;
}
#add-contact-btn {
  background-color: #FF9900;
  font-weight: bold;
  font-family: Verdana, Arial, Helvetica, sans-serif;
  border: 1px solid #666666;
  color: #000;
  text-decoration: none;
  padding: 2px;
}
#save-contact-btn {
  background-color: #FF9900;
  font-weight: bold;
  font-family: Verdana, Arial, Helvetica, sans-serif;
  border: 1px solid #666666;
  color: #000;
  text-decoration: none;
  padding: 2px;
}
#cancel-btn {
  background-color: #FF9900;
  font-weight: bold;
  font-family: Verdana, Arial, Helvetica, sans-serif;
  border: 1px solid #666666;
  color: #000;
  text-decoration: none;
  padding: 2px;
}
h1 {
  color: #FFFFFF;
  font-family: Arial, Helvetica, sans-serif;
}
#list-title {
  color: #FFFFFF;
  font-weight: bold;
  font-size: 14px;
  font-family: Arial, Helvetica, sans-serif;
  margin-top: 10px;
}
#contacts-lists {
  color: #FF6600;
  font-weight: bold;
  font-family: Verdana, Arial, Helvetica, sans-serif;
  font-size: 12px;
}
#contacts-lists a {
  background-color: #FF9900;
  text-decoration: none;
  padding: 2px;
  color: #000;
  margin-bottom: 2px;
}
#contacts-lists li {
  list-style: none;
  border-bottom: 1px dashed #666666;
  margin-bottom: 10px;
  padding-bottom: 5px;
}
#notice {
  width: 400px;
  margin: auto;
  background-color: #FFFF99;
  border: 1px solid #FFCC99;
  font-weight: bold;
  font-family: verdana;
  margin-top: 10px;
  padding: 4px;
}

The CSS code styles the HTML above, giving the interface its finished look. Now that we have our HTML and CSS working, we need to set up the database and the PHP server-side code that will handle the AJAX requests from the jQuery functions. Create a MySQL database, then execute the following SQL code to create the contacts table. This is the only table this application needs.

CREATE TABLE `contacts` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `names` VARCHAR( 200 ) NOT NULL,
  `phone` VARCHAR( 100 ) NOT NULL
);

Let's analyze the PHP code. Remember, this code will be located in addressbook.php.
The database connection code:

# FileName="Connection_php_mysql.htm"
# Type="MYSQL"
# HTTP="true"
//configure the database parameters
$hostname_packpub_addressbook = "YOUR-DATABASE-HOST";
$database_packpub_addressbook = "YOUR-DATABASE-NAME";
$username_packpub_addressbook = "YOUR-DATABASE-USERNAME";
$password_packpub_addressbook = "YOUR-DATABASE-PASSWORD";
//connect to the database server
$packpub_addressbook = mysql_pconnect($hostname_packpub_addressbook, $username_packpub_addressbook, $password_packpub_addressbook) or trigger_error(mysql_error(), E_USER_ERROR);
//select the database
mysql_select_db($database_packpub_addressbook);

The above code sets the parameters required for the database connection, then establishes a connection to the server and selects your database. The PHP code then defines the functions saveContact, deleteContact, and getContacts. These functions will do exactly as their names imply: save the contact from the AJAX call to the database, delete a contact via an AJAX request, or get the list of contacts. The functions are as shown below:

//function to save new contact
/**
* @param <string> $name //name of the contact
* @param <string> $phone //the telephone number of the contact
*/
function saveContact($name, $phone){
  $sql = "INSERT INTO contacts (names, phone) VALUES ('".$name."','".$phone."');";
  $result = mysql_query($sql) or die(mysql_error());
}

//let's write a function to delete a contact
/**
* @param <int> $id //the contact id in database we wish to delete
*/
function deleteContact($id){
  $sql = "DELETE FROM contacts WHERE id=".$id;
  $result = mysql_query($sql);
}

//let's get all the contacts
function getContacts(){
  //execute the sql to get all the contacts in db
  $sql = "SELECT * FROM contacts";
  $result = mysql_query($sql);
  //store the contacts in an array of objects
  $contacts = array();
  while($record = mysql_fetch_object($result)){
    array_push($contacts, $record);
  }
  //return the contacts
  return $contacts;
}

The code above creates the functions, but they are not called until the following code executes:

//lets handle the Ajax calls now
$action = $_POST['action'];
//the action for now is either add or delete
if($action == "add"){
  //get the post variables for the new contact
  $name = $_POST['name'];
  $phone = $_POST['phone'];
  //save the new contact
  saveContact($name, $phone);
  $output['msg'] = $name." has been saved successfully";
  //reload the contacts
  $output['contacts'] = getContacts();
  echo json_encode($output);
}else if($action == "delete"){
  //collect the id we wish to delete
  $id = $_POST['id'];
  //delete the contact with that id
  deleteContact($id);
  $output['msg'] = "one entry has been deleted successfully";
  //reload the contacts
  $output['contacts'] = getContacts();
  echo json_encode($output);
}else{
  $output['contacts'] = getContacts();
  $output['msg'] = "list of all contacts";
  echo json_encode($output);
}

The above code is the heart of addressbook.php.
It gets the action from the POST variables sent via the AJAX call in the addressbook.js file, interprets the action, and executes the appropriate function: add, delete, or neither, in which case it just returns the list of contacts. The json_encode() function is used to encode the data into JavaScript Object Notation (JSON) format, which is easily interpreted by the JavaScript code.
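The excerpt ends before showing addressbook.js itself, but based on the PHP above, the AJAX call it describes would look something like the following. This is a hypothetical sketch; the rendering details are my own, though the field IDs and POST parameters match the HTML and PHP shown earlier:

//send the new contact to addressbook.php and refresh the list
$.post('addressbook.php', {
  action: 'add',
  name: $('#names').val(),
  phone: $('#phone').val()
}, function(response){
  //show the server's message in the notice box
  $('#notice').text(response.msg);
  //rebuild the contact list from the returned contacts array
  $('#contacts-lists').empty();
  $.each(response.contacts, function(i, contact){
    $('<li/>')
      .text(contact.names + ' [' + contact.phone + '] - ')
      .append($('<a href="#delete-id" class="deletebtn">delete contact</a>').attr('contactid', contact.id))
      .appendTo('#contacts-lists');
  });
}, 'json');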
Writing XML data to the File System with SSIS
Packt
29 Dec 2009
5 min read
Integrating data into applications or reports is one of the most important, expensive, and exacting activities in building enterprise data warehousing applications. SQL Server Integration Services, which first appeared in MS SQL Server 2005 and continued into MS SQL Server 2008, provides a one-stop solution to the ETL process. The ETL process consists of extracting data from a data source, transforming the data so that it fits cleanly into the destination, and then loading the transformed data into the destination. Enterprise data can be of very different kinds, ranging from flat files to data stored in relational databases. Recently, storing data in XML data sources has become common, as exchanging data in XML format has many advantages.

Creating a stored procedure that retrieves XML
In the present example it is assumed that you have a copy of the Northwind database. You could use any other database. We will be creating a stored procedure that selects a number of columns from a table in the database using the FOR XML clause. The SELECT query returns an XML fragment from the database. The next listing shows the stored procedure.

CREATE PROCEDURE [dbo].[tst]
AS
SELECT FirstName, LastName, City FROM Employees
FOR XML RAW

The result of executing this stored procedure (exec tst) in SQL Server Management Studio is shown in the next listing.

<row FirstName="Nancy" LastName="Davolio" City="Seattle"/>
<row FirstName="Andrew" LastName="Fuller" City="Tacoma"/>
<row FirstName="Janet" LastName="Leverling" City="Kirkland"/>
<row FirstName="Margaret" LastName="Peacock" City="Redmond"/>
<row FirstName="Steven" LastName="Buchanan" City="London"/>
<row FirstName="Michael" LastName="Suyama" City="London"/>
<row FirstName="Robert" LastName="King" City="London"/>
<row FirstName="Laura" LastName="Callahan" City="Seattle"/>
<row FirstName="Anne" LastName="Dodsworth" City="London"/>

Creating a package in BIDS or Visual Studio 2008
You require SQL Server 2008 installed to create a package. In either of these programs, File | New | Projects... brings up the New Project window, where you can choose to create a business intelligence project with an Integration Services Project template. You create a project by providing a name for it; herein, it was named XMLquery. After providing a name and closing the New Project window, the XMLquery project will be created with a default package with the file name Package.dtsx. The package can be renamed by right-clicking the file and clicking OK in the window that pops up regarding the change you are making. Herein, the package was renamed XmlToFile.dtsx. The following figure shows the project created by the program. When the project is created, the package designer surface will be open with a tabbed page where you can configure Control Flow tasks, Data Flow tasks, and event handlers. You can also look at the Package Explorer to review the contents of the package. The reader may benefit by reviewing my book, Beginners Guide to SQL Server Integration Services, on this site.

Adding and configuring an Execute SQL Task
Using an Execute SQL Task component, the stored procedure on SQL Server 2008 will be executed. The result of this will be stored in a package variable, which will then be retrieved using a Script Task. In this section you will be configuring the Execute SQL Task. Drag and drop an Execute SQL Task from the Control Flow items in the Toolbox onto the Control Flow tabbed page of the package designer.
Double-click the Execute SQL Task component in the package designer to display the Execute SQL Task Editor, as shown. It is good practice to provide a description for the task. Herein it is "Retrieving XML from the SQL Server", as shown. The result set can be any of those shown in the next figure. Since the information retrieved by running the stored procedure is XML, XML is the correct choice. The stored procedure is on SQL Server 2008, and therefore a connection needs to be established. Leave the connection type as OLE DB and click on an empty area along the line item, Connection. This brings up the Configure OLE DB Connection Manager window, where you can select an existing connection or create a new connection. Hit the New... button to bring up the Connection Manager window, as shown. The window comes up with just the right provider [Native OLE DB\SQL Server Native Client 10.0]. You can choose the server by browsing with the drop-down handler, as shown. In the present case, Windows Authentication is used with the current user as the database administrator. If this information is correct, you can browse the database objects to choose the correct database, which hosts the stored procedure, as shown. You may also test the connection with the Test Connection button. You must close the Connection Manager window, which will bring you back to the Configure OLE DB Connection Manager window; it now displays the connection you just made. To proceed further you need to close this window as well. This will bring the connection information into the Execute SQL Task Editor. The type of input is chosen to be a direct input (the others are file and variable). The query to be executed is the stored procedure tst, described earlier in the tutorial. BypassPrepare is set to false. The General page of the Execute SQL Task Editor is as shown here.
Working with forms using Ext JS 4
Packt
31 Aug 2012
25 min read
Ext JS 4 is Sencha's latest JavaScript framework for developing cross-platform web applications. Built upon web standards, Ext JS provides a comprehensive library of user interface widgets and data manipulation classes to turbo-charge your application's development. In this article, written by Stuart Ashworth and Andrew Duncan, the authors of Ext JS 4 Web Application Development Cookbook, we will cover:

Constructing a complex form layout
Populating your form with data
Submitting your form's data
Validating form fields with VTypes
Creating custom VTypes
Uploading files to the server
Handling exceptions and callbacks

This article introduces forms in Ext JS 4. We begin by creating a support ticket form in the first recipe. To get the most out of this article you should be aware that this form is used by a number of recipes throughout the article. Instead of focusing on how to configure specific fields, we demonstrate more generic tasks for working with forms. Specifically, these are populating forms, submitting forms, performing client-side validation, and handling callbacks/exceptions.

Constructing a complex form layout
In the previous releases of Ext JS, complicated form layouts were quite difficult to achieve. This was due to the nature of the FormLayout, which was required to display labels and error messages correctly, and how it had to be combined with other nested layouts. Ext JS 4 takes a different approach and utilizes the Ext.form.Labelable mixin, which allows form fields to be decorated with labels and error messages without requiring a specific layout to be applied to the container. This means we can combine all of the layout types the framework has to offer without having to overnest components in order to satisfy the form field's layout requirements. We will describe how to create a complex form using multiple nested layouts and demonstrate how easy it is to get a form to look exactly as we want. Our example will take the structure of a Support Ticket Request form and, once we are finished, it will look like the following screenshot:

How to do it...
We start this recipe by creating a simple form panel that will contain all of the layout containers and their fields:

var formPanel = Ext.create('Ext.form.Panel', {
  title: 'Support Ticket Request',
  width: 650,
  height: 500,
  renderTo: Ext.getBody(),
  style: 'margin: 50px',
  items: []
});

Now, we will create our first set of fields—the FirstName and LastName fields. These will be wrapped in an Ext.container.Container component, which is given an hbox layout so our fields appear next to each other on one line:

var formPanel = Ext.create('Ext.form.Panel', {
  title: 'Support Ticket Request',
  width: 650,
  height: 500,
  renderTo: Ext.getBody(),
  style: 'margin: 50px',
  items: [{
    xtype: 'container',
    layout: 'hbox',
    items: [{
      xtype: 'textfield',
      fieldLabel: 'First Name',
      name: 'FirstName',
      labelAlign: 'top',
      cls: 'field-margin',
      flex: 1
    }, {
      xtype: 'textfield',
      fieldLabel: 'Last Name',
      name: 'LastName',
      labelAlign: 'top',
      cls: 'field-margin',
      flex: 1
    }]
  }]
});

We have added a CSS class (field-margin) to each field, to provide some spacing between them. We can now add this style inside <style> tags in the head of our document:

<style type="text/css">
  .field-margin {
    margin: 10px;
  }
</style>

Next, we create a container with a column layout to position our e-mail address and telephone number fields.
We nest our telephone number fields in an Ext.form.FieldContainer class, which we will discuss later in the recipe:

items: [
  ...
  {
    xtype: 'container',
    layout: 'column',
    items: [{
      xtype: 'textfield',
      fieldLabel: 'Email Address',
      name: 'EmailAddress',
      labelAlign: 'top',
      cls: 'field-margin',
      columnWidth: 0.6
    }, {
      xtype: 'fieldcontainer',
      layout: 'hbox',
      fieldLabel: 'Tel. Number',
      labelAlign: 'top',
      cls: 'field-margin',
      columnWidth: 0.4,
      items: [{
        xtype: 'textfield',
        name: 'TelNumberCode',
        style: 'margin-right: 5px;',
        flex: 2
      }, {
        xtype: 'textfield',
        name: 'TelNumber',
        flex: 4
      }]
    }]
  }
  ...
]

The text area and checkbox group are created and laid out in a similar way to the previous sets, by using an hbox layout:

items: [
  ...
  {
    xtype: 'container',
    layout: 'hbox',
    items: [{
      xtype: 'textarea',
      fieldLabel: 'Request Details',
      name: 'RequestDetails',
      labelAlign: 'top',
      cls: 'field-margin',
      height: 250,
      flex: 2
    }, {
      xtype: 'checkboxgroup',
      name: 'RequestType',
      fieldLabel: 'Request Type',
      labelAlign: 'top',
      columns: 1,
      cls: 'field-margin',
      vertical: true,
      items: [{
        boxLabel: 'Type 1',
        name: 'type1',
        inputValue: '1'
      }, {
        boxLabel: 'Type 2',
        name: 'type2',
        inputValue: '2'
      }, {
        boxLabel: 'Type 3',
        name: 'type3',
        inputValue: '3'
      }, {
        boxLabel: 'Type 4',
        name: 'type4',
        inputValue: '4'
      }, {
        boxLabel: 'Type 5',
        name: 'type5',
        inputValue: '5'
      }, {
        boxLabel: 'Type 6',
        name: 'type6',
        inputValue: '6'
      }],
      flex: 1
    }]
  }
  ...
]

Finally, we add the last field, which is a file upload field, to allow users to provide attachments:

items: [
  ...
  {
    xtype: 'filefield',
    cls: 'field-margin',
    fieldLabel: 'Attachment',
    width: 300
  }
  ...
]

How it works...
All Ext JS form fields inherit from the base Ext.Component class and so can be included in all of the framework's layouts. For this reason, we can include form fields as children of containers with layouts (such as hbox and column layouts) and their position and size will be calculated accordingly.

Upgrade Tip: Ext JS 4 does not have a form layout, meaning a level of nesting can be removed and the form fields' labels will still be displayed correctly by just specifying the fieldLabel config.

The Ext.form.FieldContainer class used in step 4 is a special component that allows us to combine multiple fields into a single container, which also implements the Ext.form.Labelable mixin. This allows the container itself to display its own label that applies to all of its child fields while also giving us the opportunity to configure a layout for its child components.

Populating your form with data
After creating our beautifully crafted and user-friendly form we will inevitably need to populate it with some data so users can edit it. Ext JS makes this easy, and this recipe will demonstrate four simple ways of achieving it. We will start by explaining how to populate the form on a field-by-field basis, then move on to ways of populating the entire form at once. We will also cover populating it from a simple object, a Model instance, and a remote server call.

Getting ready
We will be using the form created in this article's first recipe as our base for this section, and many of the subsequent recipes in this article, so please look back if you are not familiar with it. All the code we will write in this recipe should be placed under the definition of this form panel. You will also require a working web server for the There's More example, which loads data from an external file.

How to do it...
We'll demonstrate how to populate an entire form's fields in bulk and also how to populate them individually.

Populating individual fields
We will start by grabbing a reference to the first name field using the items property's get method. The items property contains an instance of Ext.util.MixedCollection, which holds a reference to each of the container's child components. We use its get method to retrieve the component at the specified index:

var firstNameField = formPanel.items.get(0).items.get(0);

Next, we use the setValue method of the field to populate it:

firstNameField.setValue('Joe');

Populating the entire form
To populate the entire form, we must create a data object containing a value for each field. The property names of this object will be mapped to the corresponding form field by the field's name property. For example, the FirstName property of our requestData object will be mapped to a form field with a name property value of FirstName:

var requestData = {
  FirstName: 'Joe',
  LastName: 'Bloggs',
  EmailAddress: 'info@swarmonline.com',
  TelNumberCode: '0777',
  TelNumber: '7777777',
  RequestDetails: 'This is some Request Detail body text',
  RequestType: {
    type1: true,
    type2: false,
    type3: false,
    type4: true,
    type5: true,
    type6: false
  }
};

We then call the setValues method of the form panel's Ext.form.Basic instance, accessed through the getForm method, passing it our requestData variable:

formPanel.getForm().setValues(requestData);

How it works...
Each field contains a method called setValue, which updates the field's value with the value that is passed in. We can see this in action in the first part of the How to do it section. A form panel contains an internal instance of the Ext.form.Basic class (accessible through the getForm method), which provides all of the validation, submission, loading, and general field management that is required by a form. This class contains a setValues method, which can be used to populate all of the fields that are managed by the basic form class. This method works by simply iterating through all of the fields it contains and calling their respective setValue methods. This method accepts either a simple data object, as in our example, whose properties are mapped to fields based on the field's name property, or, alternatively, an array of objects containing id and value properties, with the id mapping to the field's name property. The following code snippet demonstrates this usage:

formPanel.getForm().setValues([{id: 'FirstName', value: 'Joe'}]);

There's more...
Further to the two previously discussed methods there are two others that we will demonstrate here.

Populating a form from a Model instance
Being able to populate a form directly from a Model instance is extremely useful and is very simple to achieve. This allows us to easily translate our data structures into a form without having to manually map it to each field. We initially define a Model and create an instance of it (using the data object we used earlier in the recipe):

Ext.define('Request', {
  extend: 'Ext.data.Model',
  fields: [
    'FirstName',
    'LastName',
    'EmailAddress',
    'TelNumberCode',
    'TelNumber',
    'RequestDetails',
    'RequestType'
  ]
});

var requestModel = Ext.create('Request', requestData);

Following this we call the loadRecord method of the Ext.form.Basic class and supply the Model instance as its only parameter.
This will populate the form, mapping each Model field to its corresponding form field based on the name:

formPanel.getForm().loadRecord(requestModel);

Populating a form directly from the server
It is also possible to load a form's data directly from the server through an AJAX call. Firstly, we define a JSON file, containing our request data, which will be loaded by the form:

{
  "success": true,
  "data": {
    "FirstName": "Joe",
    "LastName": "Bloggs",
    "EmailAddress": "info@swarmonline.com",
    "TelNumberCode": "0777",
    "TelNumber": "7777777",
    "RequestDetails": "This is some Request Detail body text",
    "RequestType": {
      "type1": true,
      "type2": false,
      "type3": false,
      "type4": true,
      "type5": true,
      "type6": false
    }
  }
}

Notice the format of the data: we must provide a success property to indicate that the load was successful and put our form data inside a data property. Next we use the basic form's load method and provide it with a configuration object containing a url property pointing to our JSON file:

formPanel.getForm().load({
  url: 'requestDetails.json'
});

This method automatically performs an AJAX request to the specified URL and populates the form's fields with the data that was retrieved. This is all that is required to successfully load the JSON data into the form. The basic form's load method accepts similar configuration options to a regular AJAX request.

Submitting your form's data
Having taken care of populating the form, it's now time to look at sending newly added or edited data back to the server. As with form population, you'll learn just how easy this is with the Ext JS framework. There are two parts to this example. Firstly, we will submit data using the options of the basic form that wraps the form panel. The second example will demonstrate binding the form to a Model and saving our data.

Getting ready
We will be using the form created in the first recipe as our base for this section, so refer to the Constructing a complex form layout recipe if you are not familiar with it.

How to do it...
Add a function to submit the form:

var submitForm = function(){
  formPanel.getForm().submit({
    url: 'submit.php'
  });
};

Add a button to the form that calls the submitForm function:

var formPanel = Ext.create('Ext.form.Panel', {
  ...
  buttons: [{
    text: 'Submit Form',
    handler: submitForm
  }],
  items: [
    ...
  ]
});

How it works...
As we learned in the previous recipe, a form panel contains an internal instance of the Ext.form.Basic class (accessible through the getForm method). The submit method in Ext.form.Basic is a shortcut to the Ext.form.action.Submit action. This class handles the form submission for us. All we are required to do is provide it with a URL and it will handle the rest. It's also possible to define the URL in the configuration for the Ext.form.Panel. Before submitting, it must first gather the data from the form. The Ext.form.Basic class contains a getValues method, which is used to gather the data values for each form field. It does this by iterating through all fields in the form, making a call to their respective getValue methods.

There's more...
The previous recipe demonstrated how to populate the form from a Model instance. Here we will take it a step further and use the same Model instance to submit the form as well.
Submitting a form from a Model instance
Extend the Model with a proxy and load the data into the form:

Ext.define('Request', {
  extend: 'Ext.data.Model',
  fields: ['FirstName', 'LastName', 'EmailAddress', 'TelNumberCode', 'TelNumber', 'RequestDetails', 'RequestType'],
  proxy: {
    type: 'ajax',
    api: {
      create: 'addTicketRequest.php',
      update: 'updateTicketRequest.php'
    },
    reader: {
      type: 'json'
    }
  }
});

var requestModel = Ext.create('Request', {
  FirstName: 'Joe',
  LastName: 'Bloggs',
  EmailAddress: 'info@swarmonline.com'
});

formPanel.getForm().loadRecord(requestModel);

Change the submitForm function to get the Model instance, update the record with the form data, and save the record to the server:

var submitForm = function(){
  var record = formPanel.getForm().getRecord();
  formPanel.getForm().updateRecord(record);
  record.save();
};

Validating form fields with VTypes
In addition to form fields' built-in validation (such as allowBlank and minLength), we can apply more advanced and more extensible validation by using VTypes. A VType (contained in the Ext.form.field.VTypes singleton) can be applied to a field and its validation logic will be executed as part of the field's periodic validation routine. A VType encapsulates a validation function, an error message (which will be displayed if the validation fails), and a regular expression mask to prevent any undesired characters from being entered into the field. This recipe will explain how to apply a VType to the e-mail address field in our example form, so that only properly formatted e-mail addresses are deemed valid and an error will be displayed if the value doesn't conform to this pattern.

How to do it...
We will start by defining our form and its fields. We will be using our example form that was created in the first recipe of this article as our base. Now that we have a form we can add the vtype configuration option to our e-mail address field:

{
  xtype: 'textfield',
  fieldLabel: 'Email Address',
  name: 'EmailAddress',
  labelAlign: 'top',
  cls: 'field-margin',
  columnWidth: 0.6,
  vtype: 'email'
}

That is all we have to do to add e-mail address validation to a field. We can see the results in the following screenshot, with an incorrectly formatted e-mail address on the left and a valid one on the right:

How it works...
When a field is validated it runs through various checks. When a VType is defined, the associated validation routine is executed and will flag the field as valid or invalid. As previously mentioned, each VType has an error message coupled with it, which is displayed if the field is found to be invalid, and a mask expression that prevents unwanted characters being entered. Unfortunately, only one VType can be applied to a field and so, if multiple checks are required, a custom hybrid may need to be created. See the next recipe for details on how to do this.

There's more...
Along with the e-mail VType, the framework provides three other VTypes that can be applied straight out of the box. These are:

alpha: this restricts the field to only alphabetic characters
alphanum: this VType allows only alphanumeric characters
url: this ensures that the value is a valid URL

Creating custom VTypes
We have seen in the previous recipe how to use VTypes to apply more advanced validation to our form's fields. The built-in VTypes provided by the framework are excellent, but we will often want to create custom implementations to apply more complex and domain-specific validation to a field.
We will walk through creating a custom VType to be applied to our telephone number field to ensure it is in the format that a telephone number should be. Although our telephone number field is split into two (the first field for the area code and the second for the rest of the number), for this example we will combine them so our VType is more comprehensive. For this example, we will be validating a very simple, strict telephone number format of "0777-777-7777".

How to do it...
We start by defining our VType's structure. This consists of a simple object literal with three properties: a function called telNumber and two strings called telNumberText (which will contain the error message text) and telNumberMask (which holds a regex to restrict the characters allowed to be entered into the field) respectively.

var telNumberVType = {
  telNumber: function(val, field){
    // function executed when field is validated
    // return true when field's value (val) is valid
    return true;
  },
  telNumberText: 'Your Telephone Number must only include numbers and hyphens.',
  telNumberMask: /[\d-]/
};

Next we define the regular expression that we will use to validate the field's value. We add this as a variable to the telNumber function:

telNumber: function(val, field){
  var telNumberRegex = /^\d{4}-\d{3}-\d{4}$/;
  return true;
}

Once this has been done we can add the logic to this telNumber function that will decide whether the field's current value is valid. This is a simple call to the regular expression's test method, which returns true if the value matches or false if it doesn't:

telNumber: function(val, field){
  var telNumberRegex = /^\d{4}-\d{3}-\d{4}$/;
  return telNumberRegex.test(val);
}

The final step to defining our new VType is to apply it to the Ext.form.field.VTypes singleton, which is where all of the VTypes are located and where our field's validation routine will go to get its definition:

Ext.apply(Ext.form.field.VTypes, telNumberVType);

Now that our VType has been defined and registered with the framework, we can apply it to the field by using the vtype configuration option. The result can be seen in the following screenshot:

{
  xtype: 'textfield',
  name: 'TelNumber',
  flex: 4,
  vtype: 'telNumber'
}

How it works...
A VType consists of three parts:

The validity checking function
The validation error text
A keystroke filtering mask (optional)

VTypes rely heavily on naming conventions so they can be executed dynamically within a field's validation routine. This means that each of these three parts must follow the standard convention.
How it works... A VType consists of three parts: The validity checking function The validation error text A keystroke filtering mask (optional) VTypes rely heavily on naming conventions so they can be executed dynamically within a field's validation routine. This means that each of these three parts must follow the standard convention. The validation function's name will become the name used to reference the VType and form the prefix for the other two properties. In our example, this name was telNumber, which can be seen referencing the VType in Step 5. The error text property is then named with the VType's name prefixing the word Text (that is, telNumberText). Similarly, the filtering mask is the VType's name followed by the word Mask (that is, telNumberMask). The final step to create our VType is to merge it into the Ext.form.field.VTypes singleton, allowing it to be accessed dynamically during validation. The Ext.apply function does this by merging the VType's three properties into the Ext.form.field.VTypes class instance. When the field is validated, and a vtype is defined, the VType's validation function is executed with the current value of the field and a reference to the field itself being passed in. If the function returns true then all is well and the routine moves on. However, if it evaluates to false, the VType's Text property is retrieved and pushed onto the errors array. This message is then displayed to the user, as in the screenshot shown earlier. This process can be seen in the code snippet as follows, taken directly from the framework: if (vtype) { if(!vtypes[vtype](value, me)){ errors.push(me.vtypeText || vtypes[vtype +'Text']); } } There's more... It is often necessary to validate fields based on the values of other fields as well as their own. We will demonstrate this by creating a simple VType for validating that a confirm password field's value matches the value entered in an initial password field. We start by creating our VType structure as we did before: Ext.apply(Ext.form.field.VTypes, { password: function(val, field){ return false; }, passwordText: 'Your Passwords do not match.' }); We then complete the validation logic. We use the field's up method to get a reference to its parent form. Using that reference, we get the values for all of the form's fields by using the getValues method: password: function(val, field){ var parentForm = field.up('form'); // get parent form // get the form's values var formValues = parentForm.getValues(); return false; } The next step is to get the first password field's value. We do this by using an extra property (firstPasswordFieldName) that we will specify when we add our VType to the confirm password field. This property will contain the name of the initial password field (in this example Password). We can then compare the confirm password's value with the retrieved value and return the outcome: password: function(val, field){ var parentForm = field.up('form'); // get parent form // get the form's values var formValues = parentForm.getValues(); // get the value from the configured 'First Password' field var firstPasswordValue = formValues[field.firstPasswordFieldName]; // return true if they match return val === firstPasswordValue; } The VType is added to the confirm password field in exactly the same way as before but we must include the extra firstPasswordFieldName option to link the fields together: { xtype: 'textfield', fieldLabel: 'Confirm Password', name: 'ConfirmPassword', labelAlign: 'top', cls: 'field-margin', flex: 1, vtype: 'password', firstPasswordFieldName: 'Password' }
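For completeness, the initial password field only needs a name that matches the configured firstPasswordFieldName; a minimal sketch of the pair (styling options omitted, inputType masks the typed characters):

{
    xtype: 'textfield',
    fieldLabel: 'Password',
    name: 'Password', // referenced by firstPasswordFieldName below
    inputType: 'password'
}, {
    xtype: 'textfield',
    fieldLabel: 'Confirm Password',
    name: 'ConfirmPassword',
    inputType: 'password',
    vtype: 'password',
    firstPasswordFieldName: 'Password'
}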
Uploading files to the server Uploading files is very straightforward with Ext JS 4. This recipe will demonstrate how to create a basic file upload form and send the data to your server. Getting ready This recipe requires the use of a web server for accepting the uploaded file. A PHP file is provided to handle the file upload; however, you can integrate this Ext JS code with any server-side technology you wish. How to do it... Create a simple form panel. Ext.create('Ext.form.Panel', { title: 'Document Upload', width: 400, bodyPadding: 10, renderTo: Ext.getBody(), style: 'margin: 50px', items: [], buttons: [] }); In the panel's items collection add a file field: Ext.create('Ext.form.Panel', { ... items: [{ xtype: 'filefield', name: 'document', fieldLabel: 'Document', msgTarget: 'side', allowBlank: false, anchor: '100%' }], buttons: [] }); Add a button to the panel's buttons collection to handle the form submission: Ext.create('Ext.form.Panel', { ... buttons: [{ text: 'Upload Document', handler: function(){ var form = this.up('form').getForm(); if (form.isValid()) { form.submit({ url: 'upload.php', waitMsg: 'Uploading...' }); } } }] }); How it works... Your server-side code should handle these form submissions in the same way it would handle a regular HTML file upload form. You should not have to do anything special to make your server-side code compatible with Ext JS. The example works by defining an Ext.form.field.File (xtype: 'filefield'), which takes care of the styling and the button for selecting local files. The form submission handler works the same way as any other form submission; however, behind the scenes the framework tweaks how the form is submitted to the server. A form with a file upload field is not submitted using an XMLHttpRequest object—instead the framework creates and submits a temporary hidden <form> element whose target is referenced to a temporary hidden <iframe>. The request header's Content-Type is set to multipart/form-data. When the upload is finished and the server has responded, the temporary form and <iframe> are removed. A fake XMLHttpRequest object is then created containing a responseText property (populated from the contents of the <iframe>) to ensure that event handlers and callbacks work as if we were submitting the form using AJAX. If your server is responding to the client with JSON, you must ensure that the response Content-Type header is text/html. There's more... It's possible to customize your Ext.form.field.File. Some useful config options are highlighted as follows: buttonOnly: Boolean Setting buttonOnly: true removes the visible text field from the file field. buttonText: String If you wish to change the text in the button from the default of "Browse…" it's possible to do so by setting the buttonText config option. buttonConfig: Object Changing the entire configuration of the button is done by defining a standard Ext.button.Button config object in the buttonConfig option. Anything defined in the buttonText config option will be ignored if you use this.
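Putting those options together, a customized file field might look like the following sketch (the values are illustrative, and the iconCls is a hypothetical CSS class you would define yourself):

{
    xtype: 'filefield',
    name: 'document',
    buttonOnly: true, // hide the read-only text field
    buttonConfig: {
        text: 'Select a document...', // replaces any buttonText setting
        iconCls: 'upload-icon' // hypothetical icon class
    }
}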
Handling exceptions and callbacks This recipe demonstrates how to handle callbacks when loading and submitting forms. This is particularly useful for two reasons: You may wish to carry out further processing once the form has been submitted (for example, display a thank you message to the user) In the unfortunate event that the submission fails, it's good to be ready and inform the user something has gone wrong and perhaps perform extra processing The recipe shows you what to do in the following circumstances: The server responds informing you the submission was successful The server responds with an unusual status code (for example, 404, 500, and so on) The server responds informing you the submission was unsuccessful (for example, there was a problem processing the data) The form is unable to load data because the server has sent an empty data property The form is unable to submit data because the framework has deemed the values in the form to be invalid Getting ready The following recipe requires you to submit values to a server. An example submit.php file has been provided. However, please ensure you have a web server for serving this file. How to do it... Start by creating a simple form panel: var formPanel = Ext.create('Ext.form.Panel', { title: 'Form', width: 300, bodyPadding: 10, renderTo: Ext.getBody(), style: 'margin: 50px', items: [], buttons: [] }); Add a field to the form and set allowBlank to false: var formPanel = Ext.create('Ext.form.Panel', { ... items: [{ xtype: 'textfield', fieldLabel: 'Text field', name: 'field', allowBlank: false }], buttons: [] }); Add a button to handle the form's submission and add success and failure handlers to the submit method's only parameter: var formPanel = Ext.create('Ext.form.Panel', { ... buttons: [{ text: 'Submit', handler: function(){ formPanel.getForm().submit({ url: 'submit.php', success: function(form, action){ Ext.Msg.alert('Success', action.result.message); }, failure: function(form, action){ if (action.failureType === Ext.form.action.Action.CLIENT_INVALID) { Ext.Msg.alert('CLIENT_INVALID', 'Something has been missed. Please check and try again.'); } if (action.failureType === Ext.form.action.Action.CONNECT_FAILURE) { Ext.Msg.alert('CONNECT_FAILURE', 'Status: ' + action.response.status + ': ' + action.response.statusText); } if (action.failureType === Ext.form.action.Action.SERVER_INVALID) { Ext.Msg.alert('SERVER_INVALID', action.result.message); } } }); } }] }); When you run the code, watch for the different failure types or the success callback: CLIENT_INVALID is fired when there is no value in the text field. The success callback is fired when the server returns true in the success property. Switch the response in the submit.php file and watch for the SERVER_INVALID failureType. This is fired when the success property is set to false. Finally, edit url: 'submit.php' to url: 'unknown.php' and CONNECT_FAILURE will be fired. How it works... The Ext.form.action.Submit and Ext.form.action.Load classes both have a failure and success function. One of these two functions will be called depending on the outcome of the action. The success callback is called when the action is successful and the success property is true. The failure callback, on the other hand, can be extended to look for specific reasons why the failure occurred (for example, there was an internal server error, the form did not pass client-side validation, and so on). This is done by looking at the failureType property of the action parameter. Ext.form.action.Action has four failureType static properties: CLIENT_INVALID, SERVER_INVALID, CONNECT_FAILURE, and LOAD_FAILURE, which can be used to compare with what has been returned by the server. There's more... A number of additional options are described as follows: Handling form population failures The Ext.form.action.Action.LOAD_FAILURE static property can be used in the failure callback when loading data into your form. The LOAD_FAILURE is returned as the action parameter's failureType when the success property is false or the data property contains no fields. The following code shows how this failure type can be caught inside the failure callback function: failure: function(form, action){ ... if(action.failureType == Ext.form.action.Action.LOAD_FAILURE){ Ext.Msg.alert('LOAD_FAILURE', action.result.message); } ... } An alternative to CLIENT_INVALID The isValid method in Ext.form.Basic is an alternative method for handling client-side validation before the form is submitted. isValid will return true when client-side validation passes: handler: function(){ if (formPanel.getForm().isValid()) { formPanel.getForm().submit({ url: 'submit.php' }); } }
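Tying these pieces together, loading data into the form uses the same callback pattern as submission; a minimal sketch (the load.php endpoint and its success/data response shape are assumptions for illustration):

formPanel.getForm().load({
    url: 'load.php', // hypothetical endpoint returning {success, data}
    success: function(form, action){
        // the form's fields are now populated from action.result.data
    },
    failure: function(form, action){
        if (action.failureType === Ext.form.action.Action.LOAD_FAILURE) {
            Ext.Msg.alert('LOAD_FAILURE', action.result.message);
        }
    }
});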
Building a To-do List with Ajax

Packt
08 Nov 2013
8 min read
(For more resources related to this topic, see here.) Creating and migrating our to-do list's database As you know, migrations are very helpful to control development steps. We'll use migrations in this article. To create our first migration, type the following command: php artisan migrate:make create_todos_table --table=todos --create When you run this command, Artisan will generate a migration for creating a database table named todos. Now we should edit the migration file to add the necessary database table columns. When you open the migrations folder in app/database/ with a file manager, you will see the migration file under it. Let's open and edit the file as follows: <?php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; class CreateTodosTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('todos', function(Blueprint $table){ $table->increments("id"); $table->string("title", 255); $table->enum('status', array('0', '1'))->default('0'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop("todos"); } } To build a simple TO-DO list, we need five columns: The id column will store ID numbers of to-do tasks The title column will store a to-do task's title The status column will store statuses of the tasks The created_at and updated_at columns will store the created and updated dates of tasks If you write $table->timestamps() in the migration file, Laravel's migration class automatically creates the created_at and updated_at columns. As you know, to apply migrations, we should run the following command: php artisan migrate After the command is run, if you check your database, you will see that our todos table and columns have been created. Now we need to write our model. Creating a todos model To create a model, you should open the app/models/ directory with your file manager. Create a file named Todo.php under the directory and write the following code: <?php class Todo extends Eloquent { protected $table = 'todos'; } Let's examine the Todo.php file. As you see, our Todo class extends Eloquent, which is the ORM (Object Relational Mapper) database class of Laravel. The protected $table = 'todos'; line tells Eloquent our model's table name. If we don't set the table variable, Eloquent uses the plural, lowercase version of the model name as the table name, so this isn't technically required here. Now, our application needs a template file, so let's create it. Creating the template Laravel uses a template engine called Blade for static and application template files. Laravel loads the template files from the app/views/ directory, so we need to create our first template under this directory. Create a file with the name index.blade.php.
The file contains the following code: <html> <head> <title>To-do List Application</title> <link rel="stylesheet" href="assets/css/style.css"> <!--[if lt IE 9]><script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script><![endif]--> </head> <body> <div class="container"> <section id="data_section" class="todo"> <ul class="todo-controls"> <li><img src="/assets/img/add.png" width="14px" onClick="show_form('add_task');" /></li> </ul> <ul id="task_list" class="todo-list"> @foreach($todos as $todo) @if($todo->status) <li id="{{$todo->id}}" class="done"> <a href="#" class="toggle"></a> <span id="span_{{$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}', '{{$todo->title}}');" class="icon-edit">Edit</a></li> @else <li id="{{$todo->id}}"><a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a> <span id="span_{{$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a></li> @endif @endforeach </ul> </section> <section id="form_section"> <form id="add_task" class="todo" style="display:none"> <input id="task_title" type="text" name="title" placeholder="Enter a task name" value=""/> <button name="submit">Add Task</button> </form> <form id="edit_task" class="todo" style="display:none"> <input id="edit_task_id" type="hidden" value="" /> <input id="edit_task_title" type="text" name="title" value="" /> <button name="submit">Edit Task</button> </form> </section> </div> <script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script> <script src="assets/js/todo.js" type="text/javascript"></script> </body> </html> The preceding code may be difficult to understand if you're writing a Blade template for the first time, so we'll examine it. You see a foreach loop in the file. This statement loops over our todo records. We will provide you with more knowledge about it when we are creating our controller in this article. The if and else statements separate finished and waiting tasks, so we can style them differently. We need one more template file for appending new records to the task list on the fly. Create a file with the name ajaxData.blade.php under the app/views/ folder. The file contains the following code: @foreach($todos as $todo) <li id="{{$todo->id}}"><a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a> <span id="span_{{$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a></li> @endforeach Also, you see the /assets/ directory in the source path of static files. When you look at the app/views directory, there is no directory named assets. Laravel separates the system and public files. Publicly accessible files stay under your public folder in the project root, so you should create a directory under your public folder for asset files. We recommend working with these types of organized folders for developing tidy and easy-to-read code. Finally, you see that we are calling jQuery from its main website. We also recommend this way for getting the latest, stable jQuery in your application. You can style your application as you wish, hence we'll not examine styling code here.
We are putting our style.css files under /public/assets/css/. To perform the Ajax requests, we need some JavaScript. This code posts our add_task and edit_task forms and updates the task list when tasks are completed. Let's create a JavaScript file with the name todo.js in /public/assets/js/. The file contains the following code: function task_done(id){ $.get("/done/"+id, function(data) { if(data=="OK"){ $("#"+id).addClass("done"); } }); } function delete_task(id){ $.get("/delete/"+id, function(data) { if(data=="OK"){ var target = $("#"+id); target.hide('slow', function(){ target.remove(); }); } }); } function show_form(form_id){ $("form").hide(); $('#'+form_id).show("slow"); } function edit_task(id,title){ $("#edit_task_id").val(id); $("#edit_task_title").val(title); show_form('edit_task'); } $('#add_task').submit(function(event) { /* stop form from submitting normally */ event.preventDefault(); var title = $('#task_title').val(); if(title){ //ajax post the form $.post("/add", {title: title}).done(function(data) { $('#add_task').hide("slow"); $("#task_list").append(data); }); } else{ alert("Please give a title to task"); } }); $('#edit_task').submit(function(event) { /* stop form from submitting normally */ event.preventDefault(); var task_id = $('#edit_task_id').val(); var title = $('#edit_task_title').val(); var current_title = $("#span_"+task_id).text(); var new_title = current_title.replace(current_title, title); if(title){ //ajax post the form $.post("/update/"+task_id, {title: title}).done(function(data) { $('#edit_task').hide("slow"); $("#span_"+task_id).text(new_title); }); } else{ alert("Please give a title to task"); } }); Let's examine the JavaScript file.
Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features like real menu bars or desktop notifications. A big advantage of having a Node/io.js runtime is being able to make use of all the modules that are available to Node developers. We can categorize the modules we can use into three different types. Internal modules Node comes with a solid set of internal modules like fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well. Therefore you won't find too much functionality in Node core. The following modules are shipped with Node: assert: used for writing unit tests buffer: raw memory allocation used for dealing with binary data child_process: spawn and use child processes cluster: take advantage of multi-core systems crypto: cryptographic functions dgram: use datagram sockets dns: perform DNS lookups domain: handle multiple different IO operations as a single group events: provides the EventEmitter fs: operations on the file system http: perform http queries and create http servers https: perform https queries and create https servers net: asynchronous network wrapper os: basic operating-system related utility functions path: handle and transform file paths punycode: deal with punycode domain names querystring: deal with query strings stream: abstract interface implemented by various objects in Node timers: setTimeout, setInterval etc. tls: encrypted stream communication url: URL resolution and parsing util: various utility functions vm: sandbox to run Node code in zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw These are documented in the official Node API documentation and can all be used within NW.js. Please take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto: var crypt = require('crypto'); The following example shows how we would read a file and use its contents using Node's modules: var fs = require('fs'); fs.readFile(__dirname + '/file.txt', function (error, contents) { if (error) return console.error(error); console.log(contents); }); 3rd party JavaScript modules Soon after Node itself was started, Isaac Schlueter, who was a friend of creator Ryan Dahl, started working on a package manager for Node itself. As Node's popularity reached new highs, a lot of packages got added to the npm registry and it soon became the fastest growing package registry. At the time of this writing there are over 169,000 packages on the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications. Your application's dependencies are defined in your package.json file in the dependencies (or devDependencies) section: {   "name": "my-cool-application",   "version": "1.0.0",   "dependencies": {     "lodash": "^3.1.2"   },   "devDependencies": {     "uglify-js": "^2.4.3"   } } In the dependencies field you find all the modules that are required to run your application, while in the devDependencies field only the modules required while developing the application are found. Installing a module is fairly easy and the best way to do this is with the npm install command: npm install lodash --save The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency should also directly be written into your package.json file. You can also define a specific version to download by using the following notation: npm install lodash@1.* or even npm install lodash@1.1
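Once a module like lodash is installed, it is required just like an internal module; a quick sketch (the chunk call and its output are only illustrative):

// resolved from the nearest node_modules/ folder
var _ = require('lodash');
console.log(_.chunk([1, 2, 3, 4], 2)); // [[1, 2], [3, 4]]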
How does Node's require() work? You need to deal with two different contexts in NW.js and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, this module runs in the Node context. That means you have the same globals as you would have in a pure Node script but you can't access the globals from the browser, e.g. document or window. If you write JavaScript code inside of a <script> tag in your HTML, or when you include a script inside your HTML using <script src="">, then this code runs in the webkit context. There you have access to all browser globals. In the webkit context The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and directly implemented in Node core. To offer the same smooth experience you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, e.g. directly from an inline script in your index.html file, you need to specify the path directly from the root of your project. Let's assume the following folder structure: - app/   - app.js   - foo.js   - bar.js   - index.html And you want to include the app/app.js file directly in your index.html. You need to include it like this: <script type="text/javascript">   var app = require('./app/app.js'); </script> If you need to use a module from npm then you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located. In the Node context In Node, when you use relative paths, require() will always try to locate the module relative to the file you are requiring it from. If we take the example from above then we could require the foo.js module from app.js like this: var foo = require('./foo'); About the Author Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.
MVVM and Data Binding

Packt
28 Dec 2016
9 min read
In this article by Steven F. Daniel, author of the book Mastering Xamarin UI Development, you will learn how to build stunning, maintainable, cross-platform mobile application user interfaces with the power of Xamarin. In this article, we will cover the following topics: Understanding the MVVM pattern architecture Implementing the MVVM ViewModels within the app (For more resources related to this topic, see here.) Understanding the MVVM pattern architecture In this section we will be taking a look at the MVVM pattern architecture and the communication between the components that make up the architecture. The MVVM design pattern is designed to control the separation between the user interfaces (Views), the ViewModels that contain the actual binding to the Model, and the Models that contain the actual structure of the entities representing information stored in a database or from a web service. The following screenshot shows the communication between each of the components contained within the MVVM design pattern architecture: The MVVM design pattern is divided into three main areas, as you can see from the preceding screenshot, and these are explained in the following table: MVVM type Description Model The Model is basically a representation of business-related entities used by an application, and is responsible for fetching data from either a database or a web service, which is then de-serialized into the entities contained within the Model. View The View component of the MVVM model basically represents the actual screens that make up the application, along with any custom control components, and control elements, such as buttons, labels, and text fields. The Views contained within the MVVM pattern are platform-specific and are dependent on the platform APIs that are used to render the information that is contained within the application's user interface. ViewModel The ViewModel essentially controls and manipulates the Views by acting as their main data context. The ViewModel contains a series of properties that are bound to the information contained within each Model, and those properties are then bound to each of the Views to represent this information within the user interface. ViewModels can also contain command objects that provide action-based events that can trigger the execution of event methods that occur within the View, for example, when the user taps on a toolbar item or a button. ViewModels generally implement the INotifyPropertyChanged interface. Such a class fires a PropertyChanged event whenever one of its properties changes. The data binding mechanism in Xamarin.Forms attaches a handler to this PropertyChanged event so it can be notified when a property changes and keep the target updated with the new value. Now that you have a good understanding of the components that are contained within the MVVM design pattern architecture, we can begin to create our entity models and update our user interface files. In Xamarin.Forms, the term View is used to describe form controls, such as buttons and labels, and the term Page is used to describe the user interface or screen, whereas in MVVM, Views are used to describe the user interface, or screen. Implementing the MVVM ViewModels within your app In this section, we will begin by setting up the basic structure for our TrackMyWalks solution to include the folder that will be used to represent our ViewModels.
Let's take a look at how we can achieve this, by following these steps: Launch the Xamarin Studio application and ensure that the TrackMyWalks solution is loaded within the Xamarin Studio IDE. Next, create a new folder within the TrackMyWalks PCL project, called ViewModels, as shown in the following screenshot: Creating the WalkBaseViewModel for the TrackMyWalks app In this section, we will begin by creating a base MVVM ViewModel that will be used by each of our ViewModels when we create these, and then the Views (pages) will implement those ViewModels and use them as their BindingContext. Let's take a look at how we can achieve this, by following these steps: Create an empty class within the ViewModels folder, shown in the following screenshot: Next, choose the Empty Class option located within the General section, and enter in WalkBaseViewModel for the name of the new class file to create, as shown in the following screenshot: Next, click on the New button to allow the wizard to proceed and create the new empty class file, as shown in the preceding screenshot. Up until this point, all we have done is create our WalkBaseViewModel class file. This abstract class will act as the base ViewModel class that will contain the basic functionality that each of our ViewModels will inherit from. As we start to build the base class, you will see that it contains a couple of members and that it implements the INotifyPropertyChanged interface. As we progress through this article, we will build on this class, which will be used by the TrackMyWalks application. To proceed with creating the base ViewModel class, perform the following step: Ensure that the WalkBaseViewModel.cs file is displayed within the code editor, and enter in the following code snippet: // // WalkBaseViewModel.cs // TrackMyWalks Base ViewModel // // Created by Steven F. Daniel on 22/08/2016. // Copyright © 2016 GENIESOFT STUDIOS. All rights reserved. // using System.ComponentModel; using System.Runtime.CompilerServices; namespace TrackMyWalks.ViewModels { public abstract class WalkBaseViewModel : INotifyPropertyChanged { protected WalkBaseViewModel() { } public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null) { var handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } } } In the preceding code snippet, we begin by creating a new abstract class for our WalkBaseViewModel that implements the INotifyPropertyChanged interface, which allows the View or page to be notified whenever properties contained within the ViewModel have changed. Next, we declare a PropertyChanged event of type PropertyChangedEventHandler that will be used to indicate whenever properties on the object have changed. Finally, the OnPropertyChanged method will be called whenever a change occurs on a property within the ViewModel from a child class. The INotifyPropertyChanged interface is used to notify clients, typically binding clients, when the value of a property has changed. Implementing the WalksPageViewModel In the previous section, we built our base class ViewModel for our TrackMyWalks application, and this will act as the main class that will allow our Views or pages to be notified whenever properties within the ViewModel have changed. In this section, we will begin building the ViewModel for our WalksPage.
This model will be used to store the WalkEntries, which will later be used and displayed within the ListView on the WalksPage content page. Let's take a look at how we can achieve this, by following these steps: First, create a new class file within the ViewModels folder called WalksPageViewModel, as you did in the previous section, entitled Creating the WalkBaseViewModel, located within this article. Next, ensure that the WalksPageViewModel.cs file is displayed within the code editor, and enter in the following code snippet: // // WalksPageViewModel.cs // TrackMyWalks ViewModels // // Created by Steven F. Daniel on 22/08/2016. // Copyright © 2016 GENIESOFT STUDIOS. All rights reserved. // using System.Collections.ObjectModel; using TrackMyWalks.Models; namespace TrackMyWalks.ViewModels { public class WalksPageViewModel : WalkBaseViewModel { ObservableCollection<WalkEntries> _walkEntries; public ObservableCollection<WalkEntries> walkEntries { get { return _walkEntries; } set { _walkEntries = value; OnPropertyChanged(); } } In the above code snippet, we begin by ensuring that our ViewModel inherits from the WalkBaseViewModel class. Next, we create an ObservableCollection variable _walkEntries, which is very useful when you want to know when the collection has changed; an event is triggered that will tell the user what entries have been added or removed from the WalkEntries model. In our next step, we expose this collection through the public walkEntries property of type ObservableCollection<WalkEntries> (a collection class defined within the System.Collections.ObjectModel namespace). The walkEntries property will be used to bind to the ItemsSource property of the ListView within the WalksMainPage. Finally, we define the getter (get) and setter (set) methods that return and set the contents of _walkEntries, raising OnPropertyChanged whenever the value is modified. Next, locate the WalksPageViewModel class constructor, and enter the following highlighted code sections:         public WalksPageViewModel() { walkEntries = new ObservableCollection<WalkEntries>() { new WalkEntries { Title = "10 Mile Brook Trail, Margaret River", Notes = "The 10 Mile Brook Trail starts in the Rotary Park near Old Kate, a preserved steam " + "engine at the northern edge of Margaret River. ", Latitude = -33.9727604, Longitude = 115.0861599, Kilometers = 7.5, Distance = 0, Difficulty = "Medium", ImageUrl = "http://trailswa.com.au/media/cache/media/images/trails/_mid/" + "FullSizeRender1_600_480_c1.jpg" }, new WalkEntries { Title = "Ancient Empire Walk, Valley of the Giants", Notes = "The Ancient Empire is a 450 metre walk trail that takes you around and through some of " + "the giant tingle trees including the most popular of the gnarled veterans, known as " + "Grandma Tingle.", Latitude = -34.9749188, Longitude = 117.3560796, Kilometers = 450, Distance = 0, Difficulty = "Hard", ImageUrl = "http://trailswa.com.au/media/cache/media/images/trails/_mid/" + "Ancient_Empire_534_480_c1.jpg" }, }; } } } In the preceding code snippet, we began by creating a new ObservableCollection for our walkEntries property and then added each of the walk list items that we would like to store within our model. As the collection is created and assigned, the setter (set) method is invoked, and the INotifyPropertyChanged event will be triggered to notify that a change has occurred.
Summary In this article, you learned about the MVVM pattern architecture; we also implemented the MVVM ViewModels within the app. Additionally, we created and implemented the WalkBaseViewModel for the TrackMyWalks application. Resources for Article: Further resources on this subject: A cross-platform solution with Xamarin.Forms and MVVM architecture [article] Building a Gallery Application [article] Heads up to MvvmCross [article]
Enabling your new theme in Magento

Packt
18 Dec 2013
3 min read
(For more resources related to this topic, see here.) After your new theme is in place, you can enable it in Magento. Log in to your Magento store's administration panel. Once you have logged in, navigate to System | Configuration, as shown in the following screenshot: From there, select the global configuration scope (labeled Default Config in the following screenshot) you want to apply your new theme to, from the Current Configuration Scope dropdown in the top left of your screen: Once this has loaded, navigate to the Design tab under GENERAL in the left-hand column and expand the Themes block in the right-hand column, as shown in the following screenshot: From here, you can tell Magento to use your new theme. The values given here correspond to the name you gave to the directories when creating your theme. The example uses responsive as the value here, as shown in the following screenshot: Click on the Save Config button at the top right of your screen to save the changes. Next, check that your new theme has been activated. Remember the styles.css file you added in the skin/frontend/default/responsive/css directory? The presence of that file is telling Magento to load your new theme's CSS file instead of the default styles.css file for Magento from the default package, so your store now has none of the original CSS styling it. As such, you should see the following screenshot when you attempt to view the frontend of your Magento store: Overwriting the default Magento templates Noticed the name of your Magento theme appearing next to the logo in the header of your store? You can overwrite the default header.phtml that's causing it by copying the contents of app/design/frontend/base/default/template/page/html/header.phtml into app/design/frontend/default/responsive/template/page/html/header.phtml. Open the file and find the following lines: <?php if ($this->getIsHomePage()):?> <h1 class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a></h1> <?php else:?> <a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a> <?php endif?> Replace them with these lines: <a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a> Now if you save that file (and upload it to your server, if needed), you can see that the logo now looks tidier, as shown in the following screenshot: That's it! Your basic responsive Magento theme is up and running. Summary Hopefully after reading this article you will have a better understanding of how to enable your new theme in Magento. Resources for Article: Further resources on this subject: Magento : Payment and shipping method [Article] Categories and Attributes in Magento: Part 2 [Article] Magento: Exploring Themes [Article]
CSS Grids for RWD

Packt
24 Aug 2015
12 min read
In this article by the author, Ricardo Zea, of the book, Mastering Responsive Web Design, we're going to learn how to create a custom CSS grid. Responsive Web Design (RWD) has introduced a new layer of work for everyone building responsive websites and apps. When we have to test our work on different devices and in different dimensions, wherever the content breaks, we need to add a breakpoint and test again. (For more resources related to this topic, see here.) This can happen many, many times. So, building a website or app will take a bit longer than it used to. To make things a little more interesting, as web designers and developers, we need to be mindful of how the content is laid out at different dimensions and how a grid can help us structure the content to different layouts. Now that we have mentioned grids, have you ever asked yourself, "what do we use a grid for anyway?" To borrow a few terms from the design industry and answer that question, we use a grid to allow the content to have rhythm, proportion, and balance. The objective is that those who use our websites/apps will have a more pleasant experience with our content, since it will be easier to scan (rhythm), easier to read (proportion) and organized (balance). In order to speed up the design and build processes while keeping all the content properly formatted in different dimensions, many authors and companies have created CSS frameworks and CSS grids that contain not only a grid but also many other features and styles that can be leveraged by using a simple class name. As time goes by and browsers start supporting more and more CSS3 properties, such as Flexbox, it'll become easier to work with layouts. This will render the grids inside CSS frameworks almost unnecessary. Let's see what CSS grids are all about and how they can help us with RWD. In this article, we're going to learn how to create a custom CSS grid. Creating a custom CSS grid Since we're mastering RWD, we have the luxury of creating our own CSS grid. However, we need to work smart, not hard. Let's lay out our CSS grid requirements: It should have 12 columns. It should be 1200px wide to account for 1280px screens. It should be fluid, with relative units (percentages) for the columns and gutters. It should use the mobile-first approach. It should use the SCSS syntax. It should be reusable for other projects. It should be simple to use and understand. It should be easily scalable. Here's what our 1200px wide, 12-column grid with 20px gutters looks like: The left and right padding in black are 10px each. We'll convert those 10px into percentages at the end of this process. Doing the math We're going to use the RWD magic formula: (target ÷ context) x 100 = result %. Our context is going to be 1200px. So let's convert one column: 80 ÷ 1200 x 100 = 6.67%. For two columns, we have to account for the gutter that is 20px. In other words, we can't say that two columns are exactly 160px. That's not entirely correct. Two columns are: 80px + 20px + 80px = 180px. Let's now convert two columns: 180 ÷ 1200 x 100 = 15%. For three columns, we now have to account for two gutters: 80px + 20px + 80px + 20px + 80px = 280px. Let's now convert three columns: 280 ÷ 1200 x 100 = 23.33%. Can you see the pattern now? Every time we add a column, all that we need to do is add 100 to the value. This value accounts for the gutters too! Check the screenshot of the grid we saw moments ago; you can see the values of the columns increment by 100.
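Since the pattern is mechanical, it can even be scripted; a small sketch that automates the formula (rounded to two decimals, so 15% prints as "15.00%"):

// Generate column widths for an n-column grid: 80px for the first column,
// plus 100px (column + gutter) for every additional one
function gridWidths(columns, context) {
    var widths = [];
    for (var i = 1; i <= columns; i++) {
        var target = 80 + (i - 1) * 100;
        widths.push((target / context * 100).toFixed(2) + '%');
    }
    return widths;
}
console.log(gridWidths(12, 1200)); // ["6.67%", "15.00%", ..., "98.33%"]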
So, all the equations are as follows: 1 column: 80 ÷ 1200 x 100 = 6.67% 2 columns: 180 ÷ 1200 x 100 = 15% 3 columns: 280 ÷ 1200 x 100 = 23.33% 4 columns: 380 ÷ 1200 x 100 = 31.67% 5 columns: 480 ÷ 1200 x 100 = 40% 6 columns: 580 ÷ 1200 x 100 = 48.33% 7 columns: 680 ÷ 1200 x 100 = 56.67% 8 columns: 780 ÷ 1200 x 100 = 65% 9 columns: 880 ÷ 1200 x 100 = 73.33% 10 columns: 980 ÷ 1200 x 100 = 81.67% 11 columns: 1080 ÷ 1200 x 100 = 90% 12 columns: 1180 ÷ 1200 x 100 = 98.33% Let's create the SCSS for the 12-column grid: //Grid 12 Columns .grid { &-1 { width:6.67%; } &-2 { width:15%; } &-3 { width:23.33%; } &-4 { width:31.67%; } &-5 { width:40%; } &-6 { width:48.33%; } &-7 { width:56.67%; } &-8 { width:65%; } &-9 { width:73.33%; } &-10 { width:81.67%; } &-11 { width:90%; } &-12 { width:98.33%; } } Using hyphens (-) to separate words allows for easier selection of the terms when editing the code. Adding the UTF-8 character set directive and a Credits section Don't forget to include the UTF-8 encoding directive at the top of the file to let browsers know the character set we're using. Let's spruce up our code by adding a Credits section at the top. The code is as follows: @charset "UTF-8"; /* Custom Fluid & Responsive Grid System Structure: Mobile-first (min-width) Syntax: SCSS Grid: Float-based Created by: Your Name Date: MM/DD/YY */ //Grid 12 Columns .grid { &-1 { width:6.67%; } &-2 { width:15%; } &-3 { width:23.33%; } &-4 { width:31.67%; } &-5 { width:40%; } &-6 { width:48.33%; } &-7 { width:56.67%; } &-8 { width:65%; } &-9 { width:73.33%; } &-10 { width:81.67%; } &-11 { width:90%; } &-12 { width:98.33%; } } Notice the Credits are commented with CSS style comments: /* */. These types of comments, depending on the way we compile our SCSS files, don't get stripped out. This way, the Credits are always visible so that others know who authored the file. This may or may not work for teams. Also, the impact on file size of having the Credits display is imperceptible, if any. Including the box-sizing property and the mobile-first mixin Including the box-sizing property allows the browser's box model to account for the padding inside the containers; this means the padding gets subtracted rather than added, thus maintaining the defined width(s). Since the structure of our custom CSS grid is going to be mobile-first, we need to include the mixin that will handle this aspect: @charset "UTF-8"; /* Custom Fluid & Responsive Grid System Structure: Mobile-first (min-width) Syntax: SCSS Grid: Float-based Created by: Your Name Date: MM/DD/YY */ *, *:before, *:after { box-sizing: border-box; } //Mobile-first Media Queries Mixin @mixin forLargeScreens($width) { @media (min-width: $width/16+em) { @content } } //Grid 12 Columns .grid { &-1 { width:6.67%; } &-2 { width:15%; } &-3 { width:23.33%; } &-4 { width:31.67%; } &-5 { width:40%; } &-6 { width:48.33%; } &-7 { width:56.67%; } &-8 { width:65%; } &-9 { width:73.33%; } &-10 { width:81.67%; } &-11 { width:90%; } &-12 { width:98.33%; } } The main container and converting 10px to percentage value Since we're using the mobile-first approach, our main container is going to be 100% wide by default; but we're also going to give it a maximum width of 1200px since the requirement is to create a grid of that size. We're also going to convert 10px into a percentage value, so using the RWD magic formula: 10 ÷ 1200 x 100 = 0.83%. However, as we've seen before, 10px, or in this case 0.83%, is not enough padding and makes the content appear too close to the edge of the main container.
So we're going to increase the padding to 20px: 20 ÷ 1200 x 100 = 1.67%. We're also going to horizontally center the main container with margin:auto;. There's no need to declare zero values for the top and bottom margins to center horizontally. In other words, margin: 0 auto; isn't necessary. Just declaring margin: auto; is enough. Let's include these values now: @charset "UTF-8"; /* Custom Fluid & Responsive Grid System Structure: Mobile-first (min-width) Syntax: SCSS Grid: Float-based Created by: Your Name Date: MM/DD/YY */ *, *:before, *:after { box-sizing: border-box; } //Mobile-first Media Queries Mixin @mixin forLargeScreens($width) { @media (min-width: $width/16+em) { @content } } //Main Container .container-12 { width: 100%; //Change this value to ANYTHING you want, no need to edit anything else. max-width: 1200px; padding: 0 1.67%; margin: auto; } //Grid 12 Columns .grid { &-1 { width:6.67%; } &-2 { width:15%; } &-3 { width:23.33%; } &-4 { width:31.67%; } &-5 { width:40%; } &-6 { width:48.33%; } &-7 { width:56.67%; } &-8 { width:65%; } &-9 { width:73.33%; } &-10 { width:81.67%; } &-11 { width:90%; } &-12 { width:98.33%; } } In the padding property, it's the same if we type 0.83% or .83%. We can omit the zero. It's always a good practice to keep our code as streamlined as possible. This is the same principle as when we use hexadecimal shorthand values: #336699 is the same as #369. Making it mobile-first On small screens, all the columns are going to be 100% wide. Since we're working with a single column layout, we don't use gutters; this means we don't have to declare margins, at least yet. At 640px, the grid will kick in and assign corresponding percentages to each column, so we're going to include the columns in a 40em (640px) media query and float them to the left. At this point, we need gutters, so we declare .83% left and right margins. I chose 40em (640px) arbitrarily and only as a starting point. Remember to create content-based breakpoints rather than device-based ones. The code is as follows: @charset "UTF-8"; /* Custom Fluid & Responsive Grid System Structure: Mobile-first (min-width) Syntax: SCSS Grid: Float-based Created by: Your Name Date: MM/DD/YY */ *, *:before, *:after { box-sizing: border-box; } //Mobile-first Media Queries Mixin @mixin forLargeScreens($width) { @media (min-width: $width/16+em) { @content } } //Main Container .container-12 { width: 100%; //Change this value to ANYTHING you want, no need to edit anything else. max-width: 1200px; padding: 0 1.67%; margin: auto; } //Grid .grid { //Global Properties - Mobile-first &-1, &-2, &-3, &-4, &-5, &-6, &-7, &-8, &-9, &-10, &-11, &-12 { width: 100%; } @include forLargeScreens(640) { //Totally arbitrary width, it's only a starting point. //Global Properties - Large screens &-1, &-2, &-3, &-4, &-5, &-6, &-7, &-8, &-9, &-10, &-11, &-12 { float: left; margin: 0 .83%; } //Grid 12 Columns &-1 { width:6.67%; } &-2 { width:15%; } &-3 { width:23.33%; } &-4 { width:31.67%; } &-5 { width:40%; } &-6 { width:48.33%; } &-7 { width:56.67%; } &-8 { width:65%; } &-9 { width:73.33%; } &-10 { width:81.67%; } &-11 { width:90%; } &-12 { width:98.33%; } } } Adding the row and float clearing rules If we use rows in our HTML structure or add the class .clear to a tag, we can declare all the float clearing values in a single nested rule with the :before and :after pseudo-elements. It's the same thing to use single or double colons when declaring pseudo-elements.
The double colon is a CSS3 syntax and the single colon is a CSS2.1 syntax. The idea was to be able to differentiate them at a glance so a developer could tell which CSS version they were written in. However, IE8 and below do not support the double-colon syntax. The float clearing technique is an adaptation of David Walsh's CSS snippet (http://davidwalsh.name/css-clear-fix). We're also adding a rule for the rows with a bottom margin of 10px to separate them from each other, while removing that margin from the last row to avoid creating unwanted extra spacing at the bottom. Finally, we add the clearing rule for legacy IEs. Let's include these rules now: @charset "UTF-8"; /* Custom Fluid & Responsive Grid System Structure: Mobile-first (min-width) Syntax: SCSS Grid: Float-based Created by: Your Name Date: MM/DD/YY */ *, *:before, *:after { box-sizing: border-box; } //Mobile-first Media Queries Mixin @mixin forLargeScreens($width) { @media (min-width: $width/16+em) { @content } } //Main Container .container-12 { width: 100%; //Change this value to ANYTHING you want, no need to edit anything else. max-width: 1200px; padding: 0 1.67%; margin: auto; } //Grid .grid { //Global Properties - Mobile-first &-1, &-2, &-3, &-4, &-5, &-6, &-7, &-8, &-9, &-10, &-11, &-12 { width: 100%; } @include forLargeScreens(640) { //Totally arbitrary width, it's only a starting point. //Global Properties - Large screens &-1, &-2, &-3, &-4, &-5, &-6, &-7, &-8, &-9, &-10, &-11, &-12 { float: left; margin: 0 .83%; } //Grid 12 Columns &-1 { width:6.67%; } &-2 { width:15%; } &-3 { width:23.33%; } &-4 { width:31.67%; } &-5 { width:40%; } &-6 { width:48.33%; } &-7 { width:56.67%; } &-8 { width:65%; } &-9 { width:73.33%; } &-10 { width:81.67%; } &-11 { width:90%; } &-12 { width:98.33%; } } } //Clear Floated Elements - http://davidwalsh.name/css-clear-fix .clear, .row { &:before, &:after { content: ''; display: table; } &:after { clear: both; } } //Use rows to nest containers .row { margin-bottom: 10px; &:last-of-type { margin-bottom: 0; } } //Legacy IE .clear { zoom: 1; } Let's recap our CSS grid requirements: 12 columns: Starting from .grid-1 to .grid-12. 1200px wide to account for 1280px screens: The .container-12 container has max-width: 1200px; Fluid and relative units (percentages) for the columns and gutters: The percentages go from 6.67% to 98.33%. Mobile-first: We added the mobile-first mixin (using min-width) and nested the grid inside of it. The SCSS syntax: The whole file is Sass-based. Reusable: As long as we're using 12 columns and we're using the mobile-first approach, we can use this CSS grid multiple times. Simple to use and understand: The class names are very straightforward. The .grid-6 grid is used for an element that spans 6 columns, .grid-7 is used for an element that spans 7 columns, and so on. Easily scalable: If we want to use 980px instead of 1200px, all we need to do is change the value in the .container-12 max-width property. Since all the elements are using relative units (percentages), everything will adapt proportionally to the new width—to any width for that matter. Pretty sweet if you ask me. Summary A lot to digest here, eh? Creating our custom CSS grid with the traditional floats technique came down to identifying the pattern: each additional column simply increases the target value by 100. Now, we can create a 12-column grid at any width we want.
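Before moving on, here's a minimal sketch of the grid in use (markup only; the class names come from the listing above, and the content is placeholder):

<div class="container-12">
    <div class="row">
        <header class="grid-12">Header</header>
    </div>
    <div class="row">
        <main class="grid-8">Main content</main>
        <aside class="grid-4">Sidebar</aside>
    </div>
</div>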
Resources for Article: Further resources on this subject: Role of AngularJS[article] Managing Images[article] Angular Zen [article]
Zurb Foundation – an Overview

Packt
21 Nov 2013
7 min read
(For more resources related to this topic, see here.) Most importantly, you can apply your creativity to make the design your own. Foundation gives you the tools you need for this. Then it gets out of the way and your site becomes your own. Especially when you advance to using Foundation's SASS variables, functions and mixins, you have the ability to make your site your own unique creation. Foundation's grid system The foundation (pun intended) of Zurb Foundation is its grid system—rows and columns—much like a spreadsheet, a blank sheet of graph paper, or tables, similar to what we used to use for HTML layout. Think of it as the canvas upon which you design your website. Each cell is a content area that can be merged with other cells, beside or below it, to make larger content areas. A default installation of Foundation will be based on twelve cells in a row. A column is comprised of one or more individual cells. Lay out a website Let's put Foundation's grid system to work in an example. We'll build a basic website with a two part header, a two part content area, a sidebar, and a three part footer area. With the simple techniques we demonstrate here, you can craft mostly any layout you want. Here is the mobile view Foundation works best when you design for small devices first, so here is what we want our small device (mobile) view to look like: This is the layout we want on mobile or small devices. But we've labeled the content areas with a title that describes where we want them on a regular desktop. By doing this, we are thinking ahead and creating a view ready for the desktop as well. Here is the desktop view Since a desktop display is typically wider than a mobile display, we have more horizontal space and things that had to be presented vertically on the mobile view can be displayed horizontally on the desktop view. Here is how we want our regular desktop or laptop to display the same content areas: These are not necessarily drawn to scale. It is the layout we are interested in. The two part header went from being one above the other in the mobile view to being side-by-side in the desktop view. The header on the top went left and the bottom header went right. All this makes perfect sense. However, the sidebar shifted from being above the content area in the mobile view to the right of it in the desktop view. That's not natural when rendering HTML. Something must have happened! The content areas, left and right, stayed the same in both views. And that's exactly what we wanted. The three part footer got rearranged. The center footer appears to have slid down between the left and right footers. That makes sense from a design perspective but it isn't natural from an HTML rendering perspective. Foundation provides the classes to easily make all this magic happen. Here is the code Unlike the early days of mobile design where a separate website was built for mobile devices, with Foundation you build your site once, and use classes to specify how it should look on both mobile and regular displays.
Here is the HTML code that generates the two layouts: <header class="row"> <div class="large-6 column">Header Left</div> <div class="large-6 column">Header Right</div> </header> <main class="row"> <aside class="large-3 push-9 column">Sidebar Right</aside> <section class="large-9 pull-3 columns"> <article class="row"> <div class="small-9 column">Content Left</div> <div class="small-3 column">Content Right</div> </article> </section> </main> <footer class="row"> <div class="small-6 small-centered large-4 large-uncentered push-4 column">Footer Center</div> <div class="small-6 large-4 pull-4 column">Footer Left</div> <div class="small-6 large-4 column">Footer Right</div> </footer> That's all there is to it. Replace the text we used for labels with real content and you have a design that displays on mobile and regular displays in the layouts we've shown in this article. Toss in some widgets What we've shown above is just the core of the Foundation framework. As a toolkit, it also includes numerous CSS components and JavaScript plugins. Foundation includes styles for labels, lists, and data tables. It has several navigation components including Breadcrumbs, Pagination, Side Nav, and Sub Nav. You can add regular buttons, drop-down buttons, and button groups. You can make unique content areas with Block Grids, a special variation of the underlying grid. You can add images as thumbnails, put content into panels, present your video feed using the Flex Video component, easily add pricing tables, and represent progress bars. All these components only require CSS and are the easiest to integrate. By tossing in Foundation's JavaScript plugins, you have even more capabilities. Plugins include things like Alerts, Tooltips, and Dropdowns. These can be used to pop up messages in various ways. The Section plugin is very powerful when you want to organize your content into horizontal or vertical tabs, or when you want horizontal or vertical navigation. Like most components and plugins, it understands the mobile and regular desktop views and adapts accordingly. The Top Bar plugin is a favorite for many developers. It is a multi-level fly out menu plugin. Build your menu in HTML the way Top Bar expects. Set it up with the appropriate classes and it just works. Magellan and Joyride are two plugins that you can put to work to help show your viewers where they are on a page or to help them navigate to various sections on a page. Orbit is Foundation's slide presentation plugin. You often see sliders on the home page of websites these days. Clearing is similar to Orbit except that it displays thumbnails of the images in a presentation below the main display window. A viewer clicks on a thumbnail to display the full image. Reveal is a plugin that allows you to put a link anywhere on your page; when the viewer clicks on it, a box pops up and extra content, which could even be an Orbit slider, is revealed. Interchange is one of the most recent additions to Foundation's plugin factory. With it you can selectively load images depending on the target environment. This lets you optimize bandwidth between your web server and your viewer's browser. Foundation also provides a great Forms plugin. On its own it is capable. With the additional Abide plugin you have a great deal of control over form layout and editing.
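All of these plugins rely on Foundation's JavaScript being loaded and initialized once on the page; a minimal sketch (the script paths are assumptions, adjust them to your install):

<script src="js/vendor/jquery.js"></script>
<script src="js/foundation.min.js"></script>
<script>
  // Initialize every Foundation plugin present on the page
  $(document).foundation();
</script>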
Summary

As you can see, Foundation is very capable of laying out web pages for mobile devices and regular displays: one set of code, two very different looks. And that's just the beginning. Foundation's CSS components and JavaScript plugins can be placed in almost any content area of a web page. With these widgets, you can have much more interaction with your viewers than you otherwise would. Put Foundation to work in your website today!

Resources for Article:

Further resources on this subject:
Quick start – using Foundation 4 components for your first website [Article]
Introduction to RWD frameworks [Article]
Nesting, Extend, Placeholders, and Mixins [Article]
Getting Started with CMIS

Packt
19 Mar 2014
9 min read
(For more resources related to this topic, see here.)

What is CMIS?

The goal of CMIS is to provide a standard method for accessing content from different content repositories. Using CMIS service calls, it is possible to navigate through and create content in a repository. CMIS also includes a query language for searching both the metadata and the full-text content stored in a repository. The CMIS standard defines the protocols and formats for the requests and responses of API service calls made to a repository. CMIS acts as a standard interface and protocol for accessing content repositories, similar to how ANSI SQL acts as a common-denominator language for interacting with different databases.

The use of the CMIS API for accessing repositories brings a number of benefits. Perhaps chief among these is that access to CMIS is language neutral: any language that supports HTTP services can be used to access a CMIS-enabled repository. Client software can be written against a single API and deployed to run against multiple CMIS-compliant repositories.

Alfresco and CMIS

The original draft for CMIS 0.5 was written by EMC, IBM, and Microsoft. Shortly after that draft, Alfresco and other vendors joined the CMIS standards group. Alfresco was an early CMIS adopter and offered an implementation of CMIS version 0.5 in 2008. In 2009, Alfresco began hosting an online preview of the CMIS standard. The server, accessible via the http://cmis.alfresco.com URL, still exists and implements the latest CMIS standard. As of this writing, that URL hosts a preview of CMIS 1.1 features. In mid-2010, just after the CMIS 1.0 standard was approved, Alfresco released CMIS support in both the Alfresco Community and Enterprise editions. In 2012, with Alfresco version 4.0, Alfresco moved from a home-grown CMIS runtime implementation to one that uses the Apache Chemistry OpenCMIS Server Framework. From that release, developers have been able to customize Alfresco using the OpenCMIS Java API.

Overview of the CMIS standard

Next, we discuss the details of the CMIS specification, particularly the domain model, the different services that it provides, and the supported protocol bindings.

Domain model (Object model)

Every content repository vendor has their own definition of a content or object model. Alfresco, for example, has rich content modeling capabilities, such as types and aspects that can inherit from other types and aspects, and properties that can be assigned attributes such as data type, multi-valued, and required. But there are wide differences in the ways different vendors have implemented content modeling. In the Documentum ECM system, for example, the generic content type is called dm_document, while in Alfresco it is called cm:content. Another example is the concept of an aspect as used in Alfresco; many repositories do not support that idea.

The CMIS domain model is an attempt by the CMIS standardization group to define a framework generic enough to describe content models and map to concepts used by many different repository vendors. The CMIS domain model defines a Repository as a container and an entry point to all content items, from now on called objects. All objects are classified by an Object Type, which describes a common set of properties (such as Type ID, Parent, and Display Name). There are five base types of objects: Document, Folder, Relationship, Policy, and Item (available from CMIS 1.1), and these all inherit from Object Type.
In addition to the five base object types, there are a number of property types that can be used when defining new properties for an object type: String, Boolean, Decimal, Integer, and DateTime, plus the URI, Id, and HTML property types.

Taking a closer look at each of the base types, we can see that:

- Document almost always corresponds to a file, although it need not have any content (when you upload a file via, for example, the AtomPub binding, the metadata is created with the first request and the content for the file is posted with the second request).
- Folder is a container for file-able objects such as folders and documents. Immediately after filing a folder or document into a folder, an implicit parent-child relationship is automatically created. The fileable property of the object type definition specifies whether an object is file-able or not.
- Relationship objects define a relationship between a target and a source object. Objects can have multiple relationships with other objects. Support for relationship objects is optional.
- Policy is a way of defining administrative policies to manage objects. An object to which a policy may be applied is called a controllable object (its controllablePolicy property has to be set to true). For example, a CMIS policy could be used to define a retention policy. A policy is opaque and has no meaning to the repository; it must be implemented and enforced in a repository-specific way. For example, rules might be used in Alfresco to enforce a policy. Support for policy objects is optional.
- Item (CMIS 1.1) objects represent a generic type of CMIS information asset, for example, a user or group object. Item objects are not versionable and do not have content streams like documents, but they do have properties like all other CMIS objects. Support for item objects is optional.

Additional object types, such as a custom Legal Case type, can be defined in a repository as subtypes of the base types. CMIS services are provided for the discovery of object types that are defined in a repository. Object type management services, such as the creation, modification, and deletion of an object type, were not covered by CMIS 1.0, although CMIS 1.1 adds optional services for this (see the repository services below).

An object has one primary base object type, such as Document or Folder, which cannot be changed. An object can also have secondary object types applied to it (CMIS 1.1). A secondary type is a named class that may add extra properties to an object in addition to the properties defined by the object's primary base object type (this is similar to the concept of aspects in Alfresco).

Every CMIS object has an opaque and immutable Object Identity (ID), which is assigned by the repository when the object is created. In the case of Alfresco, a Node Reference is created, which becomes the Object ID. The ID uniquely identifies an object within a repository, regardless of the type of the object. All CMIS objects have a set of named, but not explicitly ordered, properties. Within an object, each property is uniquely identified by its Property ID. In addition, a document object can have a Content Stream, which is then used to hold the actual byte content of a file. A document can also have one or more Renditions, like a thumbnail, a different-sized image, or an alternate representation of the content stream.

Document or folder objects can have one Access Control List (ACL), which controls access to the document or folder. An ACL is made up of a list of Access Control Entries (ACEs). An ACE, in turn, represents one or more permissions being granted to a principal, such as a user, group, role, or something similar.

All objects and properties are defined in the cmis namespace. From now on, we will refer to the different objects and properties via their fully qualified names, for example, cmis:document or cmis:name.

Services

The following CMIS services can access and manage CMIS objects in the repository (a short client-side sketch follows the list):

- Repository services: These are used to discover information about the repository, including repository IDs (more than one repository could be managed by the endpoint). Since many features are optional, this provides a way to find out which are supported. CMIS 1.1-compliant repositories also support the creation of new types dynamically. Methods: getRepositories, getRepositoryInfo, getTypeChildren, getTypeDescendants, getTypeDefinition, createType (CMIS 1.1), updateType (CMIS 1.1), deleteType (CMIS 1.1).
- Navigation services: These are used to navigate the folder hierarchy. Methods: getChildren, getDescendants, getFolderTree, getFolderParent, getObjectParents, getCheckedOutDocs.
- Object services: These provide ID-based CRUD (Create, Read, Update, Delete) operations. Methods: createDocument, createDocumentFromSource, createFolder, createRelationship, createPolicy, createItem (CMIS 1.1), getAllowableActions, getObject, getProperties, getObjectByPath, getContentStream, getRenditions, updateProperties, bulkUpdateProperties (CMIS 1.1), moveObject, deleteObject, deleteTree, setContentStream, appendContentStream (CMIS 1.1), deleteContentStream.
- Multi-filing services: These optional services make it possible to file an object in several folders (multi-filing) or outside the folder hierarchy (un-filing). This service is not used to create or delete objects. Methods: addObjectToFolder, removeObjectFromFolder.
- Discovery services: These are used to search for query-able objects within the repository (objects with the queryable property set to true). Methods: query, getContentChanges.
- Versioning services: These are used to manage the versioning of document objects; other objects are not versionable. Whether or not a document can be versioned is controlled by the versionable property in its object type. Methods: checkOut, cancelCheckOut, checkIn, getObjectOfLatestVersion, getPropertiesOfLatestVersion, getAllVersions.
- Relationship services: These optional services are used to retrieve the relationships in which an object is participating. Methods: getObjectRelationships.
- Policy services: These optional services are used to apply a policy object to, or remove one from, an object that has the controllablePolicy property set to true. Methods: applyPolicy, removePolicy, getAppliedPolicies.
- ACL services: These are used to discover and manage the Access Control List (ACL) of an object, if the object has one. Methods: applyACL, getACL.
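To make the model concrete, here is a minimal sketch using the Apache Chemistry OpenCMIS client API (the same framework Alfresco's CMIS implementation is built on). The service URL, credentials, and file name are assumptions for a default local Alfresco installation and will differ in your environment; older Alfresco versions expose the AtomPub binding at a different URL.

import java.io.ByteArrayInputStream;
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisModelExample {
    public static void main(String[] args) {
        // Connection details: assumptions for a default local Alfresco install
        Map<String, String> parameters = new HashMap<String, String>();
        parameters.put(SessionParameter.USER, "admin");
        parameters.put(SessionParameter.PASSWORD, "admin");
        parameters.put(SessionParameter.ATOMPUB_URL,
                "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom");
        parameters.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

        // Open a session against the first repository the endpoint exposes
        SessionFactory factory = SessionFactoryImpl.newInstance();
        Session session = factory.getRepositories(parameters).get(0).createSession();

        // Create a cmis:document in the root folder; the content stream holds the bytes
        Folder root = session.getRootFolder();
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
        properties.put(PropertyIds.NAME, "hello.txt");
        byte[] bytes = "Hello, CMIS!".getBytes();
        ContentStream contentStream = session.getObjectFactory().createContentStream(
                "hello.txt", bytes.length, "text/plain", new ByteArrayInputStream(bytes));
        Document doc = root.createDocument(properties, contentStream, null);

        // Every object gets a repository-assigned, immutable ID (a Node Reference in Alfresco)
        System.out.println("Created object with ID: " + doc.getId());
    }
}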
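To illustrate the navigation and discovery services from client code, here is a short, hedged continuation of the earlier OpenCMIS sketch. It reuses the session variable obtained there and assumes the repository supports the optional query capability (Alfresco does):

// Additional imports needed:
//   org.apache.chemistry.opencmis.client.api.CmisObject
//   org.apache.chemistry.opencmis.client.api.ItemIterable
//   org.apache.chemistry.opencmis.client.api.QueryResult

// Navigation services: list the children of the root folder
for (CmisObject child : session.getRootFolder().getChildren()) {
    System.out.println(child.getName() + " [" + child.getType().getId() + "]");
}

// Discovery services: run a CMIS QL query (second argument: search all versions?)
ItemIterable<QueryResult> results = session.query(
        "SELECT cmis:name FROM cmis:document WHERE cmis:name LIKE 'hello%'", false);
for (QueryResult hit : results) {
    System.out.println(hit.getPropertyValueByQueryName("cmis:name"));
}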
Summary

In this article, we introduced the CMIS standard and how it came about. Then we covered the CMIS domain model with its five base object types: document, folder, relationship, policy, and item (CMIS 1.1). We also learned that the CMIS standard defines a number of services, such as navigation and discovery, which make it possible to manipulate objects in a content management system repository.

Resources for Article:

Further resources on this subject:
Content Delivery in Alfresco 3 [Article]
Getting Started with the Alfresco Records Management Module [Article]
Managing Content in Alfresco [Article]
Alfresco Web Scripts

Packt
06 Nov 2014
15 min read
In this article by Ramesh Chauhan, the author of Learning Alfresco Web Scripts, we will cover the following topics:

- Reasons to use web scripts
- Executing a web script from a standalone Java program
- Invoking a web script from Alfresco Share
- DeclarativeWebScript versus AbstractWebScript

(For more resources related to this topic, see here.)

Reasons to use web scripts

It's now time to discover the answer to the next question: why web scripts? There are various alternative approaches available to interact with the Alfresco repository, such as CMIS, SOAP-based web services, and web scripts. Generally, web scripts are the preferred option among developers and architects when it comes to interacting with the Alfresco repository from an external application. Let's take a look at the various reasons for choosing web scripts instead of CMIS and SOAP-based web services.

In comparison with CMIS, web scripts can be assessed as follows. In general, CMIS is a generic implementation that provides a common set of services to interact with any content repository. It does not attempt to incorporate services that expose all the features of each and every content repository; it covers a basic, common set of functionality and provides services to access that functionality. Alfresco provides an implementation of CMIS for interacting with the Alfresco repository. Because CMIS exposes only this common set of repository functionality, it may sometimes not do everything you are aiming to do when working with the Alfresco repository, while with web scripts it is possible to develop custom APIs that access the Alfresco repository exactly as required. Another important thing to note is that, with the transaction support of web scripts, it is possible to perform a set of operations together in a single web script, whereas CMIS limits transaction usage: each operation can be executed individually, but a set of operations cannot be executed together in a single transaction the way they can in web scripts.
SOAP-based web services are not preferable for the following reasons:

- They take a long time to develop
- They depend on SOAP
- They have heavier client-side requirements
- They need a resource directory to be maintained
- Scalability is a challenge
- They only support XML

In comparison, web scripts have the following properties:

- There are no complex specifications
- There is no dependency on SOAP
- There is no need to maintain a resource directory
- They are more scalable, as there is no need to maintain session state
- They are a lightweight implementation
- They are simple and easy to develop
- They support multiple formats

From a developer's perspective:

- They can be developed using any text editor
- No compilation is required when using a scripting language
- No server restarts are needed when using a scripting language
- No complex installations are required

In essence:

- Web scripts are a REST-based and powerful option for interacting with the Alfresco repository compared to the traditional SOAP-based web services and CMIS alternatives
- They provide RESTful access to the content residing in the Alfresco repository and uniform access to a wide range of client applications
- They are easy to develop and provide some of the most useful features, such as no server restarts, no compilation, no complex installations, and no need for a specific development tool
- All these points make web scripts the most preferred choice among developers and architects when it comes to interacting with the Alfresco repository

Executing a web script from a standalone Java program

There are different options for invoking a web script from a Java program. Here, we will take a detailed walkthrough of the Apache Commons HttpClient API with code snippets to understand how a web script can be executed from a Java program, and we will briefly mention some other alternatives that can also be used to invoke web scripts from Java programs.

HttpClient

One way of executing a web script is to invoke it using the org.apache.commons.httpclient.HttpClient API. This class is available in commons-httpclient-3.1.jar. Executing a web script with the HttpClient API also requires commons-logging-*.jar and commons-codec-*.jar as supporting JARs. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. You will need to include them in the build path for your project. We will try to execute the hello world web script using HttpClient from a standalone Java program. When using HttpClient, these are the steps to follow in general:

1. Create a new instance of HttpClient.
2. Create an instance of the method (we will use GetMethod). The URL needs to be passed in the constructor of the method.
3. Set any arguments if required.
4. Provide the authentication details if required.
5. Ask HttpClient to execute the method.
6. Read the response status code and response.
7. Finally, release the connection.

Understanding how to invoke a web script using HttpClient

Let's take a look at the following code snippet, keeping in mind the steps mentioned previously. In order to test this, you can create a standalone Java program with a main method, put the following code snippet in it, and then modify the web script URL and credentials as required. Comments are provided in the code so that you can easily correlate the steps:

// Create a new instance of HttpClient
HttpClient objHttpClient = new HttpClient();
// Create a new method instance as required. Here it is GetMethod.
GetMethod objGetMethod = new GetMethod("http://localhost:8080/alfresco/service/helloworld");
// Set query string parameters if required.
objGetMethod.setQueryString(new NameValuePair[] {
    new NameValuePair("name", "Ramesh") });
// Set the credentials if authentication is required.
Credentials defaultcreds = new UsernamePasswordCredentials("admin", "admin");
objHttpClient.getState().setCredentials(
    new AuthScope("localhost", 8080, AuthScope.ANY_REALM), defaultcreds);
try {
    // Now, execute the method using HttpClient.
    int statusCode = objHttpClient.executeMethod(objGetMethod);
    if (statusCode != HttpStatus.SC_OK) {
        System.err.println("Method invocation failed: " + objGetMethod.getStatusLine());
    }
    // Read the response body.
    byte[] responseBody = objGetMethod.getResponseBody();
    // Print the response body.
    System.out.println(new String(responseBody));
} catch (HttpException e) {
    System.err.println("Http exception: " + e.getMessage());
    e.printStackTrace();
} catch (IOException e) {
    System.err.println("IO exception transport error: " + e.getMessage());
    e.printStackTrace();
} finally {
    // Release the method connection.
    objGetMethod.releaseConnection();
}

Note that the Apache Commons HttpClient is a legacy project now and is not being developed anymore. It has been replaced by the Apache HttpComponents project with its HttpClient and HttpCore modules. We have used HttpClient from the Commons project here to get an overall understanding. Some of the other options that you can use to invoke web scripts from a Java program are mentioned in the subsequent sections.

URLConnection

One option to execute a web script from a Java program is to use java.net.URLConnection. For more details, you can refer to http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html.

Apache HTTP components

Another option to execute a web script from a Java program is to use Apache HttpComponents, the latest available APIs for HTTP communication. These components offer better performance and more flexibility and are available in httpclient-*.jar and httpcore-*.jar. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. For more details, refer to https://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html to get an understanding of how to execute HTTP calls from a Java program.

RestTemplate

Another alternative is to use org.springframework.web.client.RestTemplate, available in org.springframework.web-*.jar located at tomcat/webapps/alfresco/WEB-INF/lib inside your Alfresco installation directory. If you are using Alfresco Community 5, the RestTemplate class is available in spring-web-*.jar. Generally, RestTemplate is used in Spring-based services to make HTTP calls.
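Before moving on to the Spring case, here is a minimal, hedged sketch of the java.net.URLConnection option mentioned above. It invokes the same hello world web script with HTTP Basic authentication; the URL and credentials are the same assumptions as in the HttpClient example, and java.util.Base64 requires Java 8 or later:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class UrlConnectionExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/alfresco/service/helloworld?name=Ramesh");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Send the credentials as an HTTP Basic Authorization header
        String basicAuth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        conn.setRequestProperty("Authorization", "Basic " + basicAuth);
        // Read and print the response body line by line
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
        conn.disconnect();
    }
}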
Calling web scripts from Spring-based services

If you need to invoke an Alfresco web script from Spring-based services, then you need to use RestTemplate to make the HTTP calls. This is the most commonly used technique to execute HTTP calls from Spring-based classes. The steps to be performed, along with code snippets, are as follows:

1. Define RestTemplate in your Spring context file:

<bean id="restTemplate" class="org.springframework.web.client.RestTemplate" />

2. In the Spring context file, inject restTemplate into your Spring class, as shown in the following example:

<bean id="httpCommService" class="com.test.HTTPCallService">
    <property name="restTemplate" ref="restTemplate" />
</bean>

3. In the Java class, define the setter method for restTemplate as follows:

private RestTemplate restTemplate;

public void setRestTemplate(RestTemplate restTemplate) {
    this.restTemplate = restTemplate;
}

4. In order to invoke a web script that has its authentication level set to user authentication, you can use RestTemplate in your Java class as shown in the following code snippet, which invokes the hello world web script:

// Set up authentication
String plainCredentials = "admin:admin";
byte[] plainCredBytes = plainCredentials.getBytes();
byte[] base64CredBytes = Base64.encodeBase64(plainCredBytes);
String base64Credentials = new String(base64CredBytes);
// Set up request headers
HttpHeaders reqHeaders = new HttpHeaders();
reqHeaders.add("Authorization", "Basic " + base64Credentials);
HttpEntity<String> requestEntity = new HttpEntity<String>(reqHeaders);
// Execute the method
ResponseEntity<String> responseEntity = restTemplate.exchange(
    "http://localhost:8080/alfresco/service/helloworld?name=Ramesh",
    HttpMethod.GET, requestEntity, String.class);
System.out.println("Response: " + responseEntity.getBody());

Invoking a web script from Alfresco Share

When working on customizing Alfresco Share, you will need to make calls to Alfresco repository web scripts. In Alfresco Share, you can invoke repository web scripts from two places: one is the component-level presentation web scripts, and the other is client-side JavaScript.

Calling a web script from a presentation web script JavaScript controller

Alfresco Share renders its user interface using presentation web scripts. These presentation web scripts make calls to repository web scripts to render the repository data. The repository web script is called before the component rendering file (for example, get.html.ftl) loads. In an out-of-the-box Alfresco installation, you can find the components' presentation web scripts under tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts. When developing a custom component, you will be required to write a presentation web script, and that presentation web script will make a call to the repository web script. You can make a call to the repository web script as follows:

var response = remote.call("url of web script as defined in description document");
var obj = eval('(' + response + ')');

In the preceding code snippet, we have used the out-of-the-box remote object to make a repository web script call. The important thing to notice is that we have to provide the URL of the web script as defined in the description document; there is no need to provide the initial part (host and port, application name, and service path) the way we do while calling a web script from a web browser. Once the response is received, the web script response can be parsed with the eval function. In the out-of-the-box code of Alfresco Share, you can find presentation web scripts invoking repository web scripts just as in the previous code snippet.
For example, take a look at the main() method in the site-members.get.js file, which is available at the tomcat/webapps/share/components/site-members location inside your Alfresco installation directory. You can take a look at the other JavaScript controller implementations for out-of-the-box presentation web scripts, available at tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts, making repository web script calls using the previously mentioned technique. Paths to out-of-the-box web scripts are given starting with tomcat/webapps; this location is available in your Alfresco installation directory.

Invoking a web script from client-side JavaScript

Client-side JavaScript control files can be associated with components in Alfresco Share. If you need to make a repository web script call, you can do this from the client-side JavaScript control files, generally located at tomcat/webapps/share/components. There are different ways to make a repository web script call from a YUI-based client-side JavaScript file. The following are some of them, each with a reference to the out-of-the-box Alfresco implementation so you can see its usage in practice:

- Alfresco.util.Ajax.request: Take a look at tomcat/webapps/share/components/console/groups.js and refer to the _removeUser function.
- Alfresco.util.Ajax.jsonRequest: Take a look at tomcat/webapps/share/components/documentlibrary/documentlist.js and refer to the onOptionSelect function.
- Alfresco.util.Ajax.jsonGet: To directly make a call to a GET web script, take a look at tomcat/webapps/share/components/console/groups.js and refer to the getParentGroups function.
- YAHOO.util.Connect.asyncRequest: Take a look at tomcat/webapps/share/components/documentlibrary/tree.js and refer to the _sortNodeChildren function.

In alfresco.js, located at tomcat/webapps/share/js, a wrapper implementation of YAHOO.util.Connect.asyncRequest is provided, and the various methods listed previously, such as Alfresco.util.Ajax.request, Alfresco.util.Ajax.jsonRequest, and Alfresco.util.Ajax.jsonGet, can be found in alfresco.js. Hence, the first three options in the previous list internally make their calls using YAHOO.util.Connect.asyncRequest (the last option in the list).
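To give a feel for the first of these options, here is a minimal, hedged sketch of a client-side call using Alfresco.util.Ajax.request; the helloworld web script URL and the handler bodies are placeholders you would replace with your own:

// A sketch of calling a repository web script from client-side JavaScript.
// Alfresco.constants.PROXY_URI routes the request through the Share proxy,
// so the current user's session is used for authentication.
Alfresco.util.Ajax.request({
    url: Alfresco.constants.PROXY_URI + "helloworld?name=Ramesh", // hypothetical web script
    method: Alfresco.util.Ajax.GET,
    successCallback: {
        fn: function(response) {
            // response.serverResponse.responseText holds the raw response body
            alert(response.serverResponse.responseText);
        },
        scope: this
    },
    failureMessage: "Could not call the helloworld web script"
});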
Calling a web script from the command line

Sometimes while working on your project, you might need to invoke a web script from a Linux machine, or create a shell script that invokes a web script. It is possible to invoke a web script from the command line using cURL, which is a valuable tool to use while working with web scripts. You can install cURL on Linux, Mac, or Windows and execute a web script from the command line. Refer to http://curl.haxx.se/ for more details on cURL. You will be required to install cURL first: on Linux, you can install cURL using apt-get; on Mac, you should be able to install cURL through MacPorts; and on Windows, you can install it using Cygwin. Once cURL is installed, you can invoke a web script from the command line as follows:

curl -u admin:admin "http://localhost:8080/alfresco/service/helloworld?name=Ramesh"

This will display the web script response.

DeclarativeWebScript versus AbstractWebScript

The web script framework in Alfresco provides two different helper classes from which a Java-backed controller can be derived, and it's important to understand the difference between them. The first helper class, which we used while developing the web script in this article, is org.springframework.extensions.webscripts.DeclarativeWebScript. The second one is org.springframework.extensions.webscripts.AbstractWebScript. DeclarativeWebScript in turn extends the AbstractWebScript class.

If the Java-backed controller is derived from DeclarativeWebScript, then execution assistance is provided by the DeclarativeWebScript class. This helper class encapsulates the execution of the web script and checks whether any controller written in JavaScript is associated with the web script. If a JavaScript controller is found, the helper class executes it. The class then locates the associated response template of the web script for the requested format and passes the populated model object to the response template. For a controller extending DeclarativeWebScript, the controller logic for a web script should be provided in the Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) method. Most of the time, while developing a Java-backed web script, the controller will extend DeclarativeWebScript.

AbstractWebScript does not provide execution assistance the way DeclarativeWebScript does. It gives full control over the entire execution process to the derived class and allows the extending class to decide how the output is to be rendered. One good example of this is the DeclarativeWebScript class itself: it extends AbstractWebScript and provides a mechanism to render the response using FTL templates. In a scenario like streaming content, there is no need for a response template; instead, the content itself needs to be rendered directly. In this case, the Java-backed controller class can extend AbstractWebScript.

If a web script has both a JavaScript-based controller and a Java-backed controller, then:

- If the Java-backed controller is derived from DeclarativeWebScript, the Java-backed controller is executed first, and then control is passed to the JavaScript controller prior to returning the model object to the response template.
- If the Java-backed controller is derived from AbstractWebScript, only the Java-backed controller is executed; the JavaScript controller is not executed.

Summary

In this article, we looked at the reasons for using web scripts. Then we executed a web script from a standalone Java program and moved on to invoking a web script from Alfresco Share. Lastly, we saw the difference between DeclarativeWebScript and AbstractWebScript.

Resources for Article:

Further resources on this subject:
Alfresco 3 Business Solutions: Types of E-mail Integration [article]
Alfresco 3: Writing and Executing Scripts [article]
Overview of REST Concepts and Developing your First Web Script using Alfresco [article]