
How-To Tutorials - Web Development


Introducing the Windows Store

Packt
12 Sep 2013
17 min read
(For more resources related to this topic, see here.)

Developing a Windows Store app is not just about design, coding, and markup. An essential part of the process that leads to a successful app happens on the Windows Store Dashboard: it is where you submit the app, pave its way to the market, and monitor how it is doing there. It is also where you can find all the information about your existing apps and plan your next app.

The submission process is broken down into seven phases. If you haven't already opened a Windows Store developer account, now is the time to do so, because you will need it to access your Dashboard. Before you sign up, make sure you have a credit card; the Windows Store requires one to open a developer account, even if you have a registration code that entitles you to free registration.

Once signed in, locate your app on the home page under the Apps in progress section and click on Edit. This will direct you to the Release Summary page, where the app will be titled AppName: Release 1. The release number auto-increments each time you submit a new release of the same app. The Release Summary page lists the steps that will get your app ready for Windows Store certification. On this page, you can enter all the information about your Windows Store app and upload its packages for certification. At the moment, the two buttons at the bottom of the page, labeled Review release info and Submit app for certification, are disabled; they will remain so until all the preceding steps have been marked Complete. The submission progress can always be saved and resumed later, so it is not necessarily a one-time mission. We'll go over these steps one by one:

- App name: This is the first step, and it involves reserving a unique name for the app.
- Selling details: This step includes selecting the following:
  - The app price tier option sets the price of your app (for example, free or 1.99 USD).
  - The free trial period option is the number of days a customer can use the app before they start paying to use it. This option is enabled only if the app price tier is not set to Free.
  - The Market where you would like the app to be listed in the Windows Store. Bear in mind that if your app isn't free, your developer account must have a valid tax profile for each country/region you select.
  - The release date option specifies the earliest date on which the app will be listed in the Windows Store. The default option is to release as soon as the app passes certification.
  - The App category and subcategory option indicates where your app will be listed in the Store, which in turn lists the apps under Categories.
  - The Hardware requirements option specifies the minimum requirements for the DirectX feature level and the system RAM.
  - The Accessibility option is a checkbox that, when checked, indicates that the app has been tested to meet accessibility guidelines.
- Services: In this step, you can add services to your app, such as Windows Azure Mobile Services and Live Services. You can also provide products and features that the customer can buy from within the app, called in-app offers.
- Age rating and rating certificates: In this step, you set an age rating for the app from the available Windows Store age ratings. You can also upload country/region-specific rating certificates in case your app is a game.
- Cryptography: In this step, you specify whether your app calls, supports, contains, or uses cryptography or encryption.
The following are some examples of how an app might apply cryptography or encryption:

- Use of a digital signature, such as authentication or integrity checking
- Encryption of any data or files that your app uses or accesses
- Key management, certificate management, or anything that interacts with a public key infrastructure
- Use of a secure communication channel such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
- Encryption of passwords or other forms of information security
- Copy protection or digital rights management (DRM)
- Antivirus protection

The remaining submission steps are as follows:

- Packages: In this step, you upload your app to the Store via the .appxupload file that was created in Visual Studio during the package-creation process. We will shortly see how to create an app package. The latest upload will show on the Release Summary page in the packages box and should be labeled Validation Complete.
- Description: In this step, you add a brief description (mandatory) of what the app does for your customers. The description has a 10,000-character limit and is displayed on the details page of the app's listing in the Windows Store. Besides the description, this step contains the following features:
  - App features: Optional. Allows you to list up to 20 of the app's key features.
  - Screenshots: Mandatory. Requires at least one .png image; the first can be a graphic that represents your app, but all the other images must be captioned screenshots taken directly from the app.
  - Notes: Optional. Any other information you think your customers need to know; for example, changes in an update.
  - Recommended hardware: Optional. The hardware configurations that the app needs to run.
  - Keywords: Optional. Keywords related to the app that help its listing appear in search results.
  - Copyright and trademark info: Mandatory. The copyright and trademark information that will be displayed to customers on the app's listing page.
  - Additional license terms: Optional. Any changes to the Standard App License Terms that customers accept when they acquire this app.
  - Promotional images: Optional. Images that the editors use to feature apps in the Store.
  - Website: Optional. The URL of the web page that describes the app, if any.
  - Support contact info: Mandatory. The support contact e-mail address or URL of the web page where your customers can reach out for help.
  - Privacy policy: Optional. The URL of the web page that contains the privacy policy.
- Notes to testers: This is the last step, and it involves adding notes about this specific release for the Windows Store team members who will review your app. This information helps the testers understand and use the app in order to complete their testing quickly and certify it for the Windows Store.

Each step remains disabled until the preceding one is completed, and steps that are in progress are labeled with the approximate time (in minutes) it will take to finish them. Whenever the work in a step is done, it is marked Complete on the summary page, as shown in the following screenshot:

Submitting the app for certification

After all the steps are marked Complete, you can submit the app for certification.
Once you click on Submit for certification, you will receive an e-mail notification that the Windows Store has received your app for certification. The dashboard will submit the app and direct you to the Certification status page. There, you can view the app's progress through the certification process, which includes the following steps:

- Pre-processing: This step checks that you have entered all the required details needed to publish the app.
- Security tests: This step tests your app for viruses and malware.
- Technical compliance: This step uses the Windows App Certification Kit to check that the app complies with the technical policies. The same assessment can be run locally using Visual Studio before you upload your package, as we will see shortly.
- Content compliance: This step is performed by testers from the Store team, who check whether the content available in the app complies with the content policies set by Microsoft.
- Release: This step involves releasing the app; it shouldn't take much time unless the publish date you specified in Selling details is in the future, in which case the app remains at this stage until that date arrives.
- Signing and publishing: This is the final step in the certification process. At this stage, the packages you submitted are signed with a trusted certificate that matches the technical details of your developer account, guaranteeing to potential customers and viewers that the app is certified by the Windows Store.

The following screenshot shows the certification process on the Windows Store Dashboard:

There is no need to wait on that page; you can click on the Go to dashboard button and you will be redirected to the My apps page. In the box containing the app you just submitted, you will notice that the Edit and Delete links are gone; instead, there is only the Status link, which takes you to the Certification status page. Additionally, a Notifications section will appear on this page and list status notifications about the app you just submitted, for example:

BookTestApp: Release 1 submitted for certification. 6/4/2013

When the certification process is complete, you will be notified of the result via e-mail. A notification will also be added to the dashboard main page showing the result of the certification, either failed or succeeded, with a link to the certification report. If the app fails, the certification report will show you which parts need revisiting. Moreover, there are resources to help you identify and fix the problems and errors that might arise during the certification process; these can be found at the Windows Dev Center page for Windows Store apps at the following location:

http://msdn.microsoft.com/en-us/library/windows/apps/jj657968.aspx

You can also check your dashboard at any time to see the status of your app during certification. After the certification process completes successfully, the app package is published to the Store with all the relevant data visible on your app listing page. This page can be accessed by millions of Windows 8 users, who will in turn be able to find, install, and use your app. Once the app has been published to the Store and is up and running, you can start collecting telemetry data on how it is doing there; these metrics include how many times the app has been launched, how long it has been running, and whether it is crashing or encountering JavaScript exceptions.
Once you enable telemetry data collection, the Store will retrieve this information for your apps, analyze it, and summarize it in very informative reports on your dashboard.

Now that we have covered almost everything you need to know about the process of submitting your app to the Windows Store, let us see what needs to be done in Visual Studio.

The Store within Visual Studio

The Windows Store can be accessed from within Visual Studio using the Store menu. Not everything we did on the dashboard can be done here, but a few very important functionalities, such as app package creation, are provided by this menu. The Store menu is located under the Project item in the menu bar in Visual Studio 2012 Ultimate; if you are using Visual Studio 2012 Express, you can find it directly in the menu bar, and it appears only when you're working on a Windows Store project or solution. We will look at the commands provided by the Store menu in detail; the following screenshot shows how the menu looks:

The command options in the Store menu are as follows:

- Open Developer Account...: This option opens a web page that directs you to the Windows Dev Center for Windows Store apps, where you can obtain a developer account for the Store.
- Reserve App Name...: This option directs you to your Windows Store Dashboard, specifically to the Submit an app page, where you can start with the first step, reserving an app name.
- Acquire Developer License...: This option opens a dialog window that prompts you to sign in with your Microsoft Account; after you sign in, it retrieves your developer license, or renews it if you already have one.
- Edit App Manifest: This option opens a tab with Manifest Designer, so you can edit the settings in the app's manifest file.
- Associate App with the Store...: This option opens a wizard-like window in Visual Studio containing the steps needed to associate an app with the Store. The first step prompts you to sign in; afterwards, the wizard retrieves the apps registered with the Microsoft Account you used to sign in. Select an app and the wizard automatically downloads the following values into the app's manifest file for the current project on the local computer:
  - Package's display name
  - Package's name
  - Publisher ID
  - Publisher's display name
- Capture Screenshot...: This option builds the current app project and launches it in the simulator instead of on the Start screen. Once the simulator opens, you can use the Copy screenshot button on the simulator sidebar to take a screenshot of the running app, which is saved as a .png file.
- Create App Package...: This option opens a window containing the Create App Packages wizard, which we will see shortly.
- Upload App Package...: This option opens a browser directed at the Release Summary page in the Windows Store Dashboard, provided your Store account is all set and your app is registered; otherwise, it just takes you to the sign-in page. On the Release Summary page, you can select Packages and from there upload your app package.

Creating an App Package

One of the most important utilities in the Store menu is app package creation, which builds and creates a package for the app that we can upload to the Store at a later stage. This package contains all the app-specific and developer-specific details that the Store requires.
Moreover, developers do not have to worry about any of the intricacies of the package-creation process, which is abstracted away and made available via a wizard-like window. In the Create App Packages wizard, we can create an app package for the Windows Store directly, or create one to be used for testing or local distribution. The wizard prompts you to specify metadata for the app package. The following screenshot shows the first two steps involved in this process:

In the first step, the wizard asks whether you want to build packages to upload to the Windows Store; choose Yes if you want to build a package for the Store, or No if you want a package for testing and local use. Taking the first scenario, click on Sign In to proceed and complete the sign-in process using your Microsoft Account. After a successful sign-in, the wizard prompts you to select the app name (step 2 of the preceding screenshot), either by clicking on one of the apps listed in the wizard or by choosing the Reserve Name link, which directs you to the Windows Store Dashboard to complete the process and reserve a new app name. The following screenshot shows step 3 and step 4:

Step 3 contains the Select and Configure Packages section, in which we select the Output location that points to where the package files will be created. In this section, we can also enter a version number for the package, or choose to have it auto-increment each time we package the app. Additionally, we can select the build configurations we want for the package from the Neutral, ARM, x64, and x86 options; by default, the current active project platform is selected, and a package is produced for each configuration type selected. The last option in this section is Include public symbol files. Selecting this option generates the public symbol files (.pdb) and adds them to the package, which will later help the Store analyze your app and will be used to map crashes of your app. Finally, click on Create and wait while the packaging is processed. Once it completes, the Package Creation Completed section appears (step 4), showing Output location as a link that directs you to the package files. There is also a button to directly launch the Windows App Certification Kit, which validates the app package against the Store requirements and generates a report of the validation. The following screenshot shows the window containing the Windows App Certification Kit process:

Alternatively, there is a second scenario for creating an app package, aimed more at testing. It is identical to the process we just saw, except that you choose No on the first page of the wizard and there is no need to sign in with a Microsoft Account. This option ends the wizard when package creation completes and displays the link to the output folder, but you will not be able to launch the Windows App Certification Kit. Packages created with this option can only be used on a computer that has a developer license installed. This scenario will be used more often, since a package destined for the Store should ideally be tested locally first. After creating the app package for testing or local distribution, you can install it on a local machine or device. Let's install the package locally.
Start the Create App Packages wizard, choose No in the first step, complete the wizard, and find the files of the app package you just created in the output folder you specified as the package location; name this folder PackageName_Test. This folder will contain an .appx file, a security certificate, a Windows PowerShell script, and other files. The Windows PowerShell script generated with the app package is used to install the package for testing. Navigate to the output folder and install the app package: locate the script file named Add-AppDevPackage, then right-click on it and choose Run with PowerShell, as shown in the following screenshot:

Run the script and it will perform the following steps:

1. It displays information about the Execution Policy Change and prompts you about changing the execution policy. Enter Y to continue.
2. It checks whether you have a developer license; if there isn't one, the script prompts you to get one.
3. It checks and verifies whether the app package and the required certificates are present; if any item is missing, you are notified to install it before the developer package can be installed.
4. It checks for and installs any dependency packages, such as the WinJS library.
5. It displays the message Success: Your package was successfully installed. Press Enter to continue, and the window closes.

The aforementioned steps are shown in the following screenshot:

Once the script has completed successfully, you can look for your app on the Start screen and start it. Note that users who are on a network and don't have permission to access the directory where the Add-AppDevPackage PowerShell script is located may see an error message. This issue can be solved by simply copying the contents of the output folder to the local machine before running the script. For any security-related issues, you may want to consult the Windows Developer Center for solutions.

Summary

In this article, we saw the ins and outs of the Windows Store Dashboard, and we covered the steps of the app submission process leading to the publishing of the app in the Store. We also learned about the Store menu in Visual Studio and the options it provides to interact with the dashboard. Moreover, we learned how to create app packages and how to deploy the app locally for testing.

Resources for Article:

Further resources on this subject:

- WPF 4.5 Application and Windows [Article]
- HTML5 Canvas [Article]
- Responsive Design with Media Queries [Article]


Slider for Dynamic Applications using script.aculo.us (part 1)

Packt
08 Oct 2009
5 min read
Before we start exploring the slider, let me give you a complete picture of its functionality with a simple example. Google Finance uses a horizontal slider showing the price on a given day, month, and year. Although that particular module is built in Flash, we can build a similar module using the script.aculo.us slider too. To understand the concept and how it works, look at the following screenshot:

Now that we have a clear understanding of what the slider is and how it appears in the UI, let's get started!

First steps with slider

As just explained, a slider can handle a single value or a set of values. It's important to understand at this point that, unlike other features of script.aculo.us, a slider is used in niche applications for specific functionality. The slider is not mere functionality; it reflects the behavior of both the users and the application. A typical constructor syntax definition for the slider is shown as follows:

new Control.Slider(handle, track [ , options ] );

Track usually represents a <div> element, and handle represents an element inside the track. As usual, there are a large number of options for us to fully customize our slider. For now, we will focus on understanding the concepts and fundamentals of the slider. We will surely have fun playing with code in our Code usage for the slider section.

Parameters for the slider definition

In this section, we will look at the parameters required to define the slider constructor:

- track: represents a range
- handle: slides along the track, that is, within a particular range, and holds the current value
- options: provided to fully customize the slider's look and feel as well as its functionality

It's time to put the theory into action. We need the appropriate markup for working with the slider: one <div> for the track and one <div> for each handle. The resulting code should look like the snippet shown as follows:

<div id="track"><div id="handle1"></div></div>

It is possible to have multiple handles inside a single track. The following code snippet is a simple example:

<div id="track"><div id="handle1"></div><div id="handle2"></div></div>

Options with the slider

Like all the wonderful features of script.aculo.us, the slider comes with a large number of options that allow us to create multiple behaviors for it. They are:

- Axis: This defines the orientation of the slider. The direction of movement can be horizontal or vertical; by default it is horizontal.
- Increment: This defines the relation between value and pixels.
- Maximum: This is the maximum value the slider can move to. For a vertical slider running top-to-bottom, the bottom-most value is the maximum; for a horizontal slider running left-to-right, the right-most value is the maximum.
- Minimum: This is the minimum value the slider can move to. For a vertical slider running top-to-bottom, the top-most value is the minimum; for a horizontal slider running left-to-right, the left-most value is the minimum.
- Range: This is the fixed bandwidth allowed for the values; it defines the minimum and maximum values.
- Values: Instead of a range, this passes a set of allowed values as an array.
- SliderValue: This sets the initial value of the slider. If not set, the extreme value of the slider is taken as the default value.
- Disabled: As the name suggests, this disables the slider functionality.

A short sketch combining several of these options follows.
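Here is a minimal sketch of a slider definition that wires a few of these options together (the element IDs track and handle1 come from the markup above, while the range, starting value, and increment are illustrative assumptions, not values from the original text):

// Assumes Prototype and script.aculo.us (slider.js) are loaded, and the
// track/handle1 markup shown above is present in the page.
document.observe('dom:loaded', function() {
  var slider = new Control.Slider('handle1', 'track', {
    axis: 'horizontal',   // default orientation
    range: $R(0, 100),    // allowed bandwidth of values
    sliderValue: 25,      // initial position of the handle
    increment: 2          // relation between value and pixels
  });
});

Note that the handle is passed first and the track second, exactly as in the constructor syntax definition shown earlier.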
Some of the functions offered by the slider are:

- setValue: This sets the value of the slider directly and moves the handle to the value's position.
- setDisabled: This disables the slider at runtime.
- setEnabled: This enables the slider at runtime.

Some of the callbacks supported by the slider are:

- onSlide: This is initiated on every slide movement. The called function receives the current slider value as a parameter.
- onChange: This is invoked whenever the value of the slider changes. The value can change due to slider movement or by calling the setValue function.

Types of slider

script.aculo.us provides us the flexibility and comfort of two different orientations for the slider:

- Vertical slider
- Horizontal slider

Vertical slider

When the axis orientation of a slider is defined as vertical, the slider becomes and acts as a vertical slider.

Horizontal slider

When the axis orientation of a slider is defined as horizontal, the slider becomes and acts as a horizontal slider.

So let's get our hands dirty with code and start defining the constructors for the horizontal and vertical sliders with options, as sketched below. Trust me, this will be fun.
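As a starting point, here is a hedged sketch of both orientations, wiring in the callbacks and runtime functions described above (the element IDs, the value set, and the console logging are assumptions for illustration; adapt them to your own markup):

// Horizontal slider over a continuous range, reporting its value
// through the onSlide and onChange callbacks described above.
var horizontal = new Control.Slider('handle1', 'track', {
  axis: 'horizontal',
  range: $R(0, 100),
  onSlide: function(value) {
    // Fires on every movement of the handle
    console.log('sliding: ' + value);
  },
  onChange: function(value) {
    // Fires when the value changes, by movement or via setValue
    console.log('changed: ' + value);
  }
});

// Vertical slider restricted to a discrete set of values.
var vertical = new Control.Slider('handle2', 'track2', {
  axis: 'vertical',
  values: [0, 25, 50, 75, 100]
});

// The runtime functions described above:
horizontal.setValue(50);   // moves the handle to the 50 position
vertical.setDisabled();    // turns the slider off at runtime
vertical.setEnabled();     // and back on again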


Building a Content Management System

Packt
25 Sep 2014
25 min read
In this article by Charles R. Portwood II, the author of Yii Project Blueprints, we will look at how to create a feature-complete content management system and blogging platform. (For more resources related to this topic, see here.)

Describing the project

Our CMS can be broken down into several different components:

- Users, who will be responsible for viewing and managing the content
- Content to be managed
- Categories for our content to be placed into
- Metadata to help us further define our content and users
- Search engine optimizations

Users

The first component of our application is the users, who will perform all the tasks in our application. For this application, we're going to largely reuse the user database and authentication system. In this article, we'll enhance this functionality by allowing social authentication. Our CMS will allow users to register new accounts from the data provided by Twitter; once registered, the CMS will allow them to sign in to our application by signing in to Twitter.

To know whether a user is a socially authenticated user, we have to make several changes to both our database and our authentication scheme. First, we need a way to indicate whether a user is socially authenticated. Rather than hardcoding an isAuthenticatedViaTwitter column in our database, we'll create a new database table called user_metadata: a simple table that contains the user's ID, a unique key, and a value. This allows us to store additional information about our users without having to explicitly change the user database table every time we want to make a change:

ID INTEGER PRIMARY KEY
user_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER

We'll also need to modify our UserIdentity class to allow socially authenticated users to sign in. To do this, we'll expand upon this class to create a RemoteUserIdentity class that works off the OAuth codes that Twitter (or any other third-party source that works with HybridAuth) provides to us, rather than authenticating against a username and password.

Content

At the core of our CMS is the content that we'll manage. For this project, we'll manage simple blog posts that can have additional metadata associated with them. Each post will have a title, a body, an author, a category, a unique URI or slug, and an indication of whether it has been published or not. Our database structure for this table will look as follows:

ID INTEGER PRIMARY KEY
title STRING
body TEXT
published INTEGER
author_id INTEGER
category_id INTEGER
slug STRING
created INTEGER
updated INTEGER

Each post will also have one or many metadata columns that further describe the posts we'll be creating. We can use this table (we'll call it content_metadata) to have our system store information about each post automatically for us, or to add information to our posts ourselves, thereby eliminating the need to constantly migrate our database every time we want to add a new attribute to our content:

ID INTEGER PRIMARY KEY
content_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER

Categories

Each post will be associated with a category in our system. These categories will help us further refine our posts. As with our content, each category will have its own slug. Before either a post or a category is saved, we'll need to verify that the slug is not already in use.
Our table structure will look as follows:

ID INTEGER PRIMARY KEY
name STRING
description TEXT
slug STRING
created INTEGER
updated INTEGER

Search engine optimizations

The last core component of our application is optimization for search engines, so that our content can be indexed quickly. SEO is important because it increases our discoverability and availability both on search engines and on other marketing materials. In our application, there are a couple of things we'll do to improve our SEO:

- The first SEO enhancement we'll add is a sitemap.xml file, which we can submit to popular search engines to index. Rather than crawl our content, search engines can very quickly index our sitemap.xml file, which means that our content will show up in search engines faster.
- The second enhancement is the slugs that we discussed earlier. Slugs allow us to indicate what a particular post is about directly from a URL. So rather than a URL that looks like http://chapter6.example.com/content/post/id/5, we can have URLs that look like http://chapter6.example.com/my-awesome-article. These types of URLs allow search engines and our users to know what our content is about without even looking at the content itself, such as when a user is browsing through their bookmarks or browsing a search engine.

Initializing the project

To provide us with a common starting ground, a skeleton project has been included with the project resources for this article. Included with this skeleton project are the necessary migrations, data files, controllers, and views to get us started with developing. Also included in this skeleton project are the user authentication classes. Copy this skeleton project to your web server, configure it so that it responds to chapter6.example.com as outlined at the beginning of the article, and then perform the following steps to make sure everything is set up:

1. Adjust the permissions on the assets and protected/runtime folders so that they are writable by your web server.
2. In this article, we'll once again use the latest version of MySQL (at the time of writing, MySQL 5.6). Make sure that your MySQL server is set up and running on your server. Then create a username, password, and database for our project to use, and update your protected/config/main.php file accordingly. For simplicity, you can use ch6_cms for each value.
3. Install our Composer dependencies:

composer install

4. Run the migrate command and install our mock data:

php protected/yiic.php migrate up --interactive=0
psql ch6_cms -f protected/data/postgres.sql

5. Finally, add your SendGrid credentials to your protected/config/params.php file:

'username' => '<username>',
'password' => '<password>',
'from' => 'noreply@ch6.home.erianna.net'

If everything is loaded correctly, you should see a 404 page similar to the following:

Exploring the skeleton project

There is actually a lot going on in the background to make this work, even if all we see is a 404 error. Before we start doing any development, let's take a look at a few of the classes that have been provided in our skeleton project in the protected/components folder.

Extending models from a common class

The first class that has been provided to us is an ActiveRecord extension called CMSActiveRecord, which all of our models will stem from. This class allows us to reduce the amount of code that we have to write in each class.
For now, we'll simply add CTimestampBehavior and the afterFind() method to store the old attributes, for the time when the need arises to compare the changed attributes with the new attributes:

class CMSActiveRecord extends CActiveRecord
{
    public $_oldAttributes = array();

    public function behaviors()
    {
        return array(
            'CTimestampBehavior' => array(
                'class' => 'zii.behaviors.CTimestampBehavior',
                'createAttribute' => 'created',
                'updateAttribute' => 'updated',
                'setUpdateOnCreate' => true
            )
        );
    }

    public function afterFind()
    {
        if ($this !== NULL)
            $this->_oldAttributes = $this->attributes;

        return parent::afterFind();
    }
}

Creating a custom validator for slugs

Since both the Content and Category classes have slugs, we'll need to add a custom validator to each class that enables us to ensure that the slug is not already in use by either a post or a category. To do this, we have another class called CMSSlugActiveRecord that extends CMSActiveRecord with a validateSlug() method, which we'll implement as follows:

class CMSSlugActiveRecord extends CMSActiveRecord
{
    public function validateSlug($attributes, $params)
    {
        // Fetch any records that have that slug
        $content = Content::model()->findByAttributes(array('slug' => $this->slug));
        $category = Category::model()->findByAttributes(array('slug' => $this->slug));

        $class = strtolower(get_class($this));

        if ($content == NULL && $category == NULL)
            return true;
        else if (($content == NULL && $category != NULL) || ($content != NULL && $category == NULL))
        {
            $this->addError('slug', 'That slug is already in use');
            return false;
        }
        else
        {
            if ($this->id == $$class->id)
                return true;
        }

        $this->addError('slug', 'That slug is already in use');
        return false;
    }
}

This implementation simply checks the database for any item that has that slug. If nothing is found, or if the current item is the item being modified, the validator returns true; otherwise, it adds an error to the slug attribute and returns false. Both our Content model and our Category model will extend from this class.

View management with themes

One of the largest challenges of working with larger applications is changing their appearance without locking functionality into our views. One way to further separate our business logic from our presentation logic is to use themes. Using themes in Yii, we can dynamically change the presentation layer of our application simply by utilizing the Yii::app()->setTheme('themename') method. Once this method is called, Yii will look for view files in themes/themename/views rather than protected/views. Throughout the rest of the article, we'll be adding views to a custom theme called main, which is located in the themes folder. To set this theme globally, we'll create a custom class called CMSController, which all of our controllers will extend from. For now, our theme name will be hardcoded within our application; this value could easily be retrieved from a database instead, allowing us to dynamically change themes from a cached or database value rather than changing it in our controller. Have a look at the following lines of code:

class CMSController extends CController
{
    public function beforeAction($action)
    {
        Yii::app()->setTheme('main');
        return parent::beforeAction($action);
    }
}

Truly dynamic routing

In our previous applications, we had long, boring URLs with lots of IDs and parameters in them. These URLs provided a terrible user experience and prevented search engines and users from knowing at a glance what the content was about, which in turn would hurt our SEO rankings on many search engines.
To get around this, we're going to heavily modify our UrlManager class to allow truly dynamic routing, which means that every time we create or update a post or a category, our URL rules will be updated.

Telling Yii to use our custom UrlManager

Before we can start working on our controllers, we need to create a custom UrlManager to handle the routing of our content, so that we can access our content by its slug. The steps are as follows:

1. The first change we need to make is to update the components section of our protected/config/main.php file. This tells Yii which class to use for the UrlManager component:

'urlManager' => array(
    'class' => 'application.components.CMSUrlManager',
    'urlFormat' => 'path',
    'showScriptName' => false
)

2. Next, within our protected/components folder, we need to create CMSUrlManager.php:

class CMSUrlManager extends CUrlManager {}

CUrlManager works by populating a rules array. When Yii is bootstrapped, it triggers the processRules() method to determine which route should be executed. We can overload this method to inject our own rules, which ensures that the action we want to be executed is executed.

To get started, let's first define a set of default routes that we want loaded. The routes defined in the following code snippet allow for pagination on our search and home pages, enable a static path for our sitemap.xml file, and provide a route for HybridAuth to use for social authentication:

public $defaultRules = array(
    '/sitemap.xml' => '/content/sitemap',
    '/search/<page:\d+>' => '/content/search',
    '/search' => '/content/search',
    '/blog/<page:\d+>' => '/content/index',
    '/blog' => '/content/index',
    '/' => '/content/index',
    '/hybrid/<provider:\w+>' => '/hybrid/index',
);

Then, we'll implement our processRules() method:

protected function processRules() {}

CUrlManager already has a public property that we can use to modify the rules (the same rules property that can be accessed from within our config file), so we'll inject our own rules into it. Since processRules() gets called on every page load, we'll also utilize caching so that our rules don't have to be generated every time. We'll start by trying to load any pregenerated rules from our cache, depending upon whether we are in debug mode or not:

$this->rules = !YII_DEBUG ? Yii::app()->cache->get('Routes') : array();

If the rules we get back are already set up, we'll simply return them; otherwise, we'll generate the rules, put them into our cache, and then append our basic URL rules:

if ($this->rules == false || empty($this->rules))
{
    $this->rules = array();
    $this->rules = $this->generateClientRules();
    $this->rules = CMap::mergearray($this->addRssRules(), $this->rules);
    Yii::app()->cache->set('Routes', $this->rules);
}

$this->rules['<controller:\w+>/<action:\w+>/<id:\w+>'] = '/';
$this->rules['<controller:\w+>/<action:\w+>'] = '/';

return parent::processRules();

For abstraction purposes, within our processRules() method we've utilized two methods that we'll need to create: generateClientRules(), which generates the rules for content and categories, and addRSSRules(), which generates the RSS routes for each category.
The first method, generateClientRules(), simply merges our default rules, defined earlier, with the rules generated from our content and categories, which are populated by the generateRules() method:

private function generateClientRules()
{
    $rules = CMap::mergeArray($this->defaultRules, $this->rules);
    return CMap::mergeArray($this->generateRules(), $rules);
}

private function generateRules()
{
    return CMap::mergeArray($this->generateContentRules(), $this->generateCategoryRules());
}

The generateRules() method that we just defined actually calls the methods that build our routes. Each route is a key-value pair that takes the following form:

array(
    '<slug>' => '<controller>/<action>/id/<id>'
)

Content rules consist of an entry for every post that is published. Have a look at the following code:

private function generateContentRules()
{
    $rules = array();
    $criteria = new CDbCriteria;
    $criteria->addCondition('published = 1');
    $content = Content::model()->findAll($criteria);

    foreach ($content as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "content/view/id/{$el->id}";
        $rules[$rule] = "content/view/id/{$el->id}";
    }

    return $rules;
}

Our category rules consist of all the categories in our database. Have a look at the following code:

private function generateCategoryRules()
{
    $rules = array();
    $categories = Category::model()->findAll();

    foreach ($categories as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "category/index/id/{$el->id}";
        $rules[$rule] = "category/index/id/{$el->id}";
    }

    return $rules;
}

Finally, we'll add our RSS rules, which allow RSS readers to read all the content for the entire site or for a particular category, as follows:

private function addRSSRules()
{
    $categories = Category::model()->findAll();

    foreach ($categories as $category)
        $routes[$category->slug . '.rss'] = "category/rss/id/{$category->id}";

    $routes['blog.rss'] = '/category/rss';
    return $routes;
}

Displaying and managing content

Now that Yii knows how to route our content, we can begin work on displaying and managing it. Begin by creating a new controller called ContentController in protected/controllers that extends CMSController:

class ContentController extends CMSController {}

To start with, we'll define our accessRules() method and the default layout that we're going to use:

public $layout = 'default';

public function filters()
{
    return array(
        'accessControl',
    );
}

public function accessRules()
{
    return array(
        array('allow',
            'actions' => array('index', 'view', 'search'),
            'users' => array('*')
        ),
        array('allow',
            'actions' => array('admin', 'save', 'delete'),
            'users' => array('@'),
            'expression' => 'Yii::app()->user->role==2'
        ),
        array('deny', // deny all users
            'users' => array('*'),
        ),
    );
}

Rendering the sitemap

The first method we'll implement is our sitemap action. In our ContentController, create a new method called actionSitemap():

public function actionSitemap() {}

The steps to be performed are as follows:

1. Since sitemaps come in XML formatting, we'll start by disabling the WebLogRoute defined in our protected/config/main.php file.
This will ensure that our XML validates when search engines attempt to index it:

Yii::app()->log->routes[0]->enabled = false;

2. We'll then send the appropriate XML headers, disable the rendering of the layout, and flush any content that may have been queued to be sent to the browser:

ob_end_clean();
header('Content-type: text/xml; charset=utf-8');
$this->layout = false;

3. Then, we'll load all the published entries and categories and send them to our sitemap view:

$content = Content::model()->findAllByAttributes(array('published' => 1));
$categories = Category::model()->findAll();
$this->renderPartial('sitemap', array(
    'content' => $content,
    'categories' => $categories,
    'url' => 'http://' . Yii::app()->request->serverName . Yii::app()->baseUrl
));

4. Finally, we have two options for rendering this view: we can either make it a part of our theme in themes/main/views/content/sitemap.php, or we can place it in protected/views/content/sitemap.php. Since a sitemap's structure is unlikely to change, let's put it in the protected/views folder:

<?php echo '<?xml version="1.0" encoding="UTF-8"?>'; ?>
<urlset>
<?php foreach ($content as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>1</priority>
    </url>
<?php endforeach; ?>
<?php foreach ($categories as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.7</priority>
    </url>
<?php endforeach; ?>
</urlset>

You can now load http://chapter6.example.com/sitemap.xml in your browser to see the sitemap. Before you make your site live, be sure to submit this file to search engines for them to index.

Displaying a list view of content

Next, we'll implement the actions necessary to display all of our content and a particular post. We'll start by providing a paginated view of our posts. Since CListView and the Content model's search() method already provide this functionality, we can utilize those classes to generate and display this data:

1. To begin, open protected/models/Content.php and modify the return value of the search() method as follows. This ensures that Yii's pagination uses the correct variable in our CListView, and tells Yii how many results to load per page:

return new CActiveDataProvider($this, array(
    'criteria' => $criteria,
    'pagination' => array(
        'pageSize' => 5,
        'pageVar' => 'page'
    )
));

2. Next, implement the actionIndex() method with the $page parameter.
We've already told our UrlManager how to handle this, which means that we'll get pretty URIs for pagination (for example, /blog, /blog/2, /blog/3, and so on):

public function actionIndex($page=1)
{
    // Model search without $_GET params
    $model = new Content('search');
    $model->unsetAttributes();
    $model->published = 1;

    $this->render('//content/all', array(
        'dataprovider' => $model->search()
    ));
}

3. Then we'll create a view in themes/main/views/content/all.php that displays the data within our dataProvider:

<?php $this->widget('zii.widgets.CListView', array(
    'dataProvider' => $dataprovider,
    'itemView' => '//content/list',
    'summaryText' => '',
    'pager' => array(
        'htmlOptions' => array('class' => 'pager'),
        'header' => '',
        'firstPageCssClass' => 'hide',
        'lastPageCssClass' => 'hide',
        'maxButtonCount' => 0
    )
)); ?>

4. Finally, copy themes/main/views/content/list.php (the itemView referenced above) from the project resources folder so that our views can render.

Since our database has already been populated with some sample data, you can start playing around with the results right away, as shown in the following screenshot:

Displaying content by ID

Since our routing rules are already set up, displaying our content is extremely simple. All that we have to do is search for a published model with the ID passed to the view action and render it:

public function actionView($id=NULL)
{
    // Retrieve the data
    $content = Content::model()->findByPk($id);

    // beforeViewAction should catch this
    if ($content == NULL || !$content->published)
        throw new CHttpException(404, 'The article you specified does not exist.');

    $this->render('view', array(
        'id' => $id,
        'post' => $content
    ));
}

After copying themes/main/views/content/view.php from the project resources folder into your project, you'll be able to click through to a particular post from the home page.

In its present form, this action has introduced an interesting side effect that could negatively impact our SEO rankings on search engines: the same entry can now be accessed from two URIs. For example, http://chapter6.example.com/content/view/id/1 and http://chapter6.example.com/quis-condimentum-tortor now bring up the same post. Fortunately, correcting this bug is fairly easy. Since the goal of our slugs is to provide more descriptive URIs, we'll simply block access to the view if a user tries to access it from the non-slugged URI. We'll do this by creating a new method called beforeViewAction() that takes the entry ID as a parameter and gets called right after the actionView() method is invoked. This private method simply checks the URI from CHttpRequest to determine how actionView was accessed, and returns a 404 if it wasn't through one of our beautiful slugs:

private function beforeViewAction($id=NULL)
{
    // If we do not have an ID, consider it to be null, and throw a 404 error
    if ($id == NULL)
        throw new CHttpException(404, 'The specified post cannot be found.');

    // Retrieve the HTTP Request
    $r = new CHttpRequest();

    // Retrieve the actual URI
    $requestUri = str_replace($r->baseUrl, '', $r->requestUri);

    // Retrieve the route
    $route = '/' . $this->getRoute() . '/' . $id;
    $requestUri = preg_replace('/\?(.*)/', '', $requestUri);

    // If the route and the URI are the same, then a direct access attempt
    // was made, and we need to block access to the controller
    if ($requestUri == $route)
        throw new CHttpException(404, 'The requested post cannot be found.');

    return str_replace($r->baseUrl, '', $r->requestUri);
}

Then, right as actionView starts, we can simultaneously set the correct return URL and block access to the content if it wasn't accessed through the slug, as follows:

Yii::app()->user->setReturnUrl($this->beforeViewAction($id));

Adding comments to our CMS with Disqus

Presently, our content is only informative in nature; we have no way for our users to tell us what they thought about an entry. To encourage engagement, we can add a commenting system to our CMS to further engage with our readers. Rather than writing our own commenting system, we can leverage Disqus, a free, third-party commenting system. Disqus comments are implemented in JavaScript, and we can create a custom widget wrapper for them to display comments on our site. The steps are as follows:

1. To begin, log in to the Disqus account you created at the beginning of this article, as outlined in the prerequisites section. Then navigate to http://disqus.com/admin/create/ and fill out the form fields as prompted and as shown in the following screenshot:

2. Then, add a disqus section to your protected/config/params.php file with your site shortname:

'disqus' => array(
    'shortname' => 'ch6disqusexample',
)

3. Next, create a new widget in protected/components called DisqusWidget.php. This widget will be loaded within our view and will be populated by our Content model:

class DisqusWidget extends CWidget {}

4. Begin by specifying the public properties that our view will be able to inject into, as follows:

public $shortname = NULL;
public $identifier = NULL;
public $url = NULL;
public $title = NULL;

5. Then, overload the init() method to load the Disqus JavaScript callback and to populate the JavaScript variables with those passed to the widget, as follows:

public function init()
{
    parent::init();

    if ($this->shortname == NULL)
        throw new CHttpException(500, 'Disqus shortname is required');

    echo "<div id='disqus_thread'></div>";

    Yii::app()->clientScript->registerScript('disqus', "
        var disqus_shortname = '{$this->shortname}';
        var disqus_identifier = '{$this->identifier}';
        var disqus_url = '{$this->url}';
        var disqus_title = '{$this->title}';

        /* * * DON'T EDIT BELOW THIS LINE * * */
        (function() {
            var dsq = document.createElement('script');
            dsq.type = 'text/javascript';
            dsq.async = true;
            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
    ");
}

6. Finally, within our themes/main/views/content/view.php file, load the widget as follows:

<?php $this->widget('DisqusWidget', array(
    'shortname' => Yii::app()->params['includes']['disqus']['shortname'],
    'url' => $this->createAbsoluteUrl('/' . $post->slug),
    'title' => $post->title,
    'identifier' => $post->id
)); ?>

Now, when you load any given post, Disqus comments will also be loaded with that post. Go ahead and give it a try!

Searching for content

Next, we'll implement a search method so that our users can search for posts.
To do this, we'll implement an instance of CActiveDataProvider and pass that data to our themes/main/views/content/all.php view to be rendered and paginated:

public function actionSearch()
{
    $param = Yii::app()->request->getParam('q');

    $criteria = new CDbCriteria;
    $criteria->addSearchCondition('title', $param, 'OR');
    $criteria->addSearchCondition('body', $param, 'OR');

    $dataprovider = new CActiveDataProvider('Content', array(
        'criteria' => $criteria,
        'pagination' => array(
            'pageSize' => 5,
            'pageVar' => 'page'
        )
    ));

    $this->render('//content/all', array('dataprovider' => $dataprovider));
}

Since our view file already exists, we can now search for content in our CMS.

Managing content

Next, we'll implement a basic set of management tools that will allow us to create, update, and delete entries:

1. We'll start by defining our loadModel() method and the actionDelete() method:

private function loadModel($id=NULL)
{
    if ($id == NULL)
        throw new CHttpException(404, 'No category with that ID exists');

    $model = Content::model()->findByPk($id);

    if ($model == NULL)
        throw new CHttpException(404, 'No category with that ID exists');

    return $model;
}

public function actionDelete($id)
{
    $this->loadModel($id)->delete();
    $this->redirect($this->createUrl('content/admin'));
}

2. Next, we can implement our admin view, which allows us to view all the content in our system and to create new entries. Be sure to copy the themes/main/views/content/admin.php file from the project resources folder into your project before using this view:

public function actionAdmin()
{
    $model = new Content('search');
    $model->unsetAttributes();

    if (isset($_GET['Content']))
        $model->attributes = $_GET;

    $this->render('admin', array('model' => $model));
}

3. Finally, we'll implement a save view to create and update entries. Saving content simply passes it through our Content model's validation rules; the only override we'll add is ensuring that the author is assigned to the user editing the entry. Before using this view, be sure to copy the themes/main/views/content/save.php file from the project resources folder into your project:

public function actionSave($id=NULL)
{
    if ($id == NULL)
        $model = new Content;
    else
        $model = $this->loadModel($id);

    if (isset($_POST['Content']))
    {
        $model->attributes = $_POST['Content'];
        $model->author_id = Yii::app()->user->id;

        if ($model->save())
        {
            Yii::app()->user->setFlash('info', 'The article was saved');
            $this->redirect($this->createUrl('content/admin'));
        }
    }

    $this->render('save', array('model' => $model));
}

At this point, you can log in to the system using the credentials provided in the following table and start managing entries:

Username            Password
user1@example.com   test
user2@example.com   test

Summary

In this article, we dug deeper into the Yii framework by manipulating our CUrlManager class to generate completely dynamic and clean URIs. We also covered the use of Yii's built-in theming to dynamically change the frontend appearance of our site by simply changing a configuration value.

Resources for Article:

Further resources on this subject:

- Creating an Extension in Yii 2 [Article]
- Yii 1.1: Using Zii Components [Article]
- Agile with Yii 1.1 and PHP5: The TrackStar Application [Article]


Introduction to Spring Web Application in No Time

Packt
10 Sep 2015
8 min read
Many official Spring tutorials have both a Gradle build and a Maven build, so you will find examples easily if you decide to stick with Maven. Spring 4 is fully compatible with Java 8, so it would be a shame not to take advantage of lambdas to simplify our code base. In this article by Geoffroy Warin, author of the book Mastering Spring MVC 4, we will see some Git commands; it's a good idea to keep track of your progress and commit whenever you are in a stable state. (For more resources related to this topic, see here.)

Getting started with Spring Tool Suite

One of the best ways to get started with Spring and discover the numerous tutorials and starter projects that the Spring community offers is to download Spring Tool Suite (STS). STS is a custom version of Eclipse designed to work with various Spring projects, as well as Groovy and Gradle. Even if, like me, you have another IDE that you would rather work with, we recommend that you give STS a shot, because it gives you the opportunity to explore Spring's vast ecosystem in a matter of minutes with the "Getting Started" projects.

So, let's visit https://Spring.io/tools/sts/all and download the latest release of STS. Before we generate our first Spring Boot project, we will need to install the Gradle support for STS. You can find a Manage IDE Extensions button on the dashboard; you will then need to download the Gradle Support software in the Language and framework tooling section. We recommend installing the Groovy Eclipse plugin along with the Groovy 2.4 compiler, as shown in the following screenshot. These will be needed later in this article when we set up acceptance tests with Geb:

We now have two main options to get started. The first option is to navigate to File | New | Spring Starter Project, as shown in the following screenshot. This will give you the same options as http://start.Spring.io, embedded in your IDE:

The second way is to navigate to File | New | Import Getting Started Content. This will give you access to all the tutorials available on Spring.io. You will have the choice of working with either Gradle or Maven, as shown in the following screenshot:

You can also check out the starter code to follow along with the tutorial, or get the complete code directly. There is a lot of very interesting content available in the Getting Started Content; it demonstrates the integration of Spring with various technologies that you might be interested in. For the moment, we will generate a web project as shown in the preceding image. It will be a Gradle application, producing a JAR file and using Java 8. Here is the configuration we want to use:

Property      Value
Name          masterSpringMvc
Type          Gradle project
Packaging     Jar
Java version  1.8
Language      Java
Group         masterSpringMvc
Artifact      masterSpringMvc
Version       0.0.1-SNAPSHOT
Description   Be creative!
Package       masterSpringMvc

On the second screen, you will be asked for the Spring Boot version you want to use and the dependencies that should be added to the project. At the time of writing this, the latest version of Spring Boot was 1.2.5. Ensure that you always check out the latest release. The latest snapshot version of Spring Boot will also be available by the time you read this. If Spring Boot 1.3 isn't released by then, you can probably give it a shot; one of its big features is the awesome dev tools. Refer to https://spring.io/blog/2015/06/17/devtools-in-spring-boot-1-3 for more details.
At the bottom of the configuration window, you will see a number of checkboxes representing the various Boot starter libraries. These are dependencies that can be appended to your build file. They provide autoconfigurations for various Spring projects. We are only interested in Spring MVC for the moment, so we will check only the Web checkbox.

A JAR for a web application?

Some of you might find it odd to package your web application as a JAR file. While it is still possible to use WAR files for packaging, it is not always the recommended practice. By default, Spring Boot will create a fat JAR, which will include all the application's dependencies and provide a convenient way to start a web server using java -jar. Our application will be packaged as a JAR file. If you want to create a WAR file, refer to http://spring.io/guides/gs/convert-jar-to-war/.

Have you clicked on Finish yet? If you have, you should get the following project structure:

We can see our main class, MasterSpringMvcApplication, and its test suite, MasterSpringMvcApplicationTests. There are also two empty folders, static and templates, where we will put our static web assets (images, styles, and so on) and, obviously, our templates (JSP, FreeMarker, Thymeleaf). The next file is an empty application.properties file, which is the default Spring Boot configuration file. It's a very handy file and we'll see how Spring Boot uses it throughout this article. The last file is build.gradle, the build file that we will detail in a moment.

If you feel ready to go, run the main method of the application. This will launch a web server for us. To do this, go to the main method of the application and navigate to Run as | Spring Application, either by right-clicking on the class or by clicking on the green play button in the toolbar. Doing so and navigating to http://localhost:8080 will produce an error. Don't worry, and read on.

Now we will show you how to generate the same project without STS, and we will come back to all these files.

Getting started with IntelliJ

IntelliJ IDEA is a very popular tool among Java developers. For the past few years I've been very pleased to pay JetBrains a yearly fee for this awesome editor. IntelliJ also has a way of creating Spring Boot projects very quickly. Go to the new project menu and select the Spring Initializr project type:

This will give us exactly the same options as STS. You will need to import the Gradle project into IntelliJ. We recommend generating the Gradle wrapper first (refer to the following Gradle build section). If needed, you can reimport the project by opening its build.gradle file again.

Getting started with start.spring.io

Go to http://start.spring.io to get started with start.spring.io. The system behind this remarkable Bootstrap-like website should be familiar to you! You will see the following screenshot when you go to the previously mentioned link:

Indeed, the same options available with STS can be found here. Clicking on Generate Project will download a ZIP file containing our starter project.

Getting started with the command line

For those of you who are addicted to the console, it is possible to curl http://start.spring.io. Doing so will display instructions on how to structure your curl request.
For instance, to generate the same project as earlier, you can issue the following command:

$ curl http://start.spring.io/starter.tgz -d name=masterSpringMvc -d dependencies=web -d language=java -d javaVersion=1.8 -d type=gradle-project -d packageName=masterSpringMvc -d packaging=jar -d baseDir=app | tar -xzvf -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1255  100  1119  100   136   1014    123  0:00:01  0:00:01 --:--:--  1015
x app/
x app/src/
x app/src/main/
x app/src/main/java/
x app/src/main/java/com/
x app/src/main/java/com/geowarin/
x app/src/main/resources/
x app/src/main/resources/static/
x app/src/main/resources/templates/
x app/src/test/
x app/src/test/java/
x app/src/test/java/com/
x app/src/test/java/com/geowarin/
x app/build.gradle
x app/src/main/java/com/geowarin/AppApplication.java
x app/src/main/resources/application.properties
x app/src/test/java/com/geowarin/AppApplicationTests.java

And voilà! You are now ready to get started with Spring without leaving the console, a dream come true. You might consider creating an alias for the previous command; it will help you prototype Spring applications very quickly (a sketch of such an alias follows after the resource list below).

Summary

In this article, we leveraged Spring Boot's autoconfiguration capabilities to build an application with zero boilerplate or configuration files, and we saw how to generate the same starter project with Spring Tool Suite, IntelliJ IDEA, start.spring.io, and the command line.

Resources for Article:

Further resources on this subject:
Welcome to the Spring Framework [article]
Mailing with Spring Mail [article]
Creating a Spring Application [article]
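Following up on the command-line tip above, here is a minimal sketch of such an alias, assuming a Unix-like shell; the alias name and the chosen defaults are our own:

alias spring-web='curl http://start.spring.io/starter.tgz -d dependencies=web -d type=gradle-project -d baseDir=app | tar -xzvf -'

With this in your shell profile, running spring-web in any directory unpacks a fresh Gradle web starter project into an app subfolder.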


Getting Started with Drupal 6 Panels

Packt
20 Aug 2010
7 min read
(For more resources on Drupal, see here.)

Introduction

Drupal Panels are distinct pieces of rectangular content that create a custom layout of the page, making content more visible and presentable as a structured web page. Panels is a freely-distributed, open source module developed for Drupal 6. With Panels, you can display various content in a customizable grid layout on one page. Each page created by Panels can include a unique structure and content. Using the drag-and-drop user interface, you select a design for the layout and position various kinds of content (or add custom content) within that layout.

Panels integrates with other Drupal modules like Views and CCK. Permissions, deciding which users can view which elements, are also integrated into Panels. You can even override system pages such as the display of keywords (taxonomy) and individual content pages (nodes). In the next section, we will see what Panels can actually do, as defined on drupal.org: http://drupal.org/project/panels.

Basically, Panels helps you arrange a large amount of content on a single page. While it can be used to arrange a lot of content on one page, it is equally useful for small amounts of related content and/or teasers.

Panels supports styles, which control how individual content panes, regions within a panel, and the entire panel are rendered. While Panels ships with a few styles, styles can be provided as plugins by modules, as well as by themes.

The user interface is nice for visually designing a layout, but a real HTML guru doesn't want the somewhat weighty HTML that this will create. Modules and themes can provide custom layouts that fit a designer's exacting specifications, while still allowing site builders to place content wherever they like.

Panels includes a pluggable caching mechanism. A single cache type is included: the 'simple' cache, which is time-based. Since most sites have very specific caching needs based upon their content and traffic patterns, this system was designed to let sites devise their own triggers for cache clearing and implement plugins that will work with Panels. A cache mechanism can be defined for each pane or region within the Panels page. Simple caching is time-based and a hard limit: once cached, content remains cached until the time limit expires. If "arguments" is selected, the content is cached per individual argument to the entire display; if "contexts" is selected, the content is cached per unique context in the pane or display; if "neither", there is only one cache for the pane. Panels can also be cached as a whole, meaning the entire output of the Panels page is cached, or individual content panes that are heavy, like large views, can be cached.

Panels can be integrated with the Drupal module Organic Groups through the og_panels module to allow individual groups to have their own customized layouts. Panels also integrates with Views to allow administrators to add any view as content. We will discuss module integration in the coming recipes.

Shown in the previous screenshot is one of the example sites that use Panels 3 for its home page (http://concernfast.org). The home page is built using a custom Panels 3 layout with a couple of dedicated content types that are used to build nodes to drop into the various Panels areas. The case study can be found at: http://drupal.org/node/629860.
Panels arranges your site content into an easy navigational pattern, which can be clearly seen in the following screenshot. There are several terms often used within Panels that administrators should become familiar with, as we will be using them throughout the recipes. The common terms in Panels are:

Panels page: The page that will display your panels. This could be the front page of a site, a news page, and so on. These pages are given a path just like any other node.
Panel: A container for content. A panel can have several pieces of content within it, and can be styled.
Pane: A unit of content in a panel. This can be a node, a view, arbitrary HTML code, and so on. Panes can be shifted up and down within a panel and moved from one panel to another.
Layout: Provides a pre-defined collection of panels that you can select from. A layout might have two columns, a header, a footer, three columns in the middle, or even seven panels stacked like bricks.

Setting up Ctools and Panels

We will now set up Ctools, which is required for Panels. "Chaos tools" is a centralized library used by two of the most powerful Drupal modules, Panels and Views. Most functions in Panels are inherited from the Chaos tools library.

Getting ready

Download the Panels module from the Drupal website: http://drupal.org/project/panels
You will need Ctools as a dependency, which can be downloaded from: http://drupal.org/project/ctools

How to do it...

1. Upload both files, Ctools and Panels, into /sites/all/modules. It is always a best practice to keep installed modules separate from the "core" (the files that install with Drupal) in the /sites/all/modules folder. This makes it easy to upgrade the modules at a later stage, when your site becomes complex and has many modules.
2. Go to the modules page in admin (Admin | Site Building | Modules) and enable Ctools, then enable Panels.
3. Go to permissions (Admin | User Management | Permissions) and give site builders permission to use Panels.
4. Enable the Page manager module in the Chaos tools suite. This module enables the page manager for Panels. To integrate Views with Panels, enable the Views content panes module too. We will discuss more about Views later on.
5. Enable Panels and set the permissions. You will need to enable Panel nodes, the Panels module, and Mini panels too (as shown in the following screenshot), as we will use them in our advanced recipes.
6. Go to administer by module under Site building | Modules. Here you will find the Panels user interface.

There's more...

The Chaos tools suite includes the following tools, which form the base of the Panels module. You do not need to know them in detail to use Panels, but it is good to know what the suite includes. This is the powerhouse that makes Panels the most efficient tool for designing complex layouts:

Plugins: Tools to make it easy for modules to let other modules implement plugins from .inc files.
Exportables: Tools to make it easier for modules to have objects that live in the database or live in code, such as 'default views'.
AJAX responder: Tools to make it easier for the server to handle AJAX requests and tell the client what to do with them.
Form tools: Tools to make it easier for forms to deal with AJAX.
Object caching: A tool to make it easier to edit an object across multiple page requests and cache the editing work.
Contexts: The notion of wrapping objects in a unified wrapper and providing an API to create and accept these contexts as input.
Modal dialog: A tool to make it simple to put a form in a modal dialog.
Dependent: A simple form widget to make form items appear and disappear based upon the selections in another item.
Content: Pluggable content types used as panes in Panels and other modules like Dashboard.
Form wizard: An API to make multi-step forms much easier.
CSS tools: Tools to cache and sanitize CSS easily, making user-input CSS safe.

How it works...

Now we have our Panels UI ready to generate layouts. We will discuss each of them in the following recipes. The Panels dashboard will help you to generate layouts for Drupal with ease.
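If you prefer the command line over the admin UI, the same setup can be scripted. A minimal sketch, assuming you have Drush installed; the module machine names below can vary slightly between releases:

# Download Ctools and Panels into sites/all/modules
drush dl ctools panels
# Enable the modules, including Page manager, Panel nodes, and Mini panels
drush en -y ctools page_manager panels panels_node panels_mini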


Building the Facebook Clone using Ruby

Packt
25 Aug 2010
8 min read
(For more resources on Ruby, see here.)

This is the largest clone and has many components. Some of the less interesting parts of the code are not listed or described here. To get access to the full source code, please go to http://github.com/sausheong/saushengine.

Configuring the clone

We use a few external APIs in Colony, so we need to configure our access to these APIs. In Colony, all these API keys and settings are stored in a Ruby file called config.rb, as shown below:

S3_CONFIG = {}
S3_CONFIG['AWS_ACCESS_KEY'] = '<AWS ACCESS KEY>'
S3_CONFIG['AWS_SECRET_KEY'] = '<AWS SECRET KEY>'
RPX_API_KEY = '<RPX API KEY>'

Modeling the data

You will find a large number of classes and relationships in this article. The following diagram shows how the clone is modeled:

User

The first class we look at is the User class. It has more relationships with other classes than any other class, and the relationship between users follows a friends model rather than a followers model.

class User
  include DataMapper::Resource

  property :id, Serial
  property :email, String, :length => 255
  property :nickname, String, :length => 255
  property :formatted_name, String, :length => 255
  property :sex, String, :length => 6
  property :relationship_status, String
  property :provider, String, :length => 255
  property :identifier, String, :length => 255
  property :photo_url, String, :length => 255
  property :location, String, :length => 255
  property :description, String, :length => 255
  property :interests, Text
  property :education, Text

  has n, :relationships
  has n, :followers, :through => :relationships, :class_name => 'User', :child_key => [:user_id]
  has n, :follows, :through => :relationships, :class_name => 'User', :remote_name => :user, :child_key => [:follower_id]
  has n, :statuses
  belongs_to :wall
  has n, :groups, :through => Resource
  has n, :sent_messages, :class_name => 'Message', :child_key => [:user_id]
  has n, :received_messages, :class_name => 'Message', :child_key => [:recipient_id]
  has n, :confirms
  has n, :confirmed_events, :through => :confirms, :class_name => 'Event', :child_key => [:user_id], :date.gte => Date.today
  has n, :pendings
  has n, :pending_events, :through => :pendings, :class_name => 'Event', :child_key => [:user_id], :date.gte => Date.today
  has n, :requests
  has n, :albums
  has n, :photos, :through => :albums
  has n, :comments
  has n, :activities
  has n, :pages

  validates_is_unique :nickname, :message => "Someone else has taken up this nickname, try something else!"

  after :create, :create_s3_bucket
  after :create, :create_wall

  def add_friend(user)
    Relationship.create(:user => user, :follower => self)
  end

  def friends
    (followers + follows).uniq
  end

  def self.find(identifier)
    u = first(:identifier => identifier)
    u = new(:identifier => identifier) if u.nil?
    return u
  end

  def feed
    feed = [] + activities
    friends.each do |friend|
      feed += friend.activities
    end
    return feed.sort {|x,y| y.created_at <=> x.created_at}
  end

  def possessive_pronoun
    sex.downcase == 'male' ? 'his' : 'her'
  end

  def pronoun
    sex.downcase == 'male' ? 'he' : 'she'
  end

  def create_s3_bucket
    S3.create_bucket("fc.#{id}")
  end

  def create_wall
    self.wall = Wall.create
    self.save
  end

  def all_events
    confirmed_events + pending_events
  end

  def friend_events
    events = []
    friends.each do |friend|
      events += friend.confirmed_events
    end
    return events.sort {|x,y| y.time <=> x.time}
  end

  def friend_groups
    groups = []
    friends.each do |friend|
      groups += friend.groups
    end
    groups - self.groups
  end
end

As mentioned in the design section above, the data used in Colony is user-centric.
All data in Colony eventually links up to a user. A user has the following relationships with other models:

A user has none, one, or more status updates
A user is associated with a wall
A user belongs to none, one, or more groups
A user has none, one, or more sent and received messages
A user has none, one, or more confirmed and pending attendances at events
A user has none, one, or more user invitations
A user has none, one, or more albums, and in each album there are none, one, or more photos
A user makes none, one, or more comments
A user has none, one, or more pages
A user has none, one, or more activities
Finally, of course, a user has one or more friends

Once a user is created, there are two actions we need to take. Firstly, we need to create an Amazon S3 bucket for this user, to store his photos:

after :create, :create_s3_bucket
def create_s3_bucket
  S3.create_bucket("fc.#{id}")
end

We also need to create a wall for the user, where he or his friends can post to:

after :create, :create_wall
def create_wall
  self.wall = Wall.create
  self.save
end

Adding a friend means creating a relationship between the user and the friend:

def add_friend(user)
  Relationship.create(:user => user, :follower => self)
end

Colony treats the following relationship as a friends relationship. The question here is: who will initiate the request to join? This is why, when we ask the User object to give us its friends, it adds both followers and follows together and returns a unique array representing all the user's friends:

def friends
  (followers + follows).uniq
end

In the Relationship class, each time a new relationship is created, an Activity object is also created to indicate that both users are now friends:

class Relationship
  include DataMapper::Resource

  property :user_id, Integer, :key => true
  property :follower_id, Integer, :key => true

  belongs_to :user, :child_key => [:user_id]
  belongs_to :follower, :class_name => 'User', :child_key => [:follower_id]

  after :save, :add_activity

  def add_activity
    Activity.create(:user => user, :activity_type => 'relationship', :text => "<a href='/user/#{user.nickname}'>#{user.formatted_name}</a> and <a href='/user/#{follower.nickname}'>#{follower.formatted_name}</a> are now friends.")
  end
end

Finally, we get the user's news feed by taking the user's activities and, for each of the user's friends, adding their activities as well:

def feed
  feed = [] + activities
  friends.each do |friend|
    feed += friend.activities
  end
  return feed.sort {|x,y| y.created_at <=> x.created_at}
end

Request

We use a simple mechanism for users to invite other users to be their friends. The mechanism goes like this:

Alice identifies another user, Bob, whom she wants to befriend, and sends him an invitation
This creates a Request object, which is then attached to Bob
When Bob approves the request to be a friend, Alice is added as a friend (which is essentially making Alice follow Bob, since the definition of a friend in Colony is either a follower or follows another user)

class Request
  include DataMapper::Resource

  property :id, Serial
  property :text, Text
  property :created_at, DateTime

  belongs_to :from, :class_name => User, :child_key => [:from_id]
  belongs_to :user

  def approve
    self.user.add_friend(self.from)
  end
end

Message

Messages in Colony are private messages that are sent between users of Colony. As a result, messages sent or received are not tracked as activities in the user's activity feed.
class Message
  include DataMapper::Resource

  property :id, Serial
  property :subject, String
  property :text, Text
  property :created_at, DateTime
  property :read, Boolean, :default => false
  property :thread, Integer

  belongs_to :sender, :class_name => 'User', :child_key => [:user_id]
  belongs_to :recipient, :class_name => 'User', :child_key => [:recipient_id]
end

A message must have a sender and a recipient, both of which are users:

has n, :sent_messages, :class_name => 'Message', :child_key => [:user_id]
has n, :received_messages, :class_name => 'Message', :child_key => [:recipient_id]

The read property tells us if the message has been read by the recipient, while the thread property tells us how to group messages together for display.

Album

An activity is logged each time an album is created:

class Album
  include DataMapper::Resource

  property :id, Serial
  property :name, String, :length => 255
  property :description, Text
  property :created_at, DateTime

  belongs_to :user
  has n, :photos
  belongs_to :cover_photo, :class_name => 'Photo', :child_key => [:cover_photo_id]

  after :save, :add_activity

  def add_activity
    Activity.create(:user => user, :activity_type => 'album', :text => "<a href='/user/#{user.nickname}'>#{user.formatted_name}</a> created a new album <a href='/album/#{self.id}'>#{self.name}</a>")
  end
end
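To see how these models fit together, here is a minimal usage sketch. This is our own illustration, not code from the book's source; the user attributes are invented, and note that the after-create hooks will try to create an S3 bucket and a wall, so a real run needs the S3 settings from config.rb:

alice = User.create(:identifier => 'alice-id', :nickname => 'alice',
                    :formatted_name => 'Alice', :sex => 'female')
bob   = User.create(:identifier => 'bob-id', :nickname => 'bob',
                    :formatted_name => 'Bob', :sex => 'male')

# Alice invites Bob; approving the request calls add_friend under the hood
request = Request.create(:from => alice, :user => bob, :text => 'Be my friend?')
request.approve

bob.friends.map { |f| f.nickname }   # => ["alice"]
alice.feed.first.activity_type       # => "relationship"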

Integrating Websphere eXtreme Scale Data Grid with Relational Database: Part 2

Packt
18 Nov 2009
6 min read
Removal versus eviction

Setting an eviction policy on a BackingMap makes more sense now that we're using a Loader. Imagine that our cache holds only a fraction of the total data stored in the database. Under heavy load, the cache is constantly asked to hold more and more data, but it operates at capacity. What happens when we ask the cache to hold one more payment? The BackingMap needs to remove some payments in order to make room for more.

BackingMaps have three basic eviction policies: LRU (least-recently used), LFU (least-frequently used), and TTL (time-to-live). Each policy tells the BackingMap which objects should be removed in order to make room for more. When an object is evicted from the cache, its status in the database is not changed. With eviction, objects enter and leave the cache due to cache misses and evictions innumerable times, and their presence in the database remains unchanged. The only thing that affects an object in the database is an explicit call from our application to change (either persist or merge) or remove it. Removal means the object is removed from the cache, and the Loader executes the delete from SQL to delete the corresponding row(s) from the database. Your data is safe when using evictions. The cache simply provides a window into your data. A remove operation explicitly tells both ObjectGrid and the database to delete an object.

Write-through and write-behind

Getting back to the slowdown caused by the Loader configuration: by default, the Loader uses write-through behavior.

Now we know the problem. Write-through behavior wraps a database transaction around every write! For every ObjectGrid transaction, we execute one database transaction. On the upside, every object assuredly reaches the database, provided it doesn't violate any relational constraints. Despite this harsh reaction to write-through behavior, it is essential for objects that absolutely must get to the database as fast as possible. The problem is that we hit the database for every write operation on every BackingMap. It would be nice not to incur the cost of a database transaction every time we write to the cache. Write-behind behavior gives us the help we need. Write-behind gives us the speed of an ObjectGrid transaction and the flexibility that comes with storing data in a database.

Each ObjectGrid transaction is now separate from a database transaction. BackingMap now has two jobs. The first job is to store our objects as it always does. The second job is to send those objects to the JPAEntityLoader. The JPAEntityLoader then generates SQL statements to insert the data into a database. We configured each BackingMap with its own JPAEntityLoader. Each BackingMap requires its own Loader because each Loader is specific to a JPA entity class. The relationship between the JPAEntityLoader and a JPA entity is established when the BackingMap is initialized. The jpaTxCallback we specified in the ObjectGrid configuration coordinates the transactions between ObjectGrid and a JPA EntityManager.

In a write-through situation, our database transactions are only as large as our ObjectGrid transactions. Update one object in the BackingMap and one object is written to the database. With write-behind, our ObjectGrid transaction completes, and our objects are put in a write-behind queue map. That queue map does not immediately synchronize with the database.
It waits for some specified time, or for some number of updates, before writing out its contents to the database.

We configure the database synchronization conditions with the setWriteBehind("time;conditions") method on a BackingMap instance. Programmatically, the setWriteBehind method looks like this:

BackingMap paymentMap = grid.getMap("Payment");
paymentMap.setLoader(new JPAEntityLoader());
paymentMap.setWriteBehind("T120;C5001");

The same configuration in XML looks like this:

<backingMap name="Payment" writeBehind="T120;C5001" pluginCollectionRef="Payment" />

Enabling write-behind is as simple as that. The setWriteBehind method takes one string parameter, but it is actually a two-in-one. The T part is the time in seconds between syncs with the database. Here, we set the payment BackingMap to wait two minutes between syncs. The C part indicates the number (count) of changes made to the BackingMap that triggers a database sync. Between these two parameters, the sync occurs on a whichever-comes-first basis. If two minutes elapse between syncs, and only 400 changes (persists, merges, or removals) have been put in the write-behind queue map, then those 400 changes are written out to the database. If only 30 seconds elapse, but we reach 5001 changes, then those changes are written to the database.

ObjectGrid does not guarantee that the sync will take place exactly when either of those conditions is met. The sync may happen a little earlier (116 seconds or 4998 changes) or a little later (123 seconds or 5005 changes). The sync will happen as close to those conditions as ObjectGrid can reasonably manage.

The default value is "T300;C1000". This syncs a BackingMap to the database every five minutes, or every 1000 changes to the BackingMap. This default is specified either with the string "T300;C1000" or with an empty string (""). Omitting either part of the sync parameters is acceptable; the missing part uses the default value. Calling setWriteBehind("T60") has the BackingMap sync to the database every 60 seconds, or every 1000 changes. Calling setWriteBehind("C500") syncs every five minutes, or every 500 changes. Write-behind behavior is enabled if the setWriteBehind method is called with an empty string. If you do not want write-behind behavior on a BackingMap, do not call the setWriteBehind method at all.

A great feature of the write-behind behavior is that an object changed multiple times in the cache is only written in its final form to the database. If a payment object is changed in three different ObjectGrid transactions, the SQL produced by the JPAEntityLoader will reflect the object's final state before the sync.
For example:

entityManager.getTransaction().begin();
Payment payment = createPayment(line, batch);
entityManager.getTransaction().commit();

// some time later...
entityManager.getTransaction().begin();
payment.setAmount(new BigDecimal("44.95"));
entityManager.getTransaction().commit();

// some time later...
entityManager.getTransaction().begin();
payment.setPaymentType(PaymentType.REAUTH);
entityManager.getTransaction().commit();

With write-through behavior, this would produce the following SQL:

insert into payment (id, amount, batch_id, card_id, payment_type) values (12345, 75.00, 31, 6087, 'AUTH');
update payment set amount = 44.95, payment_type = 'AUTH' where id = 12345;
update payment set amount = 44.95, payment_type = 'REAUTH' where id = 12345;

Now that we're using write-behind, that same application behavior produces just one SQL statement:

insert into payment (id, amount, batch_id, card_id, payment_type) values (12345, 44.95, 31, 6087, 'REAUTH');
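Because each BackingMap carries its own Loader and write-behind settings, the tuning can differ per map. A minimal sketch, reusing only the API shown above; the "Batch" map name and its parameters are our own illustration:

// Payment changes are batched more aggressively than batch records
BackingMap paymentMap = grid.getMap("Payment");
paymentMap.setLoader(new JPAEntityLoader());
paymentMap.setWriteBehind("T120;C5001"); // sync every 2 minutes or 5001 changes

BackingMap batchMap = grid.getMap("Batch");
batchMap.setLoader(new JPAEntityLoader());
batchMap.setWriteBehind("T60");          // every 60 seconds, or the default 1000 changes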


Designer Friendly Templates

Packt
24 May 2013
11 min read
(For more resources related to this topic, see here.)

Designer friendly templates (Simple)

Inherent to web applications is a breach in technology. We need to combine business logic on the server with HTML pages and JavaScript on the client side. The nicely encapsulated server-side business logic then hits a client-side technology that was really intended to structure pages of text. You somehow need to weave the backend functionality into these web pages. Countless approaches exist that try to bridge the two. Lift is unique in this regard in that it lets you create valid HTML5 or XHTML templates that contain absolutely no business functionality, yet it manages to combine the two in an inspiring and clear way.

Getting ready

Again, we will use the example application from the Preparing your development environment (Simple) recipe to talk about the different concepts. You will find the templates under the webapp directory inside src/main. If you open them, you will see they're plain and simple HTML files. It's easy for designers to edit them with the tools they know.

How to do it...

Lift's page templates are valid XHTML or HTML5 documents that are parsed and treated as NodeSeq documents (XML, basically) until served to the browser. The standard path for everything webby is src/main/webapp inside your project. Say you enter a URL liftapp.com/examples/templates and provide the user with access to this page (see the SiteMap task for details); Lift will search for the templates.html page inside the examples directory located at src/main/webapp. That's the normal case. Of course, you can rewrite URLs and point to something entirely different, but let's consider the common case for now.

Let's look at a simple template for the example application's home page, http://localhost:8080:

<!DOCTYPE html>
<html>
<head>
  <meta content="text/html; charset=UTF-8" http-equiv="content-type" ></meta>
  <title>Home</title>
</head>
<body class="lift:content_id=main">
  <div id="main" data-lift="surround?with=default;at=content">
    <h2>Welcome to your project!</h2>
    <p>
      <span data-lift="helloWorld.howdy">
        Welcome to your Lift app at
        <span id="time">Time goes here</span>
      </span>
    </p>
  </div>
</body>
</html>

Granted, this page doesn't do much, but that's all there is to it. In most applications you have some common parts on a page and some that change content. It's easy to define these hierarchies of templates. In your page template, you define which parent template you want it to be surrounded by, and at which place. The parent template itself can also be surrounded by another template, and so on. This is a useful feature to extract common parts of a page into base templates and build on top of these to finally define the structure and surrounding chrome of your pages.

The parent template for this page is called default.html and is searched for in the templates-hidden folder. Any file that is embedded into a page is searched for underneath templates-hidden.
We omit the CSS and some of the boilerplate, and just show the interesting parts of the parent template's content:

<body>
  <div class="container">
    ...
    <div class="column span-6 colborder sidebar">
      <hr class="space" >
      <span data-lift="Menu.builder?group=main"></span>
      <hr class="space" >
      <span data-lift="Menu.builder?group=examples"></span>
      <hr class="space" >
      <span data-lift="Menu.builder?group=PostingUsers"></span>
      <div data-lift="Msgs?showAll=true"></div>
      <hr class="space" >
    </div>
    <div class="column span-17 last">
      <div id="content">The main content goes here</div>
    </div>
    ...
</body>

This template defines a sidebar and places our menus there. It defines a place where messages are shown that are sent from Lift with its S.notice, S.warning, and S.error methods. And finally, it defines an ID (content) that marks the element receiving the page content.

How it works...

Let's walk through the code snippet given in the preceding section and see how the pieces fit together.

<body class="lift:content_id=main">

In the page template, we tell Lift where the template actually starts. You can create complete, valid HTML pages and then have Lift cut the central piece out for its rendering process, and your designers can still work with complete pages that they can process in isolation from the rest. This line tells Lift that the content starts with the element with the ID main.

The next thing we do is define a parent template that we use to surround the page with. This way, we define essential page layout markup only once and include it everywhere it's needed. Here's how you surround a page with a parent template:

<div id="main" data-lift="surround?with=default;at=content">
… your content here …
</div>

In the data-lift attribute of the div element, you call the surround snippet and hand over the parameters with=default and at=content. The surround snippet now knows that it should find a template called default.html and insert the content of this div element into the parent template at the point defined by the ID content. Speaking of snippets: they are the mechanism for processing parts of your HTML files, and they work the same way for built-in snippets as for your own. Snippets are pieces of logic that get weaved into the markup. We'll get to this integral part of Lift development really soon.

Lift templates are the files that are not defined in the SiteMap. They are located in a subfolder called templates-hidden. They cannot be accessed directly from the URL, but only through code, either by opening them directly or through the surround-and-embed mechanisms inside other templates or pages.

Have a look at the parent template default.html shown previously. This file, along with the other files we discuss here, is available in the source code that comes with the book. It's a standard HTML5 file defining some styles and finally defining a div element to bind the child content:

<div id="content">The main content will get bound here</div>

Lift will remove the text inside the DIV and replace it with the actual content, as shown in the following screenshot:

A few other things at the top of the template are worth noting:

<style class="lift:CSS.blueprint"></style>
<style class="lift:CSS.fancyType"></style>
<script id="jquery" src="/classpath/jquery.js" type="text/javascript"></script>

Lift comes bundled with the Blueprint CSS framework (http://blueprintcss.org/) and a version of jQuery (http://jquery.com/). It's intended to make it easier for you to start, but by no means are you bound to using Blueprint or the included jQuery version.
Just use your own CSS framework (there's a recipe on using Twitter's Bootstrap) or jQuery version where it makes sense. For instance, to use a hosted version of the latest jQuery library, you would replace the script tag from the preceding code snippet with the following:

<script type="text/javascript" src="http://code.jquery.com/jquery-1.8.2.min.js"></script>

Lift provides some standard snippets which you can use to build up your pages. The default.html template utilizes a snippet to render a menu and another snippet to place messages on the page:

<span data-lift="Menu.builder?group=main"></span>

When you define the element that encloses the menu, Lift will automatically render it. If you omit the group parameter, all menu entries will be rendered. Having that parameter restricts the menu to the items within that group. You can assign a menu group (called a LocGroup) in the SiteMap you defined in the Boot class.

<div data-lift="Msgs?showAll=true"></div>

This snippet call will render messages that are produced by the backend application in this spot.

There's more...

We will now have a look at execution order. In normal execution mode, Lift first evaluates the outer snippets and then, layer by layer, moves to the inner snippets. If you want to include the result of some inner snippet evaluations in the input of the outer snippets, you need to reverse that process. For that very reason, Lift provides a snippet parameter, eager_eval=true, that you add to the outer snippet:

<div data-lift="ImOuter?eager_eval=true">
  ...
  <div data-lift="ImInner">...</div>
  ...
</div>

Adding that parameter causes Lift to first evaluate the inner snippet and then add the result of the inner snippet call to the input that is processed by the outer snippet.

You can also embed templates into your page or other templates. That's the opposite operation of surrounding a page, but equally simple. In your page, use the embed snippet to embed a template:

<div data-lift="embed?what=/examples/templates/awesome"></div>

The what parameter defines the path to the template, which is searched for within the webapp directory.

We will now see the programmatic embedding of templates. You can easily search for a template and process it programmatically. In that case, you need to specify the templates-hidden directory; that way you are able to access top-level pages as well.

val ns: Box[NodeSeq] = S.runTemplate(List("templates-hidden", "examples", "templates", "awesome"))

Please see the EmbedTemplates snippet for an example of how to programmatically access templates and apply transformations before embedding them:

<div data-lift="EmbedTemplate?what=/examples/templates/awesome"></div>

As you can see, our own snippets are called just the same way as Lift's default snippets, and they can do the same things. Programmatic access to templates is useful, for instance, when you want to send HTML e-mails. Inside the mail sender, you would grab the template, process it (see CSS Selectors), and send the complete HTML to the recipient. There are myriad more reasons and use cases for accessing your templates from your Scala code. Just keep in the back of your mind that you can do it.

The S.runTemplate method fetches the template and processes it. That means it looks for any embedded Lift snippet calls and executes them. These snippet calls could potentially embed other templates recursively.
If you do not want the template to be processed, you can retrieve it like this:

val tpl: Box[NodeSeq] = Templates(List("templates-hidden", "examples", "templates", "awesome"))

Lift templates are very powerful, and they have to be. They are at the basis of every web application and need to handle a lot of different scenarios. The separation between the markup and the logic keeps the templates clean and prevents your designers from breaking code. It might take a while to adapt to this template style if you come from a framework that mixes markup and code. We believe that, especially in larger applications, you will soon see the benefits of a clear separation and encapsulation of your logic in reusable pieces. Speaking of reusable pieces, let's head over to snippets, Lift's way to plug functionality into templates (a minimal snippet sketch follows at the end of this article).

The Lift wiki offers further information about templates and binding at the following links:

http://www.assembla.com/spaces/liftweb/wiki/Designer_Friendly_Templates
http://www.assembla.com/spaces/liftweb/wiki/Templates_and_Binding

Summary

In this article, we learned about designer friendly templates.

Resources for Article:

Further resources on this subject:
RESTful Web Service Implementation with RESTEasy [Article]
Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
Deploying your Applications on WebSphere Application Server 7.0 (Part 1) [Article]
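As promised, here is a minimal sketch of what a snippet class backing the data-lift="helloWorld.howdy" call from the home page template might look like. This is our own illustration, not the book's actual code, and it assumes the class lives in a package Lift is configured to resolve snippets from:

import net.liftweb.util.Helpers._
import scala.xml.NodeSeq

class HelloWorld {
  // Replaces the contents of the element with id="time" in the template
  def howdy: NodeSeq => NodeSeq =
    "#time *" #> (new java.util.Date).toString
}

Lift matches the lowercased snippet name helloWorld to the HelloWorld class, invokes its howdy method with the markup inside the span, and the CSS selector transform fills in the current time.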


Smart Features to Improve Your Efficiency

Packt
15 Sep 2015
11 min read
In this article by Denis Patin and Stefan Rosca, authors of the book WebStorm Essentials, we are going to deal with a number of really smart features that will enable you to fundamentally change your approach to web development and learn how to gain maximum benefit from WebStorm. We are going to study the following in this article:

On-the-fly code analysis
Smart code features
Multiselect feature
Refactoring facility

(For more resources related to this topic, see here.)

On-the-fly code analysis

WebStorm will perform static code analysis on your code on the fly. The editor will check the code based on the language used and the rules you specify, and highlight warnings and errors as you type. This is a very powerful feature: it means you don't need an external linter, and it will catch most errors quickly, thus making a dynamic and complex language like JavaScript more predictable and easy to use.

Runtime errors are one thing; other kinds of errors, such as syntax or performance issues, are another. To investigate the first kind, you need tests or a debugger, and it is obvious that they have almost nothing in common with the IDE itself (although when these facilities are integrated into the IDE, the synergy is better, but that is not the point here). You could examine the second kind of errors the same way, but is it convenient? Just imagine that you need to run tests after writing each line of code. That is a no-go!

Won't it be more efficient and helpful to use something that keeps an eye on and analyzes each word being typed, in order to notify you of probable performance issues and bugs, code style and workflow issues, and various validation issues, and to warn of dead code and other likely execution issues before the code runs, to say nothing of reporting inadvertent misprints? WebStorm is the best fit for this. It performs a deep-level analysis of each line, each word in the code. Moreover, you needn't break off your development process while WebStorm scans your code; the analysis is performed on the fly, hence the name:

WebStorm also enables you to get a full inspection report on demand. To get it, go to the menu: Code | Inspect Code. It pops up the Specify Inspection Scope dialog, where you can define what exactly you would like to inspect, and click OK. Depending on what is selected and its size, you may need to wait a little for the process to finish, and you will see the detailed results where the Terminal window is located:

You can expand all the items, if needed. To the right of this inspection result list you can see an explanation window. To jump to an erroneous code line, simply click on the corresponding item, and you will flip to that line.

Besides simply indicating where an issue is located, WebStorm also unequivocally suggests ways to eliminate it. And you needn't even make the changes yourself: WebStorm already has quick solutions, which you need only click on, and they will be instantly inserted into the code:

Smart code features

Being an Integrated Development Environment (IDE) and tending to be intelligent, WebStorm provides a really powerful pack of features, by using which you can strongly improve your efficiency and save a lot of time. One of the most useful and hottest features is code completion. WebStorm continually analyzes and processes the code of the whole project, and smartly suggests the pieces of code appropriate in the current context, and even more: alongside the method names you can find the usage of these methods.
Of course, code completion itself is not a fresh innovation; however, WebStorm performs it in a much smarter way than other IDEs do. WebStorm can auto-complete a lot of things: class and function names, keywords and parameters, types and properties, punctuation, and even file paths.

By default, the code completion facility is on. To invoke it, simply start typing some code. For example, in the following image you can see how WebStorm suggests object methods:

You can navigate through the list of suggestions using your mouse or the Up and Down arrow keys. However, the list can be very long, which makes it not very convenient to browse. To reduce it and retain only the entries appropriate in the current context, keep typing the next letters. Besides typing only the initial consecutive letters of the method, you can either type something from the middle of the method name, or even use the CamelCase style, which is usually the quickest way of typing really long method names:

It may turn out, for some reason, that code completion isn't working automatically. To invoke it manually, press Control + Space on Mac or Ctrl + Space on Windows. To insert the suggested method, press Enter; to replace the string next to the current cursor position with the suggested method, press Tab. If you want the facility to also arrange the correct syntactic surroundings for the method, press Shift + ⌘ + Enter on Mac or Ctrl + Shift + Enter on Windows, and missing brackets and/or new lines will be inserted, up to the styling standards of the current language of the code.

Multiselect feature

With the multiple selection (or simply multiselect) feature, you can place the cursor in several locations simultaneously, and when you type the code it will be applied at all these positions. For example, say you need to add different background colors for each table cell, and then make each of them twenty pixels wide. In this case, to avoid performing these identical tasks repeatedly and to save a lot of time, place the cursor after the first <td> tag, press Alt, and put the cursor in each <td> tag which you are going to apply styling to:

Now you can start typing the necessary attribute, bgcolor. Note that WebStorm performs smart code completion here too, independently of whether you are typing on a single line or not. You get empty values for the bgcolor attributes, which you will fill in individually a bit later. You also need to change the width, so you can continue typing. As cell widths are arranged to be fixed-size, simply add the value for the width attributes as well. What you get is shown in the following image:

Moreover, the multiselect feature can select identical values or just words independently; that is, you needn't place the cursor in multiple locations. Let us watch this feature in another example. Say you changed your mind and decided to colorize not the backgrounds but the borders of several consecutive cells. You might instantly think of using the simple replace feature, but you needn't replace all attribute occurrences, only several consecutive ones. To do this, place the cursor on the first attribute you are going to change, and press Ctrl + G on Mac or Alt + J on Windows as many times as you need. One by one, the same attributes will be selected, and you can replace the bgcolor attribute with the bordercolor one:

You can also select all occurrences of any word by pressing Ctrl + command + G on Mac or Ctrl + Alt + Shift + J on Windows.
To get out of the multiselect mode, click in a different position or use the Esc key.

Refactoring facility

Throughout the development process, it is almost unavoidable that you will have to refactor. Also, the bigger your code base, the more difficult it becomes to control the code, and when you need to refactor, you will most likely run up against issues relating to, for example, naming omissions or overlooked function usages. You learned that WebStorm performs a thorough code analysis, so it understands what is connected with what, and when changes occur it collates them and decides what is acceptable and what is not acceptable to perform in the rest of the code.

Let us try a simple example. In a big HTML file you have the following line:

<input id="search" type="search" placeholder="search" />

And in a big JavaScript file you have another one:

var search = document.getElementById('search');

You decide to rename the id attribute's value of the input element to search_field because it is less confusing. You could simply rename it here, but after that you would have to manually find all occurrences of the word search in the code. It is evident that the word is rather frequent, so you would spend a lot of time working out whether each occurrence is relevant in the current context or not. And there is a high probability that you would forget something important, and even more time would be spent investigating the resulting issue. Instead, you can entrust WebStorm with this task. Select the code unit to refactor (in our case, the search value of the id attribute), and press Shift + T on Mac or Ctrl + Alt + Shift + T on Windows (or simply click the Refactor menu item) to call the Refactor This dialog. There, choose the Rename… item and enter the new name for the selected code unit (search_field in our case).

To get a preview of what will happen during the refactoring process, click the Preview button, and all the changes to be applied will be displayed at the bottom. You can walk through the hierarchical tree and apply the changes by clicking the Do Refactor button, or not. If you don't need a preview, you can simply click the Refactor button. What you will see is that the id attribute gets the search_field value (not the type or placeholder values, even though they have the same value), and in the JavaScript file you get getElementById('search_field').

Note that even though WebStorm can perform various smart tasks, it still remains a program, and issues can occur because of so-called artificial intelligence imperfection, so you should always be careful when refactoring. In particular, manually check the var declarations, because WebStorm can sometimes apply the changes to them as well, even when it is not necessary because of scope. Of course, this is just a little of what you can do with refactoring. The basic things that the refactoring facility allows you to do are as follows:

The elements in the preceding screenshot are explained as follows:

Rename…: You have already become familiar with this refactoring. Once again, with it you can rename code units, and WebStorm will automatically fix all references to them in the code. The shortcut is Shift + F6.
Change Signature…: This feature is used basically for changing function names, and adding/removing, reordering, or renaming function parameters; that is, changing the function signature. The shortcut is ⌘ + F6 for Mac and Ctrl + F6 for Windows.
Move…: This feature enables you to move files or directories within a project, and it simultaneously repairs all references to these project elements in the code, so you needn't repair them manually. The shortcut is F6.
Copy…: With this feature, you can copy a file or directory, or even a class with its structure, from one place to another. The shortcut is F5.
Safe Delete…: This feature is really helpful. It allows you to safely delete any code or entire files from the project. When performing this refactoring, you will be asked whether comments and strings, or all text files, should be inspected for occurrences of the required piece of code. The shortcut is ⌘ + Delete for Mac and Alt + Delete for Windows.
Variable…: This refactoring feature declares a new variable, into which the result of the selected statement or expression is put. It can be useful when you realize there are too many occurrences of a certain expression, so the expression can be turned into a variable that it merely initializes. The shortcut is Alt + ⌘ + V for Mac and Ctrl + Alt + V for Windows.
Parameter…: When you need to add a new parameter to some method and appropriately update its calls, use this feature. The shortcut is Alt + ⌘ + P for Mac and Ctrl + Alt + P for Windows.
Method…: During this refactoring, the code block you selected undergoes analysis, through which the input and output variables are detected, and the extracted function receives the output variable as a return value. The shortcut is Alt + ⌘ + M for Mac and Ctrl + Alt + M for Windows.
Inline…: The inline refactoring works contrariwise to the extract method refactoring: it replaces surplus variables with their initializers, making the code more compact and concise. The shortcut is Alt + ⌘ + N for Mac and Ctrl + Alt + N for Windows.

Summary

In this article, you have learned about the most distinctive features of WebStorm, which are the core constituents of improving your efficiency in building web applications.

Resources for Article:

Further resources on this subject:
Introduction to Spring Web Application in No Time [article]
Applications of WebRTC [article]
Creating Java EE Applications [article]


ColdFusion 8-Enhancements You May Have Missed

Packt
22 Oct 2009
5 min read
<cfscript> Enhancements

Poor <cfscript>! It can't be easy being the younger sibling to CFML tags. Natively, you can just do more with tags. Tags are arguably easier to learn and read, especially for beginners. Yet, since its introduction in ColdFusion 4.0, <cfscript> has dutifully done its job while getting none, or little, of the love. Given that ColdFusion was marketed as an easy-to-learn tag-based language that could be adopted by non-programmers who were only familiar with HTML, why did Allaire make the effort to introduce <cfscript>? Perhaps it was an effort to add a sense of legitimacy for those who didn't view a tag-based language as a true language. Perhaps it was a matter of trying to appeal to more seasoned developers as well as beginners. In either case, <cfscript> wasn't without some serious limitations that prevented it from gaining widespread acceptance.

For example, while it boasted an ECMAScript-like syntax, which perhaps would have made it attractive to JavaScript developers, it was tied tightly enough to CFML that it used CFML operators. If you were used to writing the following to loop over an array in JavaScript:

for (var i=0; i<myArray.length; i++) { …

it wasn't quite a natural progression to write the same loop in <cfscript>:

for (i=1; i lt arrayLen(myArray); i=i+1) {

On the surface, it may look similar enough. But there are a few significant differences. First, the use of "lt" to represent the traditional "<" ('less than') operator. Second, the lack of a built-in increment operator. While ColdFusion does have a built-in incrementValue() function, that doesn't really do much to bridge the gap between <cfscript> and ECMAScript. When you're used to traditional comparison operators in a scripting language (<, =, >, and so on), as well as increment operators (++), you would likely end up losing more time than you'd save in <cfscript>. Why? Because chances are that you'd type out the loop using the traditional comparison operators, run your code, see the error, smack your forehead, modify the code, and repeat.

Well, your forehead is going to love this. As of ColdFusion 8, <cfscript> supports all of the traditional comparison operators (<, <=, ==, !=, >=, >). In addition, both <cfscript> and CFML support the following operators as of ColdFusion 8:

Operator   Name                               Pre-ColdFusion 8                  ColdFusion 8
++         Increment                          i=i+1                             i++
--         Decrement                          i=i-1                             i--
%          Modulus                            x = a mod b                       x = a%b
+=         Compound Addition                  x = x + y                         x += y
-=         Compound Subtraction               x = x - y                         x -= y
*=         Compound Multiplication            x = x * y                         x *= y
/=         Compound Division                  x = x / y                         x /= y
%=         Compound Modulus                   x = x mod y                       x %= y
&=         Compound Concatenation (Strings)   str = "abc"; str = str & "def";   str = "abc"; str &= "def";
&&         Logical And                        if (x eq 1) and (y eq 2)          if (x == 1) && (y == 2)
||         Logical Or                         if (x eq 1) or (y eq 2)           if (x == 1) || (y == 2)
!          Logical Complement                 if (x neq y)                      if (! x == y)

For people who bounce back and forth between ColdFusion and languages like JavaScript or ActionScript, this should make the transitions significantly less jarring.

Array and Structure Enhancements

Arrays and structures are powerful constructs in the world of programming. While the naming conventions may differ, they exist in virtually every language. Creating even a moderately complex application without them would be an unpleasant experience, to say the least. Hopefully you're already putting them to use. If you are, your life just got a little bit easier.
Creating Arrays

One of the perceived drawbacks of a tag-based language like CFML is that it can be a bit verbose. Consider the relatively straightforward task of creating an array and populating it with a small amount of data:

<cfset myArray = arrayNew(1) />
<cfset myArray[1] = "Moe" />
<cfset myArray[2] = "Larry" />
<cfset myArray[3] = "Curly" />

In <cfscript> it gets a little better by cutting out some of the redundancy of the <cfset> tags:

<cfscript>
    myArray = arrayNew(1);
    myArray[1] = "Moe";
    myArray[2] = "Larry";
    myArray[3] = "Curly";
</cfscript>

A little better. But if you're familiar with languages like JavaScript, ActionScript, Java, or others, you know that this can still be improved upon. That's exactly what Adobe has done with ColdFusion 8. ColdFusion 8 introduces shorthand notation for the creation of arrays:

<cfset myArray = [] />

The code above will create an empty array. In and of itself, this doesn't seem like a tremendous time saver. But what if you could create the array and populate it at the same time?

<cfset myArray = ["Larry", "Moe", "Curly"] />

The square brackets tell ColdFusion that you're creating an array. Inside the square brackets, a comma-delimited list populates the array. One caveat to be aware of is that ColdFusion has never taken much of a liking to empty list elements. The following will throw an error:

<cfset myArray = ["Larry", , "Curly"] /> <!--- don't do this --->

If you're populating your array dynamically, take steps to ensure that there are no empty elements in the list.
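To tie the two enhancements together, here is a small sketch of our own that loops over a shorthand-notation array using the new operators. We create the array in a <cfset>, since that is the context in which the article introduces the literal syntax:

<cfset stooges = ["Larry", "Moe", "Curly"] />
<cfscript>
    roster = "";
    for (i = 1; i <= arrayLen(stooges); i++) {
        roster &= stooges[i];
        if (i < arrayLen(stooges)) roster &= ", ";
    }
    // roster is now "Larry, Moe, Curly"
</cfscript>

Every operator used here (<=, ++, &=, <) comes straight from the ColdFusion 8 operator table above; none of it would have compiled in earlier versions of <cfscript>.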

Apache Solr: Spellchecker, Statistics, and Grouping Mechanism

Packt
27 Jul 2011
5 min read
Computing statistics for the search results

Imagine a situation where you want to compute some basic statistics about the documents in the results list. For example, you have an e-commerce shop where you want to show the minimum and the maximum price of the documents that were found for a given query. Of course, you could fetch all the documents and compute it yourself, but Solr can do it for you, and this recipe will show you how to use that functionality.

How to do it...

Let's start with the index structure (just add this to the fields section of your schema.xml file):

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="text" indexed="true" stored="true" />
<field name="price" type="float" indexed="true" stored="true" />

The example data file looks like this:

<add>
 <doc>
  <field name="id">1</field>
  <field name="name">Book 1</field>
  <field name="price">39.99</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="name">Book 2</field>
  <field name="price">30.11</field>
 </doc>
 <doc>
  <field name="id">3</field>
  <field name="name">Book 3</field>
  <field name="price">27.77</field>
 </doc>
</add>

Let's assume that we want our statistics to be computed for the price field. To do that, we send the following query to Solr:

http://localhost:8983/solr/select?q=name:book&stats=true&stats.field=price

The response Solr returned should be like this:

<?xml version="1.0" encoding="UTF-8"?>
<response>
 <lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">0</int>
  <lst name="params">
   <str name="q">name:book</str>
   <str name="stats">true</str>
   <str name="stats.field">price</str>
  </lst>
 </lst>
 <result name="response" numFound="3" start="0">
  <doc>
   <str name="id">1</str>
   <str name="name">Book 1</str>
   <float name="price">39.99</float>
  </doc>
  <doc>
   <str name="id">2</str>
   <str name="name">Book 2</str>
   <float name="price">30.11</float>
  </doc>
  <doc>
   <str name="id">3</str>
   <str name="name">Book 3</str>
   <float name="price">27.77</float>
  </doc>
 </result>
 <lst name="stats">
  <lst name="stats_fields">
   <lst name="price">
    <double name="min">27.77</double>
    <double name="max">39.99</double>
    <double name="sum">97.86999999999999</double>
    <long name="count">3</long>
    <long name="missing">0</long>
    <double name="sumOfSquares">3276.9851000000003</double>
    <double name="mean">32.62333333333333</double>
    <double name="stddev">6.486118510583508</double>
   </lst>
  </lst>
 </lst>
</response>

As you can see, in addition to the standard results list, there was an additional section available. Now let's see how it works.

How it works...

The index structure is pretty straightforward. It contains three fields: one for holding the unique identifier (the id field), one for holding the name (the name field), and one for holding the price (the price field). The file that contains the example data is simple too, so I'll skip discussing it.

The query is interesting. In addition to the q parameter, we have two new parameters. The first one, stats=true, tells Solr that we want to use the StatsComponent, the component which will calculate the statistics for us. The second parameter, stats.field=price, tells the StatsComponent which field to use for the calculation. In our case, we told Solr to use the price field.

Now let's look at the result returned by Solr. As you can see, the StatsComponent added an additional section to the results. This section contains the statistics generated for the field we told Solr we want statistics for.
The following statistics are available:

min: The minimum value that was found in the field for the documents that matched the query
max: The maximum value that was found in the field for the documents that matched the query
sum: Sum of all values in the field for the documents that matched the query
count: How many non-null values were found in the field for the documents that matched the query
missing: How many documents that matched the query didn't have any value in the specified field
sumOfSquares: Sum of all values squared in the field for the documents that matched the query
mean: The average of the values in the field for the documents that matched the query
stddev: The standard deviation of the values in the field for the documents that matched the query

You should also remember that you can specify multiple stats.field parameters to calculate statistics for different fields in a single query; an example follows. Please be careful when using this component on multi-valued fields. It can sometimes be a performance bottleneck.
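For instance, a query computing statistics for two fields at once would look like this (quantity here is a hypothetical second numeric field, not part of the recipe's schema):

http://localhost:8983/solr/select?q=name:book&stats=true&stats.field=price&stats.field=quantity

Solr would then return a separate entry under the stats_fields section for each of the two fields.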


Creating media galleries with ASP.NET 4 social networking

Packt
19 Apr 2011
11 min read
In order to create the file management software for our website, we need to consider topics such as single or multi-file upload, file system management, and image manipulation in the case of photos. In addition to this we will cover the creation of pages for displaying the user's photo albums and their friends' photo albums, as well as a few data management pages.

What's the problem?

Apart from the standard infrastructure issues that we have to consider when building a system such as this, one of the core issues in any web-based file management system is file upload. As we all know, most server-side technologies allow only one file to be uploaded at a time, and ASP.NET is no different. And while we could easily buy a third-party plug-in to handle multiple files at once, we decided to provide options to upload the files either via Silverlight or via Flash.

Once we get our file upload process working, we are only one-third of the way there! As we are going to be mostly concerned with uploading images, we need to consider that we will need to provide some image manipulation. With each file that is uploaded to our system we need to create a handful of different sizes of each image to be used in various scenarios across our site. To start with, we will create thumbnail, small, medium, large, and original size photos.

Now, while creating different size files is technically working with the file storage system, we want to pause for a moment on the file storage concepts themselves. We can choose to store the files on the file system or in a database. For avatars it made sense to store each with the profile data, whereas for image galleries it makes more sense to store the file on the file system. When storing files on the file system we need to be very cautious as to how the file structure is defined and where and how the individual files are stored. In our case we will use system-generated GUIDs as our file names, with extensions to define the different sizes that we are storing. We will dig into this more as we start to understand the details of this system.

Once we have uploaded the files to the server and they are ready for our use across the system, we will take up the concept of user files versus system files. If we build the system with some forethought regarding this topic we can have a very generic file management system that can be extended for future use. We will build a personal system in this article. But as you will see, with some flags in just the right places, we could just as easily build a system file manager or a group file manager.

Design

Let's take a look at the design for this feature.

Files

For our site, as we are not storing our files in the database, we need to take a closer look at what actually needs to be managed in the database so as to keep track of what is going on in the file system. In addition to standard file metadata, we need to keep a close eye on where the file actually lives—specifically which file system folder (directory on the hard drive) the file will reside in. We also need to be able to maintain which accounts own which files, or in the case of system files, which files can be viewed by anyone.

Folders

You may be wondering why we have a separate section regarding folders when we just touched upon the fact that we will be managing which file system folder we will be storing files in. In this section we are going to discuss folder management from a site perspective rather than a file system perspective—user folders, or virtual folders if you desire.
Very similar to file storage, we will be storing various metadata about each folder. We will also have to keep track of who owns which folder, who can see which folder, or in the case of system folders whether everyone can see that folder. And of course, as each folder is a virtual container for a file, we will have to maintain the relationship between folders and files.

File upload

The file upload process will be handled by a Silverlight/Flash client. While this is not really an article about either Silverlight or Flash, we will show you how simple it is to create this Flash client, which really just provides a way to queue many files that need to be uploaded, and then upload them one at a time in a way that the server can handle each file. For the Silverlight option, we are using code from Codeplex: http://silverlightfileupld.codeplex.com/.

File system management

Managing the file system may seem like a non-issue to begin with. However, keep in mind that for a community site to be successful we will need at least 10,000 or so unique users. Given that sharing photos and other files is such a popular feature of most of today's community sites, this could easily translate into a lot of uploaded files. While you could technically store a large number of files in one directory on your web server, you will find that over time your application becomes more and more sluggish. You might also run into files being uploaded with the same name using this approach. Also, you may find that you have storage issues and need to split off some of your files to another disk or another server.

Many of these issues are easily handled if we think about and address them up front. In our case we will use a unique file name for each uploaded file. We will store each file in subdirectories that are also uniquely named, based on the year and month in which the file was uploaded. If you find that you have a high volume of files being uploaded each day, you may want to store your files in a folder with the year and month in the name of the folder, and then in another subdirectory for each day of that month. In addition to a good naming convention on the file system, we will store the root directory for each file in the database. Initially you may only have one root for your photos, one for videos, and so on. But storing it now will allow you to have multiple roots for your file storage—one root location per file. This gives you a lot of extensibility points over time, meaning that you could easily relocate entire sections of your file gallery to a separate disk or even a separate server.

Data management screens

Once we have all of the infrastructure in place, we will need to discuss all the data management screens that will be needed: everything from the UI for uploading files, to the screens for managing file metadata, to screens for creating new albums. Then we will need to tie into the rest of the framework and allow users to view their friends' uploaded file albums.

The solution

Let's take a look at our solution.

Implementing the database

First let's take a look at the tables required for these features (see the following screenshot).

Files

The most important thing to store in the database is, of course, our primary interest: files. As with most other conversations regarding a physical binary file, we always have to consider whether we want to store the file in the database or on the file system.
In this case we think it makes sense to store the file (and in the case of a photo, its various generated sizes) on the file system. This means that we will only be storing metadata about each file in our database. The most important field here to discuss is the FileSystemName. As you can see, this is a GUID value. We will be renaming uploaded files to GUIDs while keeping the original extension. This allows us to ensure that all the files in any given folder are uniquely named, and removes the need to worry about overwriting other files. Then we see the FileSystemFolderID. This is a reference to the FileSystemFolders table, which lets us know the root folder location where the file is stored.

Next on our list of items to discuss is the IsPublicResource flag. By its name it is quite clear that this flag sets a file as public or private, so that it can be seen either by all or only by its owner (AccountID). We then come to a field that may be somewhat confusing: DefaultFolderID. This has nothing to do with the file system folders. This is a user-created folder. When files are uploaded initially, they are put in a virtual folder. That initial virtual folder becomes the file's permanent home. This doesn't mean that it is the file's only home. As you will see later, files can live in many virtual folders by way of subscription to the other folders.

File system folders

As mentioned previously, the FileSystemFolders table is responsible for letting us know where our file's root directory is. This allows us to expand our system down the road to have multiple roots that could live on the same server but different disks, or on totally different servers. The fields in the table are Key, Path (URL), and a Timestamp.

File types

The FileTypes table helps us keep track of what sort of files we are storing and working with. This is a simple lookup table that tells us the extension of a given file.

Folders

Folders are virtual in this case. They provide us with a way to specify a container of files. In our case we will be containing photos, in which case folders will act as photo albums. The only field worth explaining here is the IsPublicResource flag, which allows us to specify whether a folder and its resources are public or private, that is, viewable by all or viewable only by the owner.

Folder types

The FolderTypes table gives us a way to specify the type of folder. Currently this is simply a Name (photos, movies, and so on). However, down the road you may want to specify an icon for each folder type, in which case this is the place where you would assign that specification.

Account folders

In the AccountFolders table we are able to specify additional ownership of a folder. So in the case that a folder is a public resource and external resources can own folders, we simply create the new ownership relationship here. This is not permanent ownership; permanent ownership is still specified by the Folders table's AccountID. This is a temporary ownership across many Accounts. As you can see in the previous screenshot, we have the owner (AccountID) and the folder that is to be owned (FolderID).

Account files

Similar to the AccountFolders table, the AccountFiles table allows someone to subscribe to a specific file. This could be used for the purposes of Favorites or similar concepts. The makeup of this table is identical to AccountFolders. You have the owner and the file being owned.
Folder files

The FolderFiles table allows an Account not only to subscribe to a file, similar to the Favorites concept, but also to take one of another user's files and put it into one of their own folders as though the file itself belonged to them. As you can see in the previous screenshot, this is primarily a table that holds the keys to the other tables. We have the FolderID, FileID, and AccountID for each file. This clearly specifies who is taking ownership of what and where they want it to be placed.

Creating the relationships

Once all the tables are created, we can then create all the relationships. For this set of tables we have relationships between the following tables:

Files and FileSystemFolders
Files and FileTypes
Files and Folders
Files and Accounts
Folders and Accounts
Folders and FolderTypes
AccountFolders and Accounts
AccountFolders and Folders
AccountFiles and Accounts
AccountFiles and Files
FolderFiles and Accounts
FolderFiles and Folders
FolderFiles and Files
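To make the file-naming scheme described earlier concrete, here is a rough C# sketch of our own (not code from the book) showing how an upload path could be built from a GUID file name and a year/month subfolder; the class and method names, and the example root path, are assumptions:

using System;
using System.IO;

public static class FileStorage
{
    // Builds the physical path for a newly uploaded file:
    // <root>\<yyyy>\<MM>\<guid><extension>
    // The root would come from the FileSystemFolders record.
    public static string BuildStoragePath(string rootPath, string originalFileName)
    {
        string extension = Path.GetExtension(originalFileName);
        string fileSystemName = Guid.NewGuid().ToString("N") + extension;

        string folder = Path.Combine(rootPath,
            DateTime.Now.Year.ToString(),
            DateTime.Now.Month.ToString("00"));
        Directory.CreateDirectory(folder); // no-op if it already exists

        return Path.Combine(folder, fileSystemName);
    }
}

Calling BuildStoragePath(@"C:\SiteFiles\Photos", "beach.jpg") would yield something like C:\SiteFiles\Photos\2011\04\3f2504e04f8911d39a0c0305e82c3301.jpg, so files never collide on name and each month's uploads stay in their own directory.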


EJB 3.1: Introduction to Interceptors

Packt
06 Jul 2011
7 min read
EJB 3.1 Cookbook: Build real world EJB solutions with a collection of simple but incredibly effective recipes with this book and eBook.

Introduction

Most applications have cross-cutting functions which must be performed. These cross-cutting functions may include logging, managing transactions, security, and other aspects of an application. Interceptors provide a way to achieve these cross-cutting activities. The use of interceptors provides a way of adding functionality to a business method without modifying the business method itself. The added functionality is not intermeshed with the business logic, resulting in a cleaner and easier to maintain application.

Aspect Oriented Programming (AOP) is concerned with providing support for these cross-cutting functions in a transparent fashion. While interceptors do not provide as much support as other AOP languages, they do offer a good level of support. Interceptors can be:

Used to keep business logic separate from non-business related activities
Easily enabled/disabled
Used to provide consistent behavior across an application

Interceptors are specific methods invoked around a method or methods of a target EJB. We will use the term target to refer to the class containing the method(s) an interceptor will be executing around. The interceptor's method will be executed before the EJB's method is executed. When the interceptor method executes, it is passed an InvocationContext object. This object provides information relating to the state of the interceptor and the target. Within the interceptor method, the InvocationContext's proceed method can be called, which will result in the target's business method being executed or, as we will see shortly, in the next interceptor in the chain being invoked. When the business method returns, the interceptor continues execution. This permits the execution of code before and after the execution of a business method.

Interceptors can be used with:

Stateless session EJBs
Stateful session EJBs
Singleton session EJBs
Message-driven beans

The @Interceptors annotation defines which interceptors will be executed for all or individual methods of a class. Interceptor classes share the lifecycle of the EJB they are applied to, which means that, in the case of stateful EJBs, the interceptor could be passivated and activated. In addition, they support the use of dependency injection. The injection is done using the EJB's naming context.

More than one interceptor can be used at a time. The sequence of interceptor execution is referred to as an interceptor chain. For example, an application may need to start a transaction based on the privileges of a user. These actions should also be logged. An interceptor can be defined for each of these activities: validating the user, starting the transaction, and logging the event. The use of interceptor chaining is illustrated in the Using interceptors to handle application statistics recipe.

Lifecycle callbacks such as @PreDestroy and @PostConstruct can also be used within interceptors. They can access interceptor state information, as discussed in the Using lifecycle methods in interceptors recipe.

Interceptors are useful for:

Validating parameters and potentially changing them before they are sent to a method
Performing security checks
Performing logging
Performing profiling
Gathering statistics

An example of parameter validation can be found in the Using the InvocationContext to verify parameters recipe. Security checks are illustrated in the Using interceptors to enforce security recipe.
The use of interceptor chaining to record a method's hit count and the time spent in the method is discussed in the Using interceptors to handle application statistics recipe. Interceptors can also be used in conjunction with timer services.

The recipes in this article are based largely around a conference registration application, as developed in the first recipe. It will be necessary to create this application before the other recipes can be demonstrated.

Creating the Registration Application

A RegistrationApplication is developed in this recipe. It provides the ability for attendees to register for a conference. The application will record their personal information using an entity and other supporting EJBs. This recipe details how to create this application.

Getting ready

The RegistrationApplication consists of the following classes:

Attendee – An entity representing a person attending the conference
AbstractFacade – A facade-based class
AttendeeFacade – The facade class for the Attendee class
RegistrationManager – Used to control the registration process
RegistrationServlet – The GUI interface for the application

The steps used to create this application include:

Creating the Attendee entity and its supporting classes
Creating a RegistrationManager EJB to control the registration process
Creating a RegistrationServlet to drive the application

The RegistrationManager will be the primary vehicle for the demonstration of interceptors.

How to do it...

Create a Java EE application called RegistrationApplication. Add a packt package to the EJB module and a servlet package in the application's WAR module.

Next, add an Attendee entity to the packt package. This entity possesses four fields: name, title, company, and id. The id field should be auto-generated. Add getters and setters for the fields. Also add a default constructor and a three-argument constructor for the first three fields. The major components of the class are shown below without the getters and setters.

@Entity
public class Attendee implements Serializable {
    private String name;
    private String title;
    private String company;
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Attendee() {
    }

    public Attendee(String name, String title, String company) {
        this.name = name;
        this.title = title;
        this.company = company;
    }
}

Next, add an AttendeeFacade stateless session bean which is derived from the AbstractFacade class. The AbstractFacade class is not shown here.

@Stateless
public class AttendeeFacade extends AbstractFacade<Attendee> {
    @PersistenceContext(unitName = "RegistrationApplication-ejbPU")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public AttendeeFacade() {
        super(Attendee.class);
    }
}

Add a RegistrationManager stateful session bean to the packt package. Add a single method, register, to the class. The method should be passed three strings for the name, title, and company of the attendee. It should return an Attendee reference. Use dependency injection to add a reference to the AttendeeFacade. In the register method, create a new Attendee and then use the AttendeeFacade class to create it. Next, return a reference to the Attendee.
@Stateful
public class RegistrationManager {
    @EJB
    AttendeeFacade attendeeFacade;
    Attendee attendee;

    public Attendee register(String name, String title, String company) {
        attendee = new Attendee(name, title, company);
        attendeeFacade.create(attendee);
        return attendee;
    }
}

In the servlet package of the WAR module, add a servlet called RegistrationServlet. Use dependency injection to add a reference to the RegistrationManager. In the try block of the processRequest method, use the register method to register an attendee and then display the attendee's name.

public class RegistrationServlet extends HttpServlet {
    @EJB
    RegistrationManager registrationManager;

    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet RegistrationServlet</title>");
            out.println("</head>");
            out.println("<body>");
            Attendee attendee = registrationManager.register("Bill Schroder",
                "Manager", "Acme Software");
            out.println("<h3>" + attendee.getName() + " has been registered</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the servlet. The output should appear as shown in the following screenshot:

How it works...

The Attendee entity holds the registration information for each participant. The RegistrationManager session bean only has a single method at this time. In later recipes we will augment this class to add other capabilities. The RegistrationServlet is the client for the EJBs.
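None of the interceptor recipes themselves appear in this excerpt, so as a preview, here is a minimal sketch of our own (not from the book) of what an interceptor around the register method could look like; the class name and log messages are illustrative:

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class LoggingInterceptor {

    // Invoked around each intercepted business method
    @AroundInvoke
    public Object logCall(InvocationContext context) throws Exception {
        System.out.println("Entering " + context.getMethod().getName());
        try {
            // Proceed to the next interceptor in the chain,
            // or to the business method itself
            return context.proceed();
        } finally {
            System.out.println("Exiting " + context.getMethod().getName());
        }
    }
}

Attaching it would then be a matter of annotating the RegistrationManager class, or just its register method, with @Interceptors(LoggingInterceptor.class).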

Magento : Payment and shipping method

Packt
13 Mar 2013
4 min read
Payment and shipping method

Magento CE comes with several payment and shipping methods out of the box. Since the total payment is calculated based on the order and shipping cost, it makes sense to first define our shipping method. The available shipping methods can be found in the Magento Admin Panel under the Shipping Methods section in System | Configuration | Sales.

Flat Rate, Table Rates, and Free Shipping methods fall under the category of static methods, while the others are dynamic. Dynamic means retrieval of rates from various shipping providers. Static means that shipping rates are based on a predefined set of rules. For a live production store you might be interested in obtaining a merchant account for one of the dynamic methods, because they enable potentially more precise shipping cost calculation with regard to product weight.

A clean installation of Magento CE comes with the Flat Rate shipping method turned on, so be sure to turn it off in production if it is not required, by setting the Enabled option in System | Configuration | Sales | Shipping Methods | Flat Rate to No.

Setting up the dynamic methods is pretty easy; all you need to do is obtain the access data from the shipping provider, such as FedEx, and then configure that access data under the proper shipping method configuration area, for example, System | Configuration | Sales | Shipping Methods | FedEx.

Payment method configuration is available under System | Configuration | Sales | Payment Methods. Similar to shipping methods, these are divided into two main groups, static and dynamic. Dynamic in this case means that an external payment gateway provider such as PayPal will actually charge the customer's credit card upon successful checkout. Static simply means that checkout will be completed, but you as a merchant will have to make sure that the customer actually paid for the order prior to shipping the products to him.

A clean installation of Magento CE comes with Saved CC, Check / Money order, and Zero Subtotal Checkout turned on, so be sure to turn these off in production if they are not required.

How to do it...

To configure a shipping method:

Log in to the Magento Admin Panel and go to System | Configuration | Sales | Shipping Methods.
Select an appropriate shipping method, configure its options, and click on the Save Config button.

To configure a payment method:

Log in to the Magento Admin Panel and go to System | Configuration | Sales | Payment Methods.
Select an appropriate payment method, configure its options, and click on the Save Config button.

How it works...

Once a certain shipping method is turned on, it will be visible on the frontend to the customer during the checkout's so-called Shipping Method step, as shown in the screenshot that follows. The shipping method's price is based on the customer's shipping address, the products in the cart, applied promo rules, and possibly other parameters. Numerous other shipping modules are provided at Magento Connect at http://www.magentocommerce.com/magento-connect/integrations/shippingfulfillment.html, and new ones are uploaded often, so this is by no means a final list of shipping methods.
Once a certain payment method is turned on, it will be visible on the frontend to the customer during the checkout's so-called Payment Information step, as shown in the following screenshot. Additionally, there are numerous other payment modules provided at Magento Connect at http://www.magentocommerce.com/magento-connect/integrations/payment-gateways.html.

Summary

In this article, we have explained the payment and shipping methods you may require when you build your own shop using Magento.


Your First Application in Aptana RadRails

Packt
28 Oct 2009
7 min read
Here we are! Programming in a powerful language specially designed for the Web, and using an IDE that promises to help us with many of the mechanical tasks involved in the coding. If you have already been programming with Rails, you probably know that if we take advantage of scaffolding we can have a simple web application for table maintenance in a matter of minutes (yes, no typo here: it really takes just a few minutes). And we are even talking about the database table creation process. If we wanted to add validations, a nice design, and some more complexity, we would be talking about a few hours. Still pretty impressive, depending on which programming language (or framework) you are coming from.

The truth is, creating the wireframe of your application in Rails is quick and easy enough even from the command line, but in this article we'll be learning how to do it a bit more comfortably by using RadRails for creating your models, controllers, and database migrations, and for starting your server and testing your application.

Basic Views

Most of the time when working with our IDE we will be using the Editor Area. Apart from that, two of the views we will be working with more frequently are the Ruby Explorer—the enhanced equivalent of the Rails Navigator, if you were using RadRails Classic—and the Console. Both of these views are fairly easy to use, but since they will be present at almost every point of the development process, it's worth getting familiar with them from the beginning.

The Ruby Explorer View

If you have already opened the Rails perspective, then you should be seeing the Ruby Explorer at the left of your workbench. This view looks like a file-tree pane. At the root level, you will find a folder for each of the projects in your workspace. By clicking on the icon to the left of the project name, you will unfold its files and folders. The Ruby files can be expanded too, displaying the modules, classes, variables, and methods defined in the selected file. By clicking on any of these elements you will be taken directly to the line in which it is defined.

Before navigating through the contents of a project, we have to open it. Just right-click on its name and choose Open Project. When opening a project, Eclipse will ask you if you want to open the referenced projects. By default, your projects don't have any references, and that's the most common scenario when working with a Rails application. If you want, you can include references to other projects in the workspace so you can open and close them together.

To view and change the project references, you can right-click on the project name, then select Properties. Notice you can also get here from the Project menu by selecting Properties. In the properties dialog, you have to select Project References. Here you will see a list of all the available projects in the workspace. Just check or uncheck all the projects you want to reference.

Once your project is open, the mechanism for navigating the contents is pretty straightforward. You can open or close any subfolders, and you can right-click on any item to get a context menu. From this menu you can perform common file operations like creating, renaming, or deleting a file. We will see more details about creating new files when talking about the Editor Area. There is also a Properties option from where you can change the encoding for a particular file, or the file attributes (read only, for example). The Properties option is also available at the project level.
Also in the context menu, you can see there is a Tail option. This works like the tail command in UNIX, displaying the contents of a file as it changes. This option is especially useful for quick monitoring of the log files of your application.

You can also find in the context menu two options with the names Compare With and Replace With. If you select either of them, you will see a new menu in which there is an option named Local history. This functionality is really interesting. You can compare your current version against an older version of the same file, or you could replace the contents with a previous one. This can be a life-saver, because when used on a folder the local history will contain copies even of deleted files. Comparing a file against another copy is a powerful tool, which can also be used when working with repositories or to compare different files with each other.

Let's try it and see how it works. Open any of the files in your project tree by double-clicking on the file name. Now go to the Editor Area and add some lines of mumbo-jumbo text. After you are done, click on the save icon on the toolbar or select Save from the File menu. Now let's go back to the Ruby Explorer, right-click on the file name, and select Compare With | Local History. You will see there are some entries here, one for each time we saved the file. If this was the first time you worked with the file, then there will be only two versions: the original and the one you just saved. Double-click on the oldest local version you have.

Now a new editor will be opened. The editor is divided into three panes: the top one displaying structural differences, the bottom-left one with the code of the current version, and the bottom-right one with the old version of the code. In the top pane, you will see the structural differences between the versions being compared. For every added or deleted method or variable—at instance or class level—you will see the name of the element with an icon displaying a plus or a minus sign. If a method exists in both versions but its content was changed, the name will be displayed without any additional icons.

When reviewing the differences/changes you will see the editors at both sides are linked with a line representing the parts that are not equal between the files. When you are on a given change/difference you can select the icon for 'copying current change from right to left' (or the other way round, depending on which of the files the change is in), which will overwrite the contents of the left editor with those of the right. You can also just manually edit or copy/paste in your editor as usual. There is an interesting icon labeled 'Copy all non-conflicting changes from right to left' that will do exactly as it promises. Any changes that can be automatically merged will be incorporated into your editor. Depending on the differences between the files, the icon could be the contrary, 'Copy all non-conflicting changes from left to right'.

When you finish comparing or modifying your current editor, remember to save the contents of the editor in order to keep your changes. If you just wanted to review the changes without any modifications, you can directly scroll down the editors, use the 'Previous' or 'Next' icons, or use the quick marks by the right margin.

You can also compare two files instead of comparing a file against an older version. Go to the Ruby Explorer and select one of the files, then hold down the control key and select another one.
With both files selected, right-click and select Compare With | Each Other. Once opened, the compare editor works exactly the same as when comparing a file against one of its older versions.
Read more
  • 0
  • 0
  • 2406