
How-To Tutorials - CMS and E-Commerce


Working with Client Object Model in Microsoft Sharepoint

Packt
05 Oct 2011
9 min read
Microsoft SharePoint 2010 is a best-in-class platform for content management and collaboration. With Visual Studio, developers have an end-to-end IDE for building business solutions. To leverage this powerful combination of tools, it is necessary to understand the different building blocks of SharePoint. In this article by Balaji Kithiganahalli, author of Microsoft SharePoint 2010 Development with Visual Studio 2010 Expert Cookbook, we will cover:

Creating a list using a Client Object Model
Handling exceptions
Calling the Object Model asynchronously

(For more resources on Microsoft SharePoint, see here.)

Introduction

Since the out-of-the-box web services do not provide the full functionality that the server model exposes, developers always end up creating custom web services for use with client applications. But there are situations where deploying custom web services may not be feasible: for example, if your company hosts SharePoint solutions in a cloud environment where access to the root folder is not permitted. In such cases, developing client applications with the new Client Object Model (OM) becomes a very attractive proposition. SharePoint exposes three OMs, which are as follows:

Managed
Silverlight
JavaScript (ECMAScript)

Each of these OMs provides an object interface to the functionality exposed in the Microsoft.SharePoint namespace. While none of the Object Models exposes the full functionality that the server-side object model does, a developer's understanding of the server Object Model translates easily into developing applications with a client OM.

A managed OM is used to develop custom .NET managed applications (services, WPF, or console applications). You can also use this OM for ASP.NET applications that are not running in the SharePoint context. A Silverlight OM is used by Silverlight client applications. A JavaScript OM is available only to applications hosted inside SharePoint, such as web part pages or application pages.
Even though each of the OMs provides a different programming interface to build applications, behind the scenes they all call a service named Client.svc to talk to SharePoint. This Client.svc file resides in the ISAPI folder. The service calls are wrapped in an Object Model that developers use to make calls to the SharePoint server. This way, developers make calls against an OM, and the calls are batched together in XML format and sent to the server. The response is always received in JSON format, which is then parsed and associated with the right objects.

The three Object Models come in separate assemblies. The following table lists their locations and names:

Managed: ISAPI folder; Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll
Silverlight: Layouts\ClientBin folder; Microsoft.SharePoint.Client.Silverlight.dll and Microsoft.SharePoint.Client.Silverlight.Runtime.dll
JavaScript: Layouts folder; SP.js

The Client Object Model can be downloaded as a redistributable package from the Microsoft download center at http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b4579045-b183-4ed4-bf61-dc2f0deabe47.

OM functionality focuses on objects at the site collection level and below, mainly because the OM is intended to enhance end-user interaction. Hence the OM is a smaller subset of what is available through the server Object Model. In all three Object Models, the main object names are kept the same, and hence knowledge of one OM is easily portable to another.
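The batch-and-execute flow described above can be sketched conceptually. The following Python snippet is purely illustrative (it is not the SharePoint API): commands queue up on the client, and a single round trip happens only when the whole batch is flushed.

```python
# Conceptual sketch only: client OM calls are queued locally and are
# sent to the server (Client.svc) in one batch when the query executes.
class FakeClientContext:
    def __init__(self, url):
        self.url = url
        self.pending = []      # commands batched on the client
        self.round_trips = 0   # server requests actually made

    def add(self, command):
        # No network traffic happens here.
        self.pending.append(command)

    def execute_query(self):
        # One request carries the whole batch; the reply (JSON in the
        # real platform) is parsed and mapped back onto client objects.
        self.round_trips += 1
        sent, self.pending = self.pending, []
        return {"processed": len(sent)}

ctx = FakeClientContext("http://intsp1")
for i in range(1, 11):
    ctx.add("AddListItem %d" % i)
result = ctx.execute_query()
print(result["processed"], ctx.round_trips)  # 10 1
```

Ten logical operations produce a single server round trip, which is the key performance property of all three client OMs.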
As indicated earlier, knowledge of the server Object Model transfers easily to development using the client OM. The following list shows some of the major objects in the client OM and their equivalent names in the server OM:

ClientContext - SPContext
Site - SPSite
Web - SPWeb
List - SPList
ListItem - SPListItem
Field - SPField

Creating a list using a Managed OM

In this recipe, we will learn how to create a list using a Managed Object Model. We will also add a new column to the list and insert ten rows of data. For this recipe, we will create a console application that makes use of a generic list template.

Getting ready

Copy the DLLs mentioned earlier to your development machine. Your development machine need not have SharePoint server installed, but you should be able to access one with proper permissions. You also need the Visual Studio 2010 IDE installed on the development machine.

How to do it...

In order to create a list using a Managed OM, adhere to the following steps:

1. Launch your Visual Studio 2010 IDE as an administrator (right-click the shortcut and select Run as administrator).
2. Select File | New | Project. The new project wizard dialog box will be displayed (make sure to select .NET Framework 3.5 in the top drop-down box).
3. Select Console Application under the Visual C# | Windows node in the Installed Templates section on the left-hand side.
4. Name the project OMClientApplication, provide a directory location where you want to save the project, and click on OK to create the console application template.
5. To add references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll, go to the menu Project | Add Reference, navigate to the location where you copied the DLLs, and select them.
6. Now add the code necessary to create a list. A description field will also be added to our list.
Your code should look like the following (make sure to change the URL passed to the ClientContext constructor to your environment):

using System;
using Microsoft.SharePoint.Client;

namespace OMClientApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext clientCtx = new ClientContext("http://intsp1"))
            {
                Web site = clientCtx.Web;

                // Create a list.
                ListCreationInformation listCreationInfo = new ListCreationInformation();
                listCreationInfo.Title = "OM Client Application List";
                listCreationInfo.TemplateType = (int)ListTemplateType.GenericList;
                listCreationInfo.QuickLaunchOption = QuickLaunchOptions.On;
                List list = site.Lists.Add(listCreationInfo);

                // Add a required multi-line "Item Description" field.
                string DescriptionFieldSchema =
                    "<Field Type='Note' DisplayName='Item Description' Name='Description' Required='True' MaxLength='500' NumLines='10' />";
                list.Fields.AddFieldAsXml(DescriptionFieldSchema, true, AddFieldOptions.AddToDefaultContentType);

                // Insert 10 rows of data - concatenate the loop id with the "Item number" string.
                for (int i = 1; i < 11; ++i)
                {
                    ListItemCreationInformation listItemCreationInfo = new ListItemCreationInformation();
                    ListItem li = list.AddItem(listItemCreationInfo);
                    li["Title"] = string.Format("Item number {0}", i);
                    li["Item_x0020_Description"] = string.Format("Item number {0} from client Object Model", i);
                    li.Update();
                }

                // Send the batched commands to the server.
                clientCtx.ExecuteQuery();
                Console.WriteLine("List creation completed");
                Console.Read();
            }
        }
    }
}

Build and execute the solution by pressing F5 or from the menu Debug | Start Debugging. This should bring up the command window with a message indicating that list creation completed. Press Enter and close the command window. Navigate to your site to verify that the list has been created with the new field and the ten items inserted.

How it works...

The first line of code in the Main method creates an instance of the ClientContext class.
The ClientContext instance provides information about the SharePoint server context in which we will be working; it is also our proxy for the server. We passed the URL to the context to get an entry point to that location. When you have access to the context instance, you can browse the site, web, and list objects of that location, and access all their properties, like Name, Title, Description, and so on.

The ClientContext class implements the IDisposable interface, and hence you need to use the using statement. Without it, you have to explicitly dispose of the object; if you do not, your application will leak memory. For more information on disposing of objects, refer to MSDN at http://msdn.microsoft.com/en-us/library/ee557362.aspx.

From the context, we obtained access to the site object on which we wanted to create the list. We provided the properties for our new list through a ListCreationInformation instance: the list name, the template to use, whether the list should be shown in the Quick Launch bar, and so on. We added a new field to the list's field collection by providing the field schema. Each ListItem is created by providing a ListItemCreationInformation instance, which is similar to ListCreationInformation and carries information about the list item, such as whether it belongs to a document library. For more information on ListCreationInformation and ListItemCreationInformation members, refer to MSDN at http://msdn.microsoft.com/en-us/library/ee536774.aspx.

All of this information is structured as XML and batched together to send to the server. In our case, we created a list, added a new field, and added ten list items.
Each of these would have an equivalent server-side call, and hence all these calls were batched together and sent to the server. The request is only sent when we invoke the ExecuteQuery or ExecuteQueryAsync method on the client context. ExecuteQuery creates an XML request and passes it to Client.svc, which makes the server Object Model calls needed to execute our request. The application waits until the batch process on the server completes and then receives the JSON response.

There's more...

By default, a ClientContext instance uses Windows authentication. It makes use of the Windows identity of the person executing the application, so that person must have proper authorization on the site to execute the commands; exceptions are thrown if the required permissions are not available. We will learn about handling exceptions in the next recipe.

ClientContext also supports Anonymous and FBA (ASP.NET forms-based authentication) authentication. The following is the code for passing FBA credentials if your site supports it:

using (ClientContext clientCtx = new ClientContext("http://intsp1"))
{
    clientCtx.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;
    FormsAuthenticationLoginInfo fba = new FormsAuthenticationLoginInfo("username", "password");
    clientCtx.FormsAuthenticationLoginInfo = fba;
    // Business logic
}

Impersonation

In order to impersonate, you can pass credential information to the ClientContext as shown in the following code:

clientCtx.Credentials = new NetworkCredential("username", "password", "domainname");

Passing credential information this way is supported only in the Managed OM.
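The difference between ExecuteQuery and ExecuteQueryAsync mentioned above can be sketched as follows. This is a hedged, language-neutral illustration (not the Managed OM itself): the synchronous call blocks until the server batch completes, while the asynchronous variant runs the round trip on another thread and reports the result through a callback.

```python
# Conceptual sketch of synchronous vs asynchronous batch execution.
import threading

def execute_query():
    # Stands in for the blocking server round trip.
    return {"status": "succeeded"}

def execute_query_async(on_succeeded):
    # The caller's thread stays free while the batch runs elsewhere;
    # the callback receives the result when the server finishes.
    t = threading.Thread(target=lambda: on_succeeded(execute_query()))
    t.start()
    return t

results = []
worker = execute_query_async(results.append)
worker.join()
print(results[0]["status"])  # succeeded
```

The asynchronous form is what keeps a UI (or Silverlight client, which allows only asynchronous calls) responsive while a large batch executes.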


Creating a Responsive Magento Theme with Bootstrap 3

Packt
21 Apr 2014
13 min read
In this article by Andrea Saccà, the author of Mastering Magento Theme Design, we will learn how to integrate the Bootstrap 3 framework and how to develop the main theme blocks. The following topics will be covered in this article:

An introduction to Bootstrap 3
Downloading Bootstrap (the current Version 3.1.1)
Downloading and including jQuery
Integrating the files into the theme
Defining the main layout design template

(For more resources related to this topic, see here.)

An introduction to Bootstrap 3

Bootstrap is a sleek, intuitive, and powerful mobile-first frontend framework that enables faster and easier web development. Bootstrap 3 is the most popular frontend framework for creating mobile-first websites. Created by the Twitter team, it includes a free collection of buttons, CSS components, and JavaScript for building websites or web applications.

Downloading Bootstrap (the current Version 3.1.1)

First, you need to download the latest version of Bootstrap; at the time of writing, the current version is 3.1.1. You can download the framework from http://getbootstrap.com/. The fastest way to get started is to download the precompiled and minified versions of the CSS, JavaScript, and fonts, so click on the Download Bootstrap button and unzip the file you downloaded. We need only the minified versions of the files, that is, bootstrap.min.css from css, bootstrap.min.js from js, and all the files from fonts. For development, you can use bootstrap.css so that you can inspect the code and learn, and then switch to bootstrap.min.css when you go live. Copy all the selected files (the CSS files into the css folder, the .js files into the js folder, and the font files into the fonts folder) in the theme skin folder at skin/frontend/bookstore/default.
Downloading and including jQuery

Bootstrap depends on jQuery, so we have to download and include it before including bootstrap.min.js. Download jQuery from http://jquery.com/download/. We will use the compressed production Version 1.10.2. Once you download jQuery, rename the file to jquery.min.js and copy it into the js skin folder at skin/frontend/bookstore/default/js/. In the same folder, also create the jquery.scripts.js file, where we will insert our custom scripts.

Magento uses Prototype as its main JavaScript library. To make jQuery work correctly without conflicts, you need to insert the no-conflict code in the jquery.scripts.js file, as shown in the following code:

// This is important!
jQuery.noConflict();

jQuery(document).ready(function() {
    // Insert your scripts here
});

Integrating the files into the theme

Now that we have all the files, we will see how to integrate them into the theme. To declare the new JavaScript and CSS files, we have to insert the actions in the local.xml file located at app/design/frontend/bookstore/default/layout. In particular, the file declarations need to be made in the default handle to make them accessible to the whole theme. The default handle is defined by the following tags:

<default>
. . .
</default>

The actions to insert the JavaScript and CSS files must be placed inside the reference head block.
So, open the local.xml file and first create the following block that will define the reference:

<reference name="head">
…
</reference>

Declaring the .js files in local.xml

The action tag used to declare a new .js file located in the skin folder is as follows:

<action method="addItem">
    <type>skin_js</type><name>js/myjavascript.js</name>
</action>

In our skin folder, we copied the following three .js files:

jquery.min.js
jquery.scripts.js
bootstrap.min.js

Let's declare them as follows:

<action method="addItem">
    <type>skin_js</type><name>js/jquery.min.js</name>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/bootstrap.min.js</name>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/jquery.scripts.js</name>
</action>

Declaring the CSS files in local.xml

The action tag used to declare a new CSS file located in the skin folder is as follows:

<action method="addItem">
    <type>skin_css</type><name>css/mycss.css</name>
</action>

In our skin folder, we have copied the following three .css files:

bootstrap.min.css
styles.css
print.css

So let's declare these files as follows:

<action method="addItem">
    <type>skin_css</type><name>css/bootstrap.min.css</name>
</action>
<action method="addItem">
    <type>skin_css</type><name>css/styles.css</name>
</action>
<action method="addItem">
    <type>skin_css</type><name>css/print.css</name>
</action>

Repeat this action for all the additional CSS files. All the JavaScript and CSS files that you insert into the local.xml file will load after the files declared in the base theme.

Removing and adding the style.css file

By default, the base theme includes a CSS file called styles.css, which is hierarchically placed before bootstrap.min.css. One of the best practices to overwrite the Bootstrap CSS classes in Magento is to remove the default CSS file declared by the base theme, and declare it again after Bootstrap's CSS files.
Thus, the styles.css file loads after Bootstrap, and all the classes defined in it will overwrite those in bootstrap.min.css. To do this, we first remove the styles.css file by adding the following action tag, just before all the CSS declarations we have already made:

<action method="removeItem">
    <type>skin_css</type> <name>css/styles.css</name>
</action>

Then we add it again just after adding Bootstrap's CSS file (bootstrap.min.css):

<action method="addItem">
    <type>skin_css</type> <stylesheet>css/styles.css</stylesheet>
</action>

If it seems a little confusing, the following is a quick view of the CSS declarations:

<!-- Removing the styles.css declared in the base theme -->
<action method="removeItem">
    <type>skin_css</type> <name>css/styles.css</name>
</action>
<!-- Adding Bootstrap Css -->
<action method="addItem">
    <type>skin_css</type> <stylesheet>css/bootstrap.min.css</stylesheet>
</action>
<!-- Adding the styles.css again -->
<action method="addItem">
    <type>skin_css</type> <stylesheet>css/styles.css</stylesheet>
</action>

Adding conditional JavaScript code

If you check the Bootstrap documentation, you can see that in the HTML5 boilerplate template, the following conditional JavaScript code is added to make Internet Explorer (IE) HTML5 compliant:

<!--[if lt IE 9]>
    <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
    <script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
<![endif]-->

To integrate them into the theme, we can declare them in the same way as the other script tags, but with conditional parameters. To do this, we need to perform the following steps:

Download the files at https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js and https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js.
Move the downloaded files into the js folder of the theme.
As always, integrate the JavaScript through the .xml file, but this time with the conditional parameters as follows:

<action method="addItem">
    <type>skin_js</type><name>js/html5shiv.js</name>
    <params/><if>lt IE 9</if>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/respond.min.js</name>
    <params/><if>lt IE 9</if>
</action>

A quick recap of our local.xml file

Now, after inserting all the JavaScript and CSS files, the final local.xml file should look as follows:

<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
    <default translate="label" module="page">
        <reference name="head">
            <!-- Adding Javascripts -->
            <action method="addItem">
                <type>skin_js</type>
                <name>js/jquery.min.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/bootstrap.min.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/jquery.scripts.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/html5shiv.js</name>
                <params/><if>lt IE 9</if>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/respond.min.js</name>
                <params/><if>lt IE 9</if>
            </action>
            <!-- Removing the styles.css -->
            <action method="removeItem">
                <type>skin_css</type><name>css/styles.css</name>
            </action>
            <!-- Adding Bootstrap Css -->
            <action method="addItem">
                <type>skin_css</type>
                <stylesheet>css/bootstrap.min.css</stylesheet>
            </action>
            <!-- Adding the styles.css -->
            <action method="addItem">
                <type>skin_css</type>
                <stylesheet>css/styles.css</stylesheet>
            </action>
        </reference>
    </default>
</layout>

Defining the main layout design template

A quick tip for our theme is to define the main template for the site in the default handle. To do this, we have to define the template in the most important reference, root. In a few words, the root reference is the block that defines the structure of a page.
Let's suppose that we want the theme's main structure to have two columns with a left sidebar. To set it, we add the setTemplate action in the root reference as follows:

<reference name="root">
    <action method="setTemplate">
        <template>page/2columns-left.phtml</template>
    </action>
</reference>

You have to insert the reference name "root" tag with the action inside the default handle, usually before every other reference.

Defining the HTML5 boilerplate for main templates

After integrating Bootstrap and jQuery, we have to create our HTML5 page structure for the entire base template. The structure files are located at app/design/frontend/bookstore/template/page/:

1column.phtml
2columns-left.phtml
2columns-right.phtml
3columns.phtml

Twitter Bootstrap uses a scaffolding of containers, rows, and 12 columns, so its page layout would be as follows:

<div class="container">
    <div class="row">
        <div class="col-md-3"></div>
        <div class="col-md-9"></div>
    </div>
</div>

This structure is very important for creating responsive sections of the store. Now we need to edit the templates to change them to HTML5 and add the Bootstrap scaffolding. Let's look at the following 2columns-left.phtml main template file:

<!DOCTYPE HTML>
<html>
<head>
    <?php echo $this->getChildHtml('head') ?>
</head>
<body<?php echo $this->getBodyClass()?' class="'.$this->getBodyClass().'"':'' ?>>
    <?php echo $this->getChildHtml('after_body_start') ?>
    <?php echo $this->getChildHtml('global_notices') ?>
    <header>
        <?php echo $this->getChildHtml('header') ?>
    </header>
    <section id="after-header">
        <div class="container">
            <?php echo $this->getChildHtml('slider') ?>
        </div>
    </section>
    <section id="maincontent">
        <div class="container">
            <div class="row">
                <?php echo $this->getChildHtml('breadcrumbs') ?>
                <aside class="col-left sidebar col-md-3">
                    <?php echo $this->getChildHtml('left') ?>
                </aside>
                <div class="col-main col-md-9">
                    <?php echo $this->getChildHtml('global_messages') ?>
                    <?php echo $this->getChildHtml('content') ?>
                </div>
            </div>
        </div>
    </section>
    <footer id="footer">
        <div class="container">
            <?php echo $this->getChildHtml('footer') ?>
        </div>
    </footer>
    <?php echo $this->getChildHtml('before_body_end') ?>
    <?php echo $this->getAbsoluteFooter() ?>
</body>
</html>

You will notice that I removed the Magento layout classes col-main, col-left, main, and so on, as these are replaced by the Bootstrap classes. I also added a new section, after-header, because we will need it after we develop the home page slider. Don't forget to replicate this structure in the other template files (1column.phtml, 2columns-right.phtml, and 3columns.phtml), changing the columns as needed.

Summary

We've seen how to integrate Bootstrap and start developing a Magento theme with the most famous frontend framework in the world. Bootstrap is very neat, flexible, and modular, and you can use it as you prefer to create your custom theme. However, keep in mind that it can weigh heavily on the page's loading time. By adding the JavaScript and CSS files via XML as shown here, you allow Magento to combine them to speed up the loading time of the site.


Creating and Consuming Web Services in CakePHP 1.3

Packt
10 Mar 2011
7 min read
CakePHP 1.3 Application Development Cookbook: over 70 great recipes for developing, maintaining, and deploying web applications.

Creating an RSS feed

RSS feeds are a form of web service, as they provide a service over the web using a known format to expose data. Due to their simplicity, they are a great way to introduce us to the world of web services, particularly as CakePHP offers a built-in method to create them. In this recipe, we will produce a feed for our site that can be used by other applications.

Getting ready

To go through this recipe, we need a sample table to work with. Create a table named posts, using the following SQL statement:

CREATE TABLE `posts` (
    `id` INT NOT NULL AUTO_INCREMENT,
    `title` VARCHAR(255) NOT NULL,
    `body` TEXT NOT NULL,
    `created` DATETIME NOT NULL,
    `modified` DATETIME NOT NULL,
    PRIMARY KEY(`id`)
);

Add some sample data, using the following SQL statement:

INSERT INTO `posts`(`title`, `body`, `created`, `modified`) VALUES
('Understanding Containable', 'Post body', NOW(), NOW()),
('Creating your first test case', 'Post body', NOW(), NOW()),
('Using bake to start an application', 'Post body', NOW(), NOW()),
('Creating your first helper', 'Post body', NOW(), NOW()),
('Adding indexes', 'Post body', NOW(), NOW());

We proceed now to create the required controller.
Create the class PostsController in a file named posts_controller.php and place it in your app/controllers folder, with the following contents:

<?php
class PostsController extends AppController {
    public function index() {
        $posts = $this->Post->find('all');
        $this->set(compact('posts'));
    }
}
?>

Create a folder named posts in your app/views folder, and then create the index view in a file named index.ctp and place it in your app/views/posts folder, with the following contents:

<h1>Posts</h1>
<?php if (!empty($posts)) { ?>
<ul>
    <?php foreach($posts as $post) { ?>
    <li><?php echo $this->Html->link(
        $post['Post']['title'],
        array('action'=>'view', $post['Post']['id'])
    ); ?></li>
    <?php } ?>
</ul>
<?php } ?>

How to do it...

Edit your app/config/routes.php file and add the following statement at the end:

Router::parseExtensions('rss');

Edit your app/controllers/posts_controller.php file and add the following property to the PostsController class:

public $components = array('RequestHandler');

While still editing PostsController, make the following changes to the index() method:

public function index() {
    $options = array();
    if ($this->RequestHandler->isRss()) {
        $options = array_merge($options, array(
            'order' => array('Post.created' => 'desc'),
            'limit' => 5
        ));
    }
    $posts = $this->Post->find('all', $options);
    $this->set(compact('posts'));
}

Create a folder named rss in your app/views/posts folder, and inside the rss folder create a file named index.ctp, with the following contents:

<?php
$this->set('channel', array(
    'title' => 'Recent posts',
    'link' => $this->Rss->url('/', true),
    'description' => 'Latest posts in my site'
));
$items = array();
foreach($posts as $post) {
    $items[] = array(
        'title' => $post['Post']['title'],
        'link' => array('action'=>'view', $post['Post']['id']),
        'description' => array('cdata'=>true, 'value'=>$post['Post']['body']),
        'pubDate' => $post['Post']['created']
    );
}
echo $this->Rss->items($items);
?>

Edit your app/views/posts/index.ctp file and add the
following at the end of the view:

<?php echo $this->Html->link('Feed', array('action'=>'index', 'ext'=>'rss')); ?>

If you now browse to http://localhost/posts, you should see a listing of posts with a link entitled Feed. Clicking on this link should produce a valid RSS feed. If you view the source of the generated response, you can see that the source for the first item within the RSS document is:

<item>
    <title>Understanding Containable</title>
    <link>http://rss.cookbook7.kramer/posts/view/1</link>
    <description><![CDATA[Post body]]></description>
    <pubDate>Fri, 20 Aug 2010 18:55:47 -0300</pubDate>
    <guid>http://rss.cookbook7.kramer/posts/view/1</guid>
</item>

How it works...

We started by telling CakePHP that our application accepts the rss extension with a call to Router::parseExtensions(), a method that accepts any number of extensions. Using extensions, we can create different versions of the same view. For example, if we wanted to accept both rss and xml as extensions, we would do:

Router::parseExtensions('rss', 'xml');

In our recipe, we added rss to the list of valid extensions. That way, if an action is accessed using that extension, for example, through the URL http://localhost/posts.rss, CakePHP will identify rss as a valid extension and will execute the PostsController::index() action as it normally would, but using the app/views/posts/rss/index.ctp file to render the view. The process also uses the file app/views/layouts/rss/default.ctp as its layout, or CakePHP's default RSS layout if that file is not present.

We then modified how PostsController::index() builds the list of posts, using the RequestHandler component to see if the current request uses the rss extension. If so, we use that knowledge to change the number and order of posts.

In the app/views/posts/rss/index.ctp view, we start by setting some view variables.
Because a controller view is always rendered before the layout, we can add or change view variables from the view file and have them available in the layout. CakePHP's default RSS layout uses a $channel view variable to describe the RSS feed. Using that variable, we set our feed's title, link, and description.

We then proceed to output the actual items. There are different ways to do so: the first is making a call to the RssHelper::item() method for each item, and the other requires only a single call to RssHelper::items(), passing it an array of items. We chose the latter method due to its simplicity. While building the array of items to be included in the feed, we only specify title, link, description, and pubDate. Looking at the generated XML source for the item, we can infer that the RssHelper used our value for the link element as the value for the guid (globally unique identifier) element.

Note that the description field is specified slightly differently than the other fields in our item array. This is because our description may contain HTML code, so we want to make sure that the generated document is still a valid XML document. By using the array notation for the description field, a notation that uses the value index to specify the actual value of the field, and by setting cdata to true, we are telling the RssHelper (actually the XmlHelper, from which RssHelper descends) that the field should be wrapped in a section that is not parsed as part of the XML document, delimited by a <![CDATA[ prefix and a ]]> postfix.

The final task in this recipe is adding a link to our feed, which is shown in the index.ctp view file. While creating this link, we set the special ext URL setting to rss. This sets the extension for the generated link, which ends up being http://localhost/posts.rss.
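The extension-to-view mapping at the heart of this recipe can be sketched in a few lines. This is a conceptual illustration (in Python, not CakePHP internals), using the paths from the recipe: a registered extension simply selects a view subfolder, while the controller action stays the same.

```python
# Sketch of parseExtensions() view selection: a registered extension
# maps the request to a view file inside a subfolder named after it.
ACCEPTED = {"rss"}  # as registered via Router::parseExtensions('rss')

def view_file(controller, action, ext=None):
    if ext in ACCEPTED:
        return "app/views/%s/%s/%s.ctp" % (controller, ext, action)
    return "app/views/%s/%s.ctp" % (controller, action)

print(view_file("posts", "index"))         # app/views/posts/index.ctp
print(view_file("posts", "index", "rss"))  # app/views/posts/rss/index.ctp
```

This is why http://localhost/posts and http://localhost/posts.rss run the same action but render entirely different documents.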


The Spotfire Architecture Overview

Packt
21 Oct 2013
6 min read
(For more resources related to this topic, see here.)

The companies of today face innumerable market challenges due to the ferocious competition of a globalized economy. Hence, providing excellent service and earning customer loyalty are priorities for their survival. In order to achieve both goals and gain a competitive edge, companies can resort to the information generated by their digitalized systems, their IT. All the recorded events, from Human Resources (HR) to Customer Relationship Management (CRM), Billing, and so on, can be leveraged to better understand the health of a business. The purpose of this article is to present a tool that behaves as a digital event analysis enabler: the TIBCO Spotfire platform. In this article, we will list the main characteristics of this platform, while also presenting its architecture and describing its components.

TIBCO Spotfire

Spotfire is a visual analytics and business intelligence platform from TIBCO Software. It is part of a new breed of tools created to bridge the gap between the massive amount of data that corporations produce today and the business users who need to interpret this data in order to have the best foundation for the decisions they make. In my opinion, there is no better description of what TIBCO Spotfire delivers than the definition of visual analytics made in the publication Illuminating the Path: The Research and Development Agenda for Visual Analytics:

"Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces."

–Illuminating the Path: The Research and Development Agenda for Visual Analytics, James J. Thomas and Kristin A. Cook, IEEE Computer Society Press

The TIBCO Spotfire platform offers the possibility of creating very powerful, yet easy to interpret and interact with, data visualizations. From real-time Business Activity Monitoring (BAM) to Big Data, data-based decision making becomes easy: the what, the why, and the how become evident.
Spotfire definitely allowed TIBCO to establish itself in an area where until recently it had very little experience and no sought-after products. The main features of this platform are:

More than 30 different data sources to choose from: Several databases (including Big Data Teradata), web services, files, and legacy applications.
Big Data analysis: Spotfire delivers the power of MapReduce to regular users.
Database analysis: Data visualizations can be built on top of databases using information links. There is no need to pull the analyzed data into the platform, as a live link is maintained between Spotfire and the database.
Visual join: Capability of merging data from several distinct sources into a single visualization.
Rule-based visualizations: The platform enables the creation and tailoring of rules, and the filtering of data. These features facilitate emphasizing of outliers and foster management by exception. It is also possible to highlight other important features, such as commonalities and anomalies.
Data drill-down: For data visualizations it is possible to create one (or many) detail visualization(s). This drill-down can be performed in multiple steps, as drill-down of drill-down of drill-down, and so on.
Real-time integration with other TIBCO tools.

Spotfire platform 5.x

The platform is composed of several intercommunicating components, each one with its own responsibilities (with clear separation of concerns), enabling a clustered deployment. As this is an introductory article, we will not dive deep into all the components, but we will identify the main ones and the underlying architecture.
A depiction of the platform's components is shown in the following diagram:

The descriptions of each of the components in the preceding diagram are as follows:

TIBCO Spotfire Server: The Spotfire server makes a set of services available to the analytics clients (TIBCO Spotfire Professional and TIBCO Spotfire Web Player Server):
User services: Responsible for authentication and authorization.
Deployment services: Handle the consistent upgrade of Spotfire clients.
Library services: Manage the repository of analysis files.
Information services: Persist information links to external data sources.
The Server component is available for several operating systems, such as Linux, Solaris, and Windows.

TIBCO Spotfire Professional: This is a client application (fat client) that focuses on the creation of data visualizations, taking advantage of all of the platform's features. This is the main client application, and because of that, it has all the data manipulation functionalities enabled, such as the use of data filters, drill-down, working online and offline (working offline allows embedding data in the visualizations for use in limited-connectivity environments), and exporting visualizations to MS PowerPoint, PDF, and HTML. It is only available for the Windows environment.

TIBCO Spotfire Web Player Server: This offers users the possibility of accessing and interacting with visualizations created in TIBCO Spotfire Professional. The existence of this web application enables the usage of an Internet browser as a client, allowing for thin-client access where no software has to be installed on the user's machine. Please be aware that the visualizations cannot be created or altered this way. They can only be accessed in a read-only mode, where all rules are enabled, as is data drill-down. Since it is developed in ASP.NET, this server must be deployed on a Microsoft IIS server, and so it is restricted to Microsoft Windows environments.
Server Database: This database is accessed by the TIBCO Spotfire Server for storage of server information. It should not be confused with the data stores that the platform can access to fetch data from, and build visualizations. Only two vendor databases are supported for this role: Oracle Database and Microsoft SQL Server.

TIBCO Spotfire Web Player Client: These are thin clients to the Web Player Server. Several Internet browsers can be used on various operating systems (Microsoft Internet Explorer on Windows, Mozilla Firefox on Windows and Mac OS, Google Chrome on Windows and Android, and so on). TIBCO has also made available an application for iPad, which is available in iTunes. For more details on the iPad client application, please navigate to: https://itunes.apple.com/en/app/spotfire-analytics/id417436823?mt=8

Summary

In this article, we introduced the main attributes of the Spotfire platform in the scope of visual analytics, and we detailed the platform's underlying architecture.

Resources for Article:

Further resources on this subject:
Core Data iOS: Designing a Data Model and Building Data Objects [Article]
Database/Data Model Round-Trip Engineering with MySQL [Article]
Drilling Back to Source Data in Dynamics GP 2013 using Dashboards [Article]
Packt
30 Apr 2010
11 min read

Vim 7.2 Formatting Code

Formatting code often depends on many different things. Each programming language has its own syntax, and some languages rely on formatting, like indentation, more than others. In some cases, the programmer is following style guidelines given by an employer so that the code can follow the company-wide style. So, how should Vim know how you want your code to be formatted? The short answer is that it shouldn't! But by being flexible, Vim can let you set up exactly how you want your formatting done. However, the fact is that even though formatting differs, most styles of formatting follow the same basic rules. This means that in reality, you only have to change the things that differ. In most cases, the changes can be handled by changing a range of settings in Vim. Among these, there are a few especially worth mentioning:

formatoptions: This setting holds formatting-specific settings (see :help 'fo')
comments: What comments are and how they should be formatted (see :help 'com')
(no)expandtab: Convert tabs to spaces (see :help 'expandtab')
softtabstop: How many spaces a single tab is converted to (see :help 'sts')
tabstop: How many spaces a tab looks like (see :help 'ts')

With these options, you can set nearly every aspect of how Vim will indent your code, and whether it should use spaces or tabs for indentation. But this is not enough, because you still have to tell Vim if it should actually try to do the indentation for you, or if you want to do it manually. If you want Vim to do the indentation for you, you have the choice between four different ways for Vim to do it. In the following sections, we will look at the options you can set to interact with the way Vim indents code.

Autoindent

Autoindent is the simplest way of getting Vim to indent your code. It simply stays at the same indentation level as the previous line. So, if the current line is indented with four spaces, then the new line you add by pressing Enter will automatically be indented with four spaces too.
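Conceptually, autoindent just copies the previous line's leading whitespace onto each newly opened line. A minimal Python sketch of that behavior (illustrative only; this is not Vim's implementation):

```python
def autoindent(previous_line: str) -> str:
    """Return the leading whitespace of the previous line, which an
    autoindent-style editor prepends to a newly opened line."""
    stripped = previous_line.lstrip()
    return previous_line[:len(previous_line) - len(stripped)]

# A new line opened below a line indented with four spaces
# starts with the same four spaces:
print(repr(autoindent("    print \"hello\";")))  # -> '    '
```

This also shows why autoindent is "dumb": it never inspects the content of the line, only its indentation, so changing levels is left entirely to you.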
It is then up to you as to how and when the indentation level needs to change again. This type of indentation is particularly good for languages where the indentation stays the same for several lines in a row. You get autoindent by using :set autoindent or :set ai.

Smartindent

Smartindent is the next step when you want a smarter indent than autoindent. It still gives you the indentation from the previous line, but you don't have to change the indentation level yourself. Smartindent recognizes the most common structures from the C programming language and uses these as markers for when to add / remove the indentation levels. As many languages are loosely based on the same syntax as C, this will work for those languages as well. You get smartindent by using any of the following commands:

:set smartindent
:set si

Cindent

Cindent is often called clever indent or configurable indent because it is more configurable than the previous two indentation methods. You have access to three different setup options:

cinkeys

This option contains a comma-separated list of keys that Vim should use to change the indentation level. An example could be: :set cinkeys="0{,0},0#,:", which means that it should reindent whenever it hits a {, a }, or a # as the first character on the line, or if you use : as the last character on the line (as used in switch constructs in many languages). The default value for cinkeys is "0{,0},0),:,0#,!^F,o,O,e". See :help 'cinkeys' for more information on what else you can set in this option.

cinoptions

This option contains all the special options you can set specifically for cindent. A large range of options can be set in this comma-separated list. An example could be :set cinoptions=">2,{3,}3", which means that we want Vim to add two extra spaces to the normal indent length, and we want to place { and } three spaces in as compared to the previous line.
So, if we have a normal indent of four spaces, then the previous example could result in the code looking like this (dot marks represent a space):

if( a == b)
...{
......print "hello";
...}

The default value for cinoptions is this quite long string:
">s,e0,n0,f0,{0,}0,^0,:s,=s,l0,b0,gs,hs,ps,ts,is,+s,c3,C0,/0,(2s,us,U0,w0,W0,m0,j0,)20,*30". See :help 'cinoptions' for more information on all the options.

cinwords

This option contains all the special keywords that will make Vim add indentation on the next line. An example could be: :set cinwords="if,else,do,while,for,switch", which is also the default value for this option. See :help 'cinwords' for more information.

Indentexpr

Indentexpr is the most flexible indent option to use, but also the most complex. When used, indentexpr evaluates an expression to compute the indent of a line. Hence, you have to write an expression that Vim can evaluate. You can activate this option by simply setting it to a specific expression, such as:

:set indentexpr=MyIndenter()

Here, MyIndenter() is a function that computes the indentation for the lines it is executed on. A very simple example could be a function that emulates the autoindent option:

function! MyIndenter()
  " Find previous non-blank line and get its indentation
  let prev_lineno = prevnonblank(v:lnum - 1)
  let ind = indent( prev_lineno )
  return ind
endfunction

Adding just a bit more functionality than this, the complexity increases quite fast. Vim comes with a lot of different indent expressions for many programming languages. These can serve as inspiration if you want to write your own indent expression. You can find them in the indent folder in your VIMHOME. You can read more about how to use indentexpr in :help 'indentexpr' and :help 'indent-expression'.

Fast code-block formatting

After you have configured your code formatting, you might want to update your code to follow these settings.
To do so, you simply have to tell Vim that it should reindent every single line in the file, from the first line to the last. This can be done with the following Vim command:

1G=G

If we split it up, it simply says:

1G: Go to the first line of the file (alternatively, you can use gg)
=: Equalize lines; in other words, indent according to the formatting configuration
G: Go to the last line in the file (tells Vim where to end indenting)

You could easily map this command to a key in order to make it easily accessible:

:nmap <F11> 1G=G
:imap <F11> <ESC>1G=Ga

The last a is to get back into the insert mode, as this was where we originally were. So, now you can just press the F11 key in order to reindent the entire buffer correctly. Note that if you have a programmatic error, for example, a missing semicolon at the end of a line in a C program, the file will not be correctly indented from that point on in the buffer. This can sometimes be useful to identify where a scope is not closed correctly (for example, a { not closed with a }).

Sometimes, you might just want to format smaller blocks of code. In those cases, you typically have two options: use the natural scope blocks in the code, or select a block of code in the visual mode and indent it. The last one is simple: go into the visual mode with, for example, Shift+v, and then press = to reindent the lines.

When it comes to using code blocks, on the other hand, there are several different ways to do it. In Vim, there are multiple ways to select a block of code. So, in order to combine a command that indents a code block, we need to look at the different types and the commands to select them:

i{: Inner block, which means everything between { and } excluding the brackets. This can also be selected with i} and iB.
a{: A block, which means all the code between { and } including the brackets. This can also be selected with a} and aB.
i(: Inner parentheses, meaning everything between ( and ) excluding the parentheses.
This can also be selected with i) and ib.
a(: A parentheses block, meaning everything between ( and ) including the parentheses. Can also be selected with a) and ab.
i<: Inner < > block, meaning everything between < and > excluding the brackets. Can also be selected with i>.
a<: A < > block, meaning everything between < and > including the brackets. Can also be selected with a>.
i[: Inner [ ] block, meaning everything between [ and ] excluding the square brackets. Can also be selected with i].
a[: A [ ] block, meaning everything between [ and ] including the square brackets. This can also be selected with a].

So, we have defined what Vim sees as a block of code; now, we simply have to tell it what to do with the block. In our case, we want to reindent the code. We already know that = can do this. So, an example of a code block reindentation could look like this:

=i{

Let's execute the code block reindentation in the following code (| being the place where the cursor is):

if( a == b )
    {
print |"a equals b";
    }

This would produce the following code (with default C format settings):

if( a == b )
    {
    print |"a equals b";
    }

If, on the other hand, we choose to use a{ as the block we are working on, then the resulting code would look like this:

if( a == b )
{
    print "a equals b";
}

As you can see in the last piece of code, the =a{ command corrects the indentation of both the brackets and the print line. In some cases, where you work in a code block with multiple levels of code blocks, you might want to reindent the current block and maybe the surrounding one. No worries; Vim has a fast way to do this. If, for instance, you want to reindent the current code block and, besides that, want to reindent the block that surrounds it, you simply have to execute the following command while the cursor is placed in the innermost block:

=2i{

This simply tells Vim that you will equalize / reindent two levels of inner blocks, counting from the "active" block and out.
You can replace the number 2 with any number of levels of code blocks you want to reindent. Of course, you can also swap the inner block command with any of the other block commands, and that way select exactly what you want to reindent. So, this is really all it takes to get your code to indent according to the setup you have.

Auto format pasted code

The trend among programmers tells us that we tend to reuse parts of our code, or so-called patterns. This could mean that you have to do a lot of copying and pasting of code. Most users of Vim have experienced what is often referred to as the stair effect when pasting code into a file. This effect occurs when Vim tries to indent the code as it inserts it. This often results in each new line being indented to another level, and you ending up with a stair:

code line 1
    code line 2
        code line 3
            code line 4
...

The normal workaround for this is to go into the paste mode in Vim, which is done by using:

:set paste

After pasting your code, you can now go back to your normal insert mode again:

:set nopaste

But what if there was another workaround? What if Vim could automatically indent the pasted code such that it is indented according to the rest of the code in the file? Vim can do that for you with a simple paste command:

p=`]

This command simply combines the normal paste command (p) with a command that indents the previously inserted lines (=`]). It actually relies on the fact that when you paste with p (lowercase), the cursor stays on the first character of the pasted text. This is combined with `], which takes you to the last character of the latest inserted text and gives you a motion across the pasted text, from the first line to the last. So, all you have to do now is map this command to a key and then use this key whenever you paste a piece of code into your file.

Using external formatting tools

Even though experienced Vim users often say that Vim can do everything, this is of course not the truth, but it is close.
For those things that Vim can't do, it is smart enough to be able to use external tools. In the following sections, we will take a look at some of the most used external tools that can be used for formatting your code, and how to use them.
Packt
27 Aug 2010
8 min read

Configuring and Deploying the EJB 3.0 Entity in WebLogic Server

(For more resources on Oracle, see here.)

Creating a Persistence Configuration file

An EJB 3.0 entity bean is required to have a persistence.xml configuration file, which defines the database persistence properties. A persistence.xml file gets added to the META-INF folder when a JPA project is defined. Copy the following listing to the persistence.xml file in Eclipse:

<?xml version="1.0" encoding="UTF-8" ?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
    http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
    version="1.0">
  <persistence-unit name="em">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/MySQLDS</jta-data-source>
    <class>ejb3.Catalog</class>
    <properties>
      <property name="eclipselink.target-server" value="WebLogic_10" />
      <property name="javax.persistence.jtaDataSource" value="jdbc/MySQLDS" />
      <property name="eclipselink.ddl-generation" value="create-tables" />
      <property name="eclipselink.target-database" value="MySQL" />
    </properties>
  </persistence-unit>
</persistence>

The persistence-unit is required to be named and may be given any name. We had configured a JDBC data source with the JNDI name jdbc/MySQLDS in WebLogic Server. Specify this JNDI name in the jta-data-source element. The properties element specifies vendor-specific properties. The eclipselink.ddl-generation property is set to create-tables, which implies that the required database tables will be created unless they already exist. The persistence.xml configuration file is shown in the Eclipse project in the following illustration:

Creating a session bean

For better performance, one of the best practices in developing EJBs is to access entity beans from session beans. Wrapping an entity bean with a session bean reduces the number of remote method calls, as a session bean may invoke an entity bean locally.
If a client accesses an entity bean directly, each method invocation is a remote method call and incurs an overhead of additional network resources. We shall use a stateless session bean, which consumes fewer resources than a stateful session bean, to invoke the entity bean methods. In this section, we create a session bean in Eclipse. A stateless session bean class is just a Java class annotated with the @Stateless annotation. Therefore, we create Java classes for the session bean and the session bean remote interface in Eclipse.

To create a Java class, select File | New. In the New window, select Java | Class and click on Next>. In the New Java Class window, select the Source folder as EJB3JPA/src, EJB3JPA being the project name. Specify Class Name as CatalogTestBean and click on Finish. Similarly, create a CatalogTestBeanRemote interface by selecting Java | Interface in the New window. The session bean class and the remote interface get added to the EJB3JPA project.

The session bean class

The stateless session bean class, CatalogTestBean, implements the CatalogTestBeanRemote interface. We shall use the EntityManager API to create, find, query, and remove entity instances. Inject an EntityManager using the @PersistenceContext annotation. Specify unitName as the same as the persistence-unit name in the persistence.xml configuration file:

@PersistenceContext(unitName = "em")
EntityManager em;

Next, create a test() method, which we shall invoke from a test client. In the test() method, we shall create and persist entity instances, query an entity instance, and delete an entity instance, all using an EntityManager object, which we had injected earlier in the session bean class. Injecting an EntityManager implies that an instance of EntityManager is made available to the session bean.
Create an instance of the entity bean class:

Catalog catalog = new Catalog(new Integer(1), "Oracle Magazine",
  "Oracle Publishing", "September-October 2009",
  "Put Your Arrays in a Bind", "Mark Williams");

Persist the entity instance to the database using the persist() method:

em.persist(catalog);

Similarly, persist two more entity instances. Next, create a query using the createQuery() method of the EntityManager object. The query string may be specified as an EJB-QL query. Unlike HQL, the SELECT clause is not optional in EJB-QL. Execute the query and return the query result as a List using the getResultList() method. As an example, select the catalog entries corresponding to the author David Baum. The FROM clause of a query is directed towards the mapped entity bean class, not the underlying database:

List catalogEntry = em.createQuery(
  "SELECT c from Catalog c where c.author=:name").setParameter("name",
  "David Baum").getResultList();

Iterate over the result list to output the properties of the entity instances:

for (Iterator iter = catalogEntry.iterator(); iter.hasNext(); ) {
  Catalog element = (Catalog)iter.next();
  retValue = retValue + "<br/>" + element.getJournal() + "<br/>" +
    element.getPublisher() + "<br/>" + element.getDate() + "<br/>" +
    element.getTitle() + "<br/>" + element.getAuthor() + "<br/>";
}

The variable retValue is a String that is returned by the test() method. Similarly, create and run an EJB-QL query to return all titles in the Catalog database:

List allTitles = em.createQuery("SELECT c from Catalog c").getResultList();

An entity instance may be removed using the remove() method:

em.remove(catalog2);

The corresponding database row gets deleted from the Catalog table. Subsequently, create and run a query to list all the entity instances mapped to the database.
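For readers who want to sanity-check the expected result sets, the effect of the parameterized query and of the remove() call can be sketched outside the container in plain Python (an illustrative sketch only; the dictionaries below mirror the example data, not the JPA API):

```python
# The three Catalog rows persisted in test(), reduced to the queried fields.
catalog = [
    {"id": 1, "title": "Put Your Arrays in a Bind", "author": "Mark Williams"},
    {"id": 2, "title": "Oracle Fusion Middleware 11g: The Foundation for Innovation",
     "author": "David Baum"},
    {"id": 3, "title": "Integrating Information", "author": "David Baum"},
]

def by_author(entries, name):
    # SELECT c from Catalog c where c.author = :name
    return [e for e in entries if e["author"] == name]

def remove_entry(entries, entry_id):
    # em.remove(catalog2): the row no longer appears in subsequent queries
    return [e for e in entries if e["id"] != entry_id]

print([e["id"] for e in by_author(catalog, "David Baum")])  # [2, 3]
print([e["id"] for e in remove_entry(catalog, 2)])          # [1, 3]
```

So the first query should list the two David Baum entries, and the final query, run after the remove() call, should list only entries 1 and 3.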
The session bean class, CatalogTestBean, is listed next:

package ejb3;

import java.util.Iterator;
import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

/**
 * Session Bean implementation class CatalogTestBean
 */
@Stateless(mappedName = "EJB3-SessionEJB")
public class CatalogTestBean implements CatalogTestBeanRemote {

  @PersistenceContext(unitName = "em")
  EntityManager em;

  /**
   * Default constructor.
   */
  public CatalogTestBean() {
  }

  public String test() {
    Catalog catalog = new Catalog(new Integer(1), "Oracle Magazine",
      "Oracle Publishing", "September-October 2009",
      "Put Your Arrays in a Bind", "Mark Williams");
    em.persist(catalog);
    Catalog catalog2 = new Catalog(new Integer(2), "Oracle Magazine",
      "Oracle Publishing", "September-October 2009",
      "Oracle Fusion Middleware 11g: The Foundation for Innovation",
      "David Baum");
    em.persist(catalog2);
    Catalog catalog3 = new Catalog(new Integer(3), "Oracle Magazine",
      "Oracle Publishing", "September-October 2009",
      "Integrating Information", "David Baum");
    em.persist(catalog3);

    String retValue = "<b>Catalog Entries: </b>";
    List catalogEntry = em.createQuery(
      "SELECT c from Catalog c where c.author=:name").setParameter("name",
      "David Baum").getResultList();
    for (Iterator iter = catalogEntry.iterator(); iter.hasNext(); ) {
      Catalog element = (Catalog)iter.next();
      retValue = retValue + "<br/>" + element.getJournal() + "<br/>" +
        element.getPublisher() + "<br/>" + element.getDate() + "<br/>" +
        element.getTitle() + "<br/>" + element.getAuthor() + "<br/>";
    }

    retValue = retValue + "<b>All Titles: </b>";
    List allTitles = em.createQuery("SELECT c from Catalog c").getResultList();
    for (Iterator iter = allTitles.iterator(); iter.hasNext(); ) {
      Catalog element = (Catalog)iter.next();
      retValue = retValue + "<br/>" + element.getTitle() + "<br/>";
    }

    em.remove(catalog2);
    retValue = retValue + "<b>All Entries after removing an entry: </b>";
    List allCatalogEntries = em.createQuery("SELECT c from Catalog c").getResultList();
    for (Iterator iter = allCatalogEntries.iterator(); iter.hasNext(); ) {
      Catalog element = (Catalog)iter.next();
      retValue = retValue + "<br/>" + element + "<br/>";
    }
    return retValue;
  }
}

We also need to add a remote or a local interface for the session bean:

package ejb3;

import javax.ejb.Remote;

@Remote
public interface CatalogTestBeanRemote {
  public String test();
}

The session bean class and the remote interface are shown next:

We shall be packaging the entity bean and the session bean in an EJB JAR file, and packaging the JAR file with a WAR file for the EJB 3.0 client into an EAR file, as shown next:

EAR File
|
|-WAR File
|   |-EJB 3.0 Client
|
|-JAR File
    |-EJB 3.0 Entity Bean
    |-EJB 3.0 Session Bean

Next, we create an application.xml for the EAR file. Create a META-INF folder for the application.xml. Right-click on the EJB3JPA project in Project Explorer and select New | Folder. In the New Folder window, select the EJB3JPA folder and specify the new Folder name as META-INF. Click on Finish. Right-click on the META-INF folder and select New | Other. In the New window, select XML | XML and click on Next. In the New XML File window, select the META-INF folder and specify File name as application.xml. Click on Next. Click on Finish. An application.xml file gets created. Copy the following listing to application.xml:

<?xml version = '1.0' encoding = 'windows-1252'?>
<application>
  <display-name></display-name>
  <module>
    <ejb>ejb3.jar</ejb>
  </module>
  <module>
    <web>
      <web-uri>weblogic.war</web-uri>
      <context-root>weblogic</context-root>
    </web>
  </module>
</application>

The application.xml in the Project Explorer is shown next:
Packt
01 Apr 2011
6 min read

Apache Wicket: displaying data using DataTable

It's hard to find a web application that does not have a single table that presents the user with some data. Building these data tables, although not very difficult, can be a daunting task, because each of these tables must often support paging, sorting, filtering, and so on. Wicket ships with a very powerful component called the DataTable that makes implementing all these features simple and elegant. Because Wicket is component-oriented, once implemented, these features can be easily reused across multiple DataTable deployments. In this article, we will see how to implement the features mentioned previously using the DataTable and the infrastructure it provides.

Sorting

A common requirement, when displaying tabular data, is to allow users to sort it by clicking the table headers. Click a header once and the data is sorted on that column in ascending order; click it again, and the data is sorted in the descending order. In this recipe, we will see how to implement such a behavior when displaying data using a DataTable component. We will build a simple table that will look much like a phone book and will allow the sorting of data on the name and e-mail columns.

Getting ready

Begin by creating a page that will list contacts using the DataTable, but without sorting:

Create the Contact bean:

Contact.java

public class Contact implements Serializable {
  public String name, email, phone;
  // getters, setters, constructors
}
Create the page that will list the contacts:

HomePage.html

<html>
<body>
<table wicket:id="contacts" class="contacts"></table>
</body>
</html>

HomePage.java

public class HomePage extends WebPage {
  private static List<Contact> contacts = Arrays.asList(
    new Contact("Homer Simpson", "homer@fox.com", "555-1211"),
    new Contact("Charles Burns", "cmb@fox.com", "555-5322"),
    new Contact("Ned Flanders", "green@fox.com", "555-9732"));

  public HomePage(final PageParameters parameters) {
    // sample code adds a DataTable and a data provider that uses
    // the contacts list created above
  }
}

How to do it...

Enable sorting by letting DataTable columns know they can be sorted, by using a constructor that takes the sort data parameter:

HomePage.java

List<IColumn<Contact>> columns = new ArrayList<IColumn<Contact>>();
columns.add(new PropertyColumn<Contact>(Model.of("Name"), "name", "name"));
columns.add(new PropertyColumn<Contact>(Model.of("Email"), "email", "email"));
columns.add(new PropertyColumn<Contact>(Model.of("Phone"), "phone"));

Implement sorting by modifying the data provider:

private static class ContactsProvider extends SortableDataProvider<Contact> {

  public ContactsProvider() {
    setSort("name", true);
  }

  public Iterator<? extends Contact> iterator(int first, int count) {
    List<Contact> data = new ArrayList<Contact>(contacts);
    Collections.sort(data, new Comparator<Contact>() {
      public int compare(Contact o1, Contact o2) {
        int dir = getSort().isAscending() ? 1 : -1;
        if ("name".equals(getSort().getProperty())) {
          return dir * (o1.name.compareTo(o2.name));
        } else {
          return dir * (o1.email.compareTo(o2.email));
        }
      }
    });
    return data.subList(first, Math.min(first + count, data.size())).iterator();
  }

  public int size() {
    return contacts.size();
  }

  public IModel<Contact> model(Contact object) {
    return Model.of(object);
  }
}

How it works...

DataTable supports sorting out of the box.
Any column whose IColumn#getSortProperty() method returns a non-null value is treated as a sortable column, and Wicket makes its header clickable. When the header of a sortable column is clicked, Wicket will pass the value of IColumn#getSortProperty to the data provider, which should use this value to sort the data. In order to know about the sorting information, the data provider must implement the ISortableDataProvider interface; Wicket provides the default SortableDataProvider implementation, which is commonly used to implement sort-capable data providers. DataTable will take care of details such as multiple clicks on the same column resulting in a change of sorting direction, and so on.

Let's examine how to implement sorting in practice. In steps 1 and 2, we have implemented a basic DataTable that cannot yet sort data. Even though the data provider we have implemented already extends a SortableDataProvider, it does not yet take advantage of any sort information that may be passed to it. We start building support for sorting by enabling it on the columns, in our case the name and the email columns:

List<IColumn<Contact>> columns = new ArrayList<IColumn<Contact>>();
columns.add(new PropertyColumn<Contact>(Model.of("Name"), "name", "name"));
columns.add(new PropertyColumn<Contact>(Model.of("Email"), "email", "email"));
columns.add(new PropertyColumn<Contact>(Model.of("Phone"), "phone"));

We enable sorting on the columns by using the three-argument constructor of the PropertyColumn, with the second argument being the "sort data". Whenever a DataTable column with sorting enabled is clicked, the data provider will be given the value of the "sort data". In the example, only the name and e-mail columns have sorting enabled, with the sort data defined as strings with the values "name" and "email" respectively. Now, let's implement sorting by making our data provider implementation sort-aware.
Since our data provider already extends a provider that implements ISortableDataProvider, we only need to take advantage of the sort information:

public Iterator<? extends Contact> iterator(int first, int count) {
    List<Contact> data = new ArrayList<Contact>(contacts);
    Collections.sort(data, new Comparator<Contact>() {
        public int compare(Contact o1, Contact o2) {
            int dir = getSort().isAscending() ? 1 : -1;
            if ("name".equals(getSort().getProperty())) {
                return dir * (o1.name.compareTo(o2.name));
            } else {
                return dir * (o1.email.compareTo(o2.email));
            }
        }
    });
    return data.subList(first, Math.min(first + count, data.size())).iterator();
}

First we copy the data into a new list which we can sort as needed, and then we sort based on the sort data and direction provided. The value returned by getSort().getProperty() is the same sort data value we defined previously when creating the columns. The only remaining task is to define a default sort, which will be used when the table is rendered before the user clicks any header of a sortable column. We do this in the constructor of our data provider:

public ContactsProvider() {
    setSort("name", true);
}

There's more...

DataTable gives us a lot out of the box; in this section we see how to add some usability enhancements.

Adding sort direction indicators via CSS

DataTable is nice enough to decorate sortable <th> elements with sort-related CSS classes out of the box. This makes it trivial to implement sort direction indicators as shown in the following screenshot. A possible CSS style definition can look like this:

table tr th {
    background-position: right;
    background-repeat: no-repeat;
}
table tr th.wicket_orderDown {
    background-image: url(images/arrow_down.png);
}
table tr th.wicket_orderUp {
    background-image: url(images/arrow_up.png);
}
table tr th.wicket_orderNone {
    background-image: url(images/arrow_off.png);
}
Packt
06 Oct 2009
4 min read

Social Media in Magento

Integrating Twitter with Magento

Twitter (http://twitter.com) is a micro-blogging service which allows its users to send short messages to their followers, answering the question "what are you doing now?" After registering a Twitter account, you can begin to follow other Twitter users; when they update their status, you will see it in your timeline of what the people you follow say. When you sign up for a Twitter account, it is usually best to sign up as the name of your store—for example, "Cheesy Cheese Store" rather than "RichardCarter"—simply because your customers are more likely to search for the name of the store rather than your own name.

Tweeting: Ideas for your store's tweets

If you look at other businesses on Twitter, you'll see that there are a number of ways to promote your store on Twitter without losing followers by being too "spammy":

- Some companies give voucher codes to Twitter followers—a good way to entice new customers
- Others use Twitter to host competitions for free items—a good way to reward existing customers
- You can also release products to your Twitter followers before releasing them to other customers

Displaying your Twitter updates on your Magento store

Twitter can be a powerful tool for your store on its own, but you can integrate Twitter with your Magento store to drive existing customers to your Twitter account, which can help to generate repeat customers. There are a few ways Twitter can be used with Magento, the most versatile of which is the LazzyMonks Twitter module.

Installing the LazzyMonks Twitter module

To install the LazzyMonks module, visit its page on the Magento Commerce web site (http://www.magentocommerce.com/extension/482/lazzymonks-twitter) and retrieve the extension key after agreeing to the terms and conditions. Log in to your Magento store's administration panel, and open the Magento Connect Manager via the Magento Connect option under the System tab.
Once this has loaded, paste the extension key into the text box next to the Paste extension key to install label, as shown in the following screenshot. This will install the module for you. Return to your Magento store's administration panel, and you will see a Twitter option in the navigation. The View Tweets option allows you to view updates made to your Twitter account, and the Post Update option allows you to update Twitter from your store's administration panel. First, you will need to configure the module's settings, which can be found under the Twitter option of the Configuration section of your store's administration panel, under the System tab. The Twitter Login options are of particular interest: here you will need to enter your Twitter account's username and password. Once this has been saved, you can post a status update to your Twitter account through Magento's administration panel, and it then appears on your Twitter account. Your tweets will also be displayed on your store as a block beneath other content, and can be styled with CSS in your Magento theme by addressing div.twitter.

Other ways to integrate Twitter with Magento

Another way to integrate your Twitter feed with Magento is by embedding Twitter's widgets into your site. To use these, log in to your Twitter account and go to http://twitter.com/widgets. You can then use the HTML provided within the Magento templates to insert your Twitter updates into your store.

Adding your Twitter feed through Magento's CMS

Alternatively, you can insert your Twitter account's updates into any page managed through Magento's Content Management System. In Magento's administration panel, select CMS | Manage Pages, and select the page in which you want your Twitter stream to appear. Within your page, simply paste the code that Twitter produces when you select the type of Twitter "badge" you want to display on your store.
Consider creating a new block for your Twitter statuses, so that it can be removed from pages where it is likely to be distracting (for example, the checkout page).
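As an illustration of styling the div.twitter block mentioned above, here is a minimal sketch for your theme's stylesheet. The div.twitter selector is the one the module uses; every property value below is an invented placeholder, and the inner ul rule assumes the tweets are rendered as a list, which you should verify against your module's actual markup:

```css
/* Style the LazzyMonks Twitter block; adjust values to match your theme. */
div.twitter {
  border: 1px solid #ccc;
  padding: 10px;
  background: #f5faff;
}

/* Assumed markup: tweets rendered as an unordered list inside the block. */
div.twitter ul {
  list-style: none;
  margin: 0;
  padding: 0;
}
```

Because the block is rendered on every page by default, scoping rules like these to div.twitter keeps them from leaking into the rest of your theme.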
Packt
07 Oct 2009
4 min read

Exporting data from MS Access 2003 to MySQL

Introduction

It is assumed that you have a working copy of MySQL which you can use to work through this article. The MySQL version used in this article came with the XAMPP download. XAMPP is an easy-to-install (and easy-to-use) Apache distribution containing MySQL, PHP, and Perl. The distribution used in this article is XAMPP for Windows. You can find documentation here. Here is a screen shot of the XAMPP control panel, where you can turn the services on and off and carry out other administrative tasks.

You need to follow the steps indicated here:

- Create a database in MySQL to which you will export a table from Microsoft Access 2003
- Create an ODBC DSN that helps you connect Microsoft Access to MySQL
- Export the table or tables
- Verify the exported items

Creating a database in MySQL

You can create a database in MySQL by using the 'Create Database' command or a suitable graphical user interface such as MySQL Workbench. You will have to refer to the documentation that works with your version of MySQL; herein the following version was used. The next listing shows how a database named TestMove was created in MySQL, starting from the bin folder of the MySQL program folder. Follow the commands and the responses from the computer. The paths in Listing 1 are appropriate for my computer; you may find them different in your installation directory. The databases you will be seeing will differ from those shown here, except for the ones created by the installation.

Listing 1: Login and create a database

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Jayaram Krishnaswamy>cd\
C:\>cd xampp\mysql\bin
C:\xampp\mysql\bin>mysql -h localhost -u root -p
Enter password: *********
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cdcol              |
| expacc             |
| mengerie           |
| mydb               |
| mysql              |
| phpmyadmin         |
| test               |
| testdemo           |
| webauth            |
+--------------------+
10 rows in set (0.23 sec)

mysql> create database TestMove;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cdcol              |
| expacc             |
| mengerie           |
| mydb               |
| mysql              |
| phpmyadmin         |
| test               |
| testdemo           |
| testmove           |
| webauth            |
+--------------------+
11 rows in set (0.00 sec)

mysql>

The login detail that works error free is shown. Note the preference for localhost as the host name versus either the machine name (in this case Hodentek2) or the IP address. The first 'show databases' command does not display the TestMove database; you can see it in the response to the second 'show databases' command, after it has been created. In Windows, the commands are not case sensitive.

Creating an ODBC DSN to connect to MySQL

When you install from XAMPP, you will also be installing an ODBC driver for the version of MySQL included in the bundle. In the MySQL version used for this article, the driver version is MySQL ODBC 5.1 and the file name is MyODBC5.dll. Click Start | Control Panel | Administrative Tools | Data Sources (ODBC) and open the ODBC Data Source Administrator window as shown. The default tab is User DSN; change to System DSN as shown here. Click the Add... button to open the Create New Data Source window as shown. Scroll down, choose MySQL ODBC 5.1 Driver as the driver, and click Finish. The MySQL Connector/ODBC Data Source Configuration window shows up. You will have to provide a Data Source Name (DSN) and a description. The server is localhost. You must have your username/password information to proceed further. The database is the name of the database you created earlier (TestMove), and this should show up in the drop-down list if the rest of the information is correct. Accept the default port.
If all the information is correct, the Test button gets enabled. Test the connection using the Test button; you should get a response as shown. Click the OK button on the Test Result window, then click OK on the MySQL Connector/ODBC Data Source Configuration window. There are a number of other flags that you can set up using the 'Details' button; the defaults are acceptable for this article. You have successfully created a System DSN 'AccMySQL', as shown in the next window. Click OK.

Verify the contents of TestMove

TestMove is a newly created database in MySQL and as such it is empty, as you can verify in the following listing.

Listing 2: Database TestMove is empty

mysql> use testmove;
Database changed
mysql> show tables;
Empty set (0.00 sec)
mysql>
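Any ODBC-aware client can now reach TestMove through the AccMySQL System DSN created above. As a hedged illustration (the helper function below is mine, not part of the article, and the pyodbc usage in the trailing comment assumes that package is installed and MySQL is running), this is how a DSN-based ODBC connection string is typically assembled:

```python
# Assemble an ODBC connection string for the System DSN created above.
# The DSN name "AccMySQL" and database "TestMove" come from the article;
# the user and password values here are placeholders.

def build_odbc_conn_string(dsn, uid, pwd, database):
    """Build a semicolon-separated key=value ODBC connection string."""
    parts = [("DSN", dsn), ("UID", uid), ("PWD", pwd), ("DATABASE", database)]
    return ";".join("%s=%s" % (key, value) for key, value in parts)

print(build_odbc_conn_string("AccMySQL", "root", "secret", "TestMove"))
# With the pyodbc package installed, the DSN could then be used as:
# conn = pyodbc.connect("DSN=AccMySQL;UID=root;PWD=secret;DATABASE=TestMove")
```

Because the DSN already stores the driver, server, and port, the client only needs the DSN name plus credentials, which is exactly what makes the System DSN convenient for the Access export that follows.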
Packt
09 Oct 2009
4 min read

Programmatically Creating SSRS Report in Microsoft SQL Server 2008

Introduction

In order to design a MS SQL Server Reporting Services report programmatically, you need to understand what goes into a report. We will start with the simple report shown in the next figure. This tabular report gets its data from the SQL Server database TestNorthwind using the query shown below:

Select EmployeeID, LastName, FirstName, City, Country from Employees

A report is based completely on a Report Definition file, a file in XML format. The file consists of information about the data connection, the data source in which a dataset is defined, and the layout information together with the data bindings to the report. In the following, we will be referring to the Report Server file called RDLGenSimple.rdl. This is a file written in Report Definition Language (RDL) in XML syntax. The next figure shows this file opened as an XML file with the significant nodes collapsed. Note the namespace references. The significant items are the following:

- The XML processing instructions
- The root element of the report (collapsed); contained in the root element are the DataSources and DataSets
- Contained in the body are the ReportItems
- This is followed by the Page, containing the PageHeader and PageFooter items

In order to generate an RDL file of the above type, the XmlTextWriter class will be used in Visual Studio 2008. In some of the hands-on exercises you have seen how to connect to SQL Server programmatically as well as how to retrieve data using the ADO.NET objects. This is precisely what you will be doing in this hands-on exercise.

The XmlTextWriter class

In order to review the properties of the XmlTextWriter, you need to add a reference to the project (or web site) indicating this item. This is carried out by right-clicking the Project (or Website) | Add Reference… and then choosing System.Xml (http://msdn.microsoft.com/en-us/library/system.xml.aspx) in the Add Reference window.
After adding the reference, the Object Browser can be used to look at the details of this class, as shown in the next figure. You can access it from View | Object Browser, or by pressing the F2 key with your VS 2008 IDE open. A formal description of the class can be found at the bottom of the next figure. The XmlTextWriter takes care of all the elements found in the XML DOM model (see, for example, http://www.devarticles.com/c/a/XML/Roaming-through-XMLDOM-An-AJAX-Prerequisite).

Hands-on exercise: Generating a Report Definition Language file using Visual Studio 2008

In this hands-on exercise, you will be generating a server report that will display the report shown in the first figure. The code you will be using is adapted from this article (http://technet.microsoft.com/en-us/library/ms167274.aspx) available at Microsoft TechNet (http://technet.microsoft.com/en-us/sqlserver/default.aspx).

Follow on

In this section, you will create a project and add a reference. You add code to the page that is executed by the button click events. The code is scripted and is not generated by any tool.

Create project and add reference

You will create a Visual Studio 2008 Windows Forms Application and add controls to create a simple user interface for testing the code. Create a Windows Forms Application project in Visual Studio 2008 from File | New | Project… by providing a name; herein, it is called RDLGen2. Drag and drop two labels, three buttons, and two text boxes onto the form as shown. When the Test Connection button (Button1 in the code) is clicked, a connection to the TestNorthwind database is made by the code in the procedure Connection(). If there are any errors, they will show up in the label at the bottom. When the Get list of Fields button (Button2 in the code) is clicked, the query is run against the database and the retrieved field list is shown in the adjoining textbox.
The Generate a RDL file button (Button3 in the code) creates a report file at the location indicated in the code.
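Since an RDL file is plain XML, the structure described in the introduction can be sketched with any XML writer. The following is a language-neutral illustration in Python (the article itself uses the .NET XmlTextWriter); the element names follow the top-level RDL nodes discussed above, while the report definition namespace and all attributes are omitted for brevity:

```python
import xml.etree.ElementTree as ET

def build_rdl_skeleton():
    """Emit the bare top-level structure of a Report Definition file."""
    report = ET.Element("Report")           # root element
    ET.SubElement(report, "DataSources")    # data connection information
    ET.SubElement(report, "DataSets")       # query and field definitions
    body = ET.SubElement(report, "Body")
    ET.SubElement(body, "ReportItems")      # layout and data bindings
    page = ET.SubElement(report, "Page")
    ET.SubElement(page, "PageHeader")
    ET.SubElement(page, "PageFooter")
    return ET.tostring(report, encoding="unicode")

print(build_rdl_skeleton())
```

Walking this skeleton element by element is essentially what the hands-on XmlTextWriter code does, one WriteStartElement/WriteEndElement pair at a time.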
Packt
09 Mar 2011
9 min read

Getting Started with Inkscape

Inkscape 0.48 Essentials for Web Designers

Use the fascinating Inkscape graphics editor to create attractive layout designs, images, and icons for your website.

Vector graphics

Vector graphics are made up of paths. Each path is basically a line with a start and end point, curves, angles, and points that are calculated with a mathematical equation. These paths are not limited to being straight—they can be of any shape and size, and can encompass any number of curves. When you combine them, they create drawings and diagrams, and can even help create certain fonts. These characteristics make vector graphics very different from JPEGs, GIFs, or BMP images—all of which are rasterized or bitmap images made up of tiny squares called pixels or bits. If you magnify these images, you will see they are made up of a grid (a bitmap), and if you keep magnifying them, they will become blurry and grainy as each pixel's square grows larger with the zoom level. Computer monitors also use pixels in a grid. However, they use millions of them, so that when you look at a display your eyes see a picture. In high-resolution monitors, the pixels are smaller and closer together to give a crisper image.

How does this all relate to vector-based graphics? Vector-based graphics aren't made up of squares. Since they are based on paths, you can make them larger (by scaling) and the image quality stays the same: lines and edges stay clean, and the same images can be used on items as small as letterheads or business cards, or blown up for billboards or high-definition animation sequences. This flexibility, often accompanied by smaller file sizes, makes vector graphics ideal—especially in the world of the Internet, with its varying computer displays and hosting services for web spaces, which leads us nicely to Inkscape, a tool that can be invaluable for web design.

What is Inkscape and how can it be used?
Inkscape is a free, open source program developed by a group of volunteers under the GNU General Public License (GPL). You not only get a free download, but can use the program to create items, freely distribute them, modify the program itself, and share that modified program with others. Inkscape uses Scalable Vector Graphics (SVG), a vector-based drawing language built on some basic principles:

- A drawing can (and should) be scalable to any size without losing detail
- A drawing can use an unlimited number of smaller drawings, used (and reused) in any number of ways, and still be part of a larger whole

SVG and World Wide Web Consortium (W3C) web standards are built into Inkscape, which gives it a number of features, including a rich XML (eXtensible Markup Language) format with complete descriptions and animations. Inkscape drawings can be reused in other SVG-compliant drawing programs and can adapt to different presentation methods. SVG has support across most web browsers (Firefox, Chrome, Opera, Safari, Internet Explorer). When you draw your objects (rectangles, circles, and so on), arbitrary paths, and text in Inkscape, you also give them attributes such as color, gradient, or patterned fills. Inkscape automatically creates the web code (XML) for each of these objects and tags your images with this code. If need be, the graphics can then be transformed, cloned, and grouped in the code itself. Hyperlinks can even be added for use in web browsers, along with multi-lingual scripting (which isn't available in most commercial vector-based programs) and more—all within Inkscape or in a native programming language. This makes your vector graphics more versatile in the web space than a standard JPG or GIF graphic. There are still some limitations in the Inkscape program, even though it aims to be fully SVG compliant. For example, as of version 0.48 it still does not support animation or SVG fonts—though there are plans to add these capabilities in future versions.
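To make the XML tagging concrete, here is a small hand-written example of the kind of SVG markup involved. This is illustrative only, not Inkscape's exact output; Inkscape adds its own namespaces, metadata, and editor-specific attributes on top of a structure like this:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100" viewBox="0 0 100 100">
  <!-- One path-based object: a circle defined by center and radius.
       Because it is described geometrically, it scales to any size
       without losing detail. -->
  <circle cx="50" cy="50" r="40" fill="#3366cc" stroke="black" stroke-width="2" />
</svg>
```

Every shape you draw on the Inkscape canvas becomes an element like the circle above, and attributes such as fill and stroke are exactly the object properties you set through the UI.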
Installing Inkscape

Inkscape is available for download for the Windows, Macintosh, Linux, or Solaris operating systems. On Mac OS X, it typically runs under X11—an implementation of the X Window System software that makes it possible to run X11-based applications in Mac OS X. The X11 application has shipped with Mac OS X since version 10.5. When you open Inkscape on a Mac, it will first open X11 and run Inkscape within that program. Some shortcut key options are lost, but all functionality is present using menus and toolbars. Let's briefly go over how to download and install Inkscape:

1. Go to the official Inkscape website at http://www.inkscape.org/ and download the appropriate version of the software for your computer.
2. For the Mac OS X Leopard software, you will also need to download an additional application: the X11 application package 2.4.0 or greater from this website: http://xquartz.macosforge.org/trac/wiki/X112.4.0. Once downloaded, double-click the X11-2.4.0.DMG package first. It will open another folder with the X11 application installer. Double-click that icon to be prompted through an installation wizard.
3. Double-click the downloaded Inkscape installation package to start the installation. For the Mac OS, a DMG file is downloaded; double-click on it and then drag and drop the Inkscape package to the Application Folder. For any Windows device, an .EXE file is downloaded; double-click that file to start and complete the installation. For Linux-based computers, there are a number of distributions available; be sure to download and install the correct installation package for your system.
4. Now find the Inkscape icon in the Application or Programs folders to open the program. Double-click the Inkscape icon and the program will automatically open to the main screen.

The basics of the software

When you open Inkscape for the first time, you'll see the main screen with a new blank document opened, ready to go.
If you are using a Macintosh computer, Inkscape opens within the X11 application and may take slightly longer to load. The Inkscape interface is based on the GNOME UI standard, which uses visual cues and feedback for the icons. For example:

- Hovering your mouse over any icon displays a pop-up description of the icon.
- If an icon has a dark gray border, it is active and can be used.
- If an icon is grayed out, it is not currently available to use with the current selection.
- All icons that are in execution mode (or busy) are covered by a dark shadow. This signifies that the application is busy and won't respond to any edit request.

There is a Notification Display on the main screen that shows dynamic help messages, key shortcuts, and basic information on how to use the Inkscape software in its current state or based on which objects and tools are selected.

Main screen basics

Within the main screen there are the main menu; the command, snap, and status bars; the tool controls; and a palette bar.

Main menu

You will use the main menu bar the most when working on your projects. This is the central location to find every tool and menu item in the program—even those found in the visual toolbars below it on the screen. When you select a main menu item, the Inkscape dialog displays the icon, a text description, and the shortcut key combination for the feature. This can be helpful while first learning the program, as it provides you with easier and often faster ways to reach your most commonly used functions.

Toolbars

Let's take a general tour of the toolbars seen on this main screen, paying close attention to the tools we'll use most frequently. If you don't like the location of any of the toolbars, you can make them floating windows on your screen. This lets you move them from their pre-defined locations to a location of your liking. To move any of the toolbars from their docking point on the left side, click and drag them out of the window.
When you click the upper-left button to close the toolbar window, it will be relocated back into the screen.

Command bar

This toolbar represents the most common and frequently used commands in Inkscape. As seen in the previous screenshot, you can create a new document, open an existing one, save, print, cut, paste, zoom, add text, and much more. Hover your mouse over each icon for details on its function. By default, when you open Inkscape, this toolbar is on the right side of the main screen.

Snap bar

Also found vertically on the right side of the main screen, this toolbar is designed to help with the Snap to features of Inkscape. It lets you easily align items (snap to guides), force objects to align to paths (snap to paths), or snap to bounding boxes and edges.

Tool controls

This toolbar's options change depending on which tool you have selected in the toolbox (described in the next section). When you are creating objects, it provides all the detailed options—size, position, angles, and attributes—specific to the tool you are currently using. By default, it looks like the following screenshot. You have options to select/deselect objects within a layer, rotate or mirror objects, adjust object locations on the canvas, scale objects, and much more. Use it to define object properties when they are selected on the canvas.

Toolbox bar

You'll use the toolbox frequently. It contains all of the main tools for creating, selecting, modifying, and drawing objects. To select a tool, click its icon. If you double-click a tool, you can see that tool's preferences (and change them). If you are new to Inkscape, there are a couple of hints about creating and editing text: the Text tool (A icon) in the toolbox shown above is the only way of creating new text on the canvas, while the T icon shown in the Command bar is used only while editing text that already exists on the canvas.
Packt
21 Oct 2009
10 min read

SELinux - Highly Secured Web Hosting for Python-based Web Applications

When contemplating the security of a web application, there are several attack vectors that you must consider. An outsider may attack the operating system by planting a remote exploit, exercising insecure operating system settings, or brandishing some other method of privilege escalation. Or, the outsider may attack other sites contained in the same server without escalating privileges. (Note that this particular discussion does not touch upon the conditions under which an attack steals data from a single site. Instead, I'm focusing on the ability to attack different applications on the same server.) With hosts providing space for large numbers of PHP-based sites, security can be difficult as the httpd daemon traditionally runs under the same Unix user for all sites.

In order to prevent these kinds of attacks from occurring, you need to concentrate on two areas:

- Preventing the site from reading or modifying the data of another site, and
- Preventing the site from escalating privileges to tamper with the operating system and bypass user-based restrictions.

There are two toolboxes you use to accomplish this. In the first case, you need to find a way to run all of your sites under different Linux users. This allows the traditional Linux filesystem security model to provide protection against a hacked site attacking other sites on the same server. In the second case, you need to find a way to prevent a privilege escalation to begin with and, barring that, prevent damage to the operating system should an escalation occur.

Let's first take a look at a method to run different sites under different users. The Python web framework provides several versatile methods by which applications can run. There are three common methods: first, using Python's built-in HTTP server; second, running the script as a CGI application; and third, using mod_python under Apache (similar to what mod_perl and mod_php do).
These methods have various disadvantages: respectively, a lack of scalability, performance issues due to CGI application loading, and the aforementioned "all sites under one user" problem. To provide a scalable, secure, high-performance framework, you can turn to a relatively new delivery method: mod_wsgi. This Apache module, created by Graham Dumpleton, provides several methods by which you can run Python applications. In this case, we'll be focusing on the "daemon" mode of mod_wsgi.

Much like mod_python, the daemon mode of mod_wsgi embeds a Python interpreter (and the requisite script) into an httpd instance. Much like with mod_python, you can configure sites based on mod_wsgi to appear at various locations in the virtual directory tree and under different virtual servers. You can also configure the number and behavior of child daemons on a per-site basis. However, there is one important difference: with mod_wsgi, you can configure each httpd instance to run as a different Linux user. During operation, the main httpd instance dispatches requests to the already-running mod_wsgi children, producing performance results that rival mod_python. But most importantly, since each httpd instance is running under a different Linux user, you can apply Linux security mechanisms to different sites running on one server.

Once you have your sites running on a per-user basis, you should next turn your attention to preventing privilege escalation and protecting the operating system. By default, the Targeted mode of SELinux provided by RedHat Enterprise Linux 5 (and its free cousins such as CentOS) provides strong protection against intrusions from httpd-based applications. Because of this, you will need to configure SELinux to allow access to resources such as databases and files that reside outside of the normal httpd directories. To illustrate these concepts, I'll guide you as you install a Trac instance under mod_wsgi. The platform is CentOS 5.
As a side note, it's highly recommended that you perform the installation and SELinux debugging in a XEN instance so that your environment only contains the software that is needed. The sidebar explains how to easily install the environment that was originally used to perform this exercise, and I will assume that is your primary environment. There are a few steps that require the use of a C compiler – namely, the installation of Trac – and I'll guide you through migrating these packages to your XEN-based test environment.

Installing Trac

In this example, you'll use a standard installation of Trac. Following the instructions provided in the URL in the Resources section, begin by installing Trac 0.10.4 with ClearSilver 0.10.5 and SilverCity 0.9.7. (Note that with many Python web applications such as Trac and Django, "installing" the application means that you're actually installing the libraries necessary for Python to run the application. You'll need to run a script to create the actual site.) Next, create a PostgreSQL user and database on a different machine. If you are using XEN for your development machine, you can use a PostgreSQL database running in your main DOM0 instance; all we are concerned with is that the PostgreSQL instance is accessed on a different machine over the network. (Note that MySQL will also work in this example, but SQLite will not. In this case, we need a database engine that is accessed over the network, not as a disk file.) After that's done, you'll need to create an actual Trac site. Create a directory under /opt, such as /opt/trac. Next, run the trac-admin command and enter the information prompted:

trac-admin /opt/trac initenv

Installing mod_wsgi

You can find mod_wsgi at the source listed in the Resources. After you make sure the httpd-devel package is installed, installing mod_wsgi is as simple as extracting the tarball and issuing the normal ./configure and 'make install' commands.
Running Trac under mod_wsgi

If you look under /opt/trac, you'll notice two directories: one labeled apache, and one with the label of the project that you assigned when you installed this instance of Trac. You'll start by creating an application script in the apache directory. The application script is shown in Listing 1.

Listing 1: /opt/trac/apache/trac.wsgi

#!/usr/bin/python
import sys
sys.stdout = sys.stderr
import os
os.environ['TRAC_ENV'] = '/opt/trac/test_proj'
import trac.web.main
application = trac.web.main.dispatch_request

(Note the 'sys.stdout = sys.stderr' line. This is necessary due to the way WSGI handles communications between the Python script and the httpd instance. If there is any code in the script that prints to STDOUT, such as debug messages, the httpd instance can crash.)

After creating the application script, you'll modify httpd.conf to load the wsgi module and set up the Trac application. After the LoadModule lines, insert a line for mod_wsgi:

LoadModule wsgi_module modules/mod_wsgi.so

Next, go to the bottom of httpd.conf and insert the text in Listing 2. This text configures the wsgi module for one particular site; it can be used under the default httpd configuration as well as under VirtualHost directives.

Listing 2: Excerpt from httpd.conf

WSGIDaemonProcess trac user=trac_user group=trac_user threads=25
WSGIScriptAlias /trac /opt/trac/apache/trac.wsgi
WSGIProcessGroup trac
WSGISocketPrefix run/wsgi

<Directory /opt/trac/apache>
    WSGIApplicationGroup %{GLOBAL}
    Order deny,allow
    Allow from all
</Directory>

Note the WSGIScriptAlias directive. The /trac keyword (first parameter) specifies where in the directory tree the application will exist. With this configuration, if you go to your server's root address, you'll see the default CentOS splash page; if you add /trac after the address, you'll hit your Trac instance. Save the httpd.conf file. Finally, add a Linux user called trac_user.
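For reference, the application object that mod_wsgi looks up in the script (and that trac.web.main.dispatch_request provides) is just a callable following the WSGI convention. Here is a minimal stand-in, useful for verifying the mod_wsgi setup before wiring in Trac; this hello-world script is my illustration, not part of the article's Trac configuration:

```python
def application(environ, start_response):
    # mod_wsgi invokes this callable once per request: 'environ' carries
    # the CGI-style request variables, and 'start_response' is called to
    # emit the status line and response headers.
    body = b"Hello from mod_wsgi\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # The return value is an iterable of byte strings.
    return [body]
```

Pointing WSGIScriptAlias at a file containing this callable should serve the plain-text greeting; once that works, swap in the Trac script from Listing 1.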
It is important that this user should not have login privileges. When the root httpd instance runs and encounters the WSGIDaemonProcess directive noted above, it will fork itself as the user specified in the directive; the fork will then load Python and the indicated script.

Securing Your Site

In this section, I'll focus on the two areas noted in the introduction: user-based security and SELinux. I will touch briefly on the theory of SELinux and explain the nuts and bolts of this particular implementation in more depth. I highly recommend that you read the RedHat Enterprise Linux Deployment Guide for the particulars of how RedHat implements SELinux. As with all activities involving some risk, if you plan to implement these methods, you should retain the services of a qualified security consultant to advise you about your particular situation.

Setting up the user-based security is not difficult. Because the httpd instance containing Python and the Trac instance will run under the Trac user, you can safely set everything under /opt/trac/test_project to read and execute (for directories) for the user and nothing for group/all. By doing this, you will isolate this site from other sites and users on the system.

Now, let's configure SELinux. First, you should verify that your system is running the proper policy and mode. On your development system, you'll be using the Targeted policy in its Permissive mode. If you choose to move your Python applications to a production machine, you would run under the Targeted policy in the Enforcing mode. The Targeted policy is limited to protecting the most popular network services without making the system so complex as to prevent user-level work from being done. It is the only policy that ships with RedHat 5, and by extension, CentOS 5. In Permissive mode, SELinux policy violations are trapped and sent to the audit log, but the behavior is allowed. In Enforcing mode, the violation is trapped and the behavior is not allowed.
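The user-based lockdown described above can be sanity-checked with a short script that walks the Trac environment and flags anything still readable or writable by group or other. This is a sketch (the path and the all-or-nothing policy are assumptions; adapt them to your layout):

```python
import os
import stat

def world_or_group_accessible(root):
    """Return paths under root that grant any permission to group or other."""
    offenders = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            # S_IRWXG / S_IRWXO cover read, write, and execute bits
            # for group and other respectively.
            if mode & (stat.S_IRWXG | stat.S_IRWXO):
                offenders.append(path)
    return offenders

# Example (on the live system): world_or_group_accessible('/opt/trac/test_project')
```

An empty result means only the owning user (trac_user) can touch the site's files, which is the isolation we are after.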
To verify the mode, run the Security Level Configuration tool from the Administration menu. The SELinux tab, shown in Figure 1, allows you to adjust the mode. After you have verified that SELinux is running in Permissive mode, you need to do two things. First, you need to change the type of the files under /opt/trac. Second, you need to allow Trac to connect to the PostgreSQL database that you configured when you installed Trac.

First, you need to tweak the SELinux file types attached to the files in your Trac instance. These file types dictate what processes are allowed to access them. For example, /etc/shadow has a very restrictive 'shadow' type that only allows a few applications to read and write it. By default, SELinux expects web-based applications – indeed, anything using Apache – to reside under /var/www. Files created under this directory have the SELinux type httpd_sys_content_t. When you created the Trac instance under /opt/trac, the files were created as type usr_t. Figure 2 shows the difference between these labels.

To properly label the files under /opt, issue the following commands as root:

cd /opt
chcon -R -t httpd_user_content_t trac/

After the file types are configured, there is one final step: allow Trac to connect to PostgreSQL. In its default state, SELinux disallows outbound network connections for the httpd type. To allow database connections, issue the following command:

setsebool -P httpd_can_network_connect_db=1

In this case, we are using the -P option to make this setting persistent. If you omit this option, then the setting will be reset to its default state upon the next reboot. After the setsebool command has been run, start httpd by issuing the following command:

/sbin/service httpd start

If you visit the URL http://127.0.0.1/trac, you should see the Trac screen such as that in Figure 3.
Posting on Your WordPress Blog

Packt
16 Oct 2009
12 min read
The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (you). If a blog is like an online diary, then every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date and categories. In this article, you will learn how to create a new post and what kind of information you can attach to it.

Adding a simple post

Let's review the process of adding a simple post to your blog. Whenever you want to do maintenance on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) for your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), then your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin).

When you first log into the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it. The very top bar, which I'll refer to as the top menu, is mostly dark grey, and on the left, of course, is the main menu. The top menu and the main menu exist on every page within the WP Admin. The main section on the right contains information for the current page you're on. In this case, we're on the Dashboard. It contains boxes that have a variety of information about your blog, and about WordPress in general.

The quickest way to get to the Add New Post page at any time is to click on the New Post link at the top of the page in the top bar (top menu). This is the Add New Post page. To quickly add a new post to your site, all you have to do is:

1. Type a title into the text field under Add New Post (for example, Making Lasagne).
2. Type the text of your post in the content box. Note that the default view is Visual, but you actually have a choice of the HTML view as well.
3. Click on the Publish button, which is at the far right.
Note that you can choose to save a draft or view a preview of your post. In the following image, the title field, the content box, and the Publish button of the Add New Post page are highlighted: Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll see yourself still on the Edit Post page, but now a message has appeared telling you that your post was published and giving you a link to View post. If you go to the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top).

Common post options

Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post page. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options.

Categories and tags

Categories and tags are two similar types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog.

Categories are primarily used for structural organizing. They can be hierarchical. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in this blog will likely have one to four categories assigned to it. For example, a blog about food might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants.

Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to 30 tags in use. Each post in this blog will likely have three to ten tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, easy.

Let's add a new post to the blog.
This time, we'll give it not only a title and content, but also tags and categories. When adding tags, just type your list of tags into the Tags box on the right, separated by commas. Then click on the Add button. The tags you just typed in will appear below the text field with little xs next to them. You can click on an x to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most popular tags link in this box so that you can easily re-use tags.

Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link. Type your category into the text field and click on the Add button. Your new category will show up in the list, already checked. Look at the following screenshot: If in the future you want to add a category that needs a parent category, select Parent category from the pull-down menu before clicking on the Add button. If you want to manage more details about your categories (move them around, rename them, assign parent categories, or assign descriptive text), you can do this on the Categories page, which we'll see in detail later in this article.

Now fill in your title and content. Click on the Publish button and you're done. When you look at the front page of your site, you'll see your new post on the top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself.

Adding an image to a post

You may often want to have an image show up in your post. WordPress makes this very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly.
Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on Edit underneath your new post. To add an image to a post, first you'll need to have that image on your computer. Before you get ready to upload an image, make sure that your image is optimized for the Web. Huge files will be uploaded slowly and slow down the process of viewing your site. You can re-size and optimize images using software such as GIMP or Photoshop. For the example in this article, I have used a photo of butternut squash soup that I have taken from the website where I got the recipe, and I know it's on the desktop of my computer. Once you have a picture on your computer and know where it is, follow these steps to add the photo to your blog post:

1. Click on the little photo icon, which is next to the word Upload/Insert and below the box for the title.
2. In the box that appears, click on the Select Files button and browse to your image. Then click on Open and watch the uploader bar. When it's done, you'll have a number of fields you can fill in. The only fields that are important right now are Title, Alignment, and Size. Title is a description for the image, Alignment will tell the image whether to have text wrap around it, and Size is the size of the image. As you can see, I've chosen the Right alignment and the Thumbnail size.
3. Now click on Insert into Post. This box will disappear, and your image will show up in the post on the edit page itself.
4. Now click on the Update Post button and go look at the front page of your site again. There's your image!

You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? You can set the pixel dimensions of your uploaded images and other preferences by opening Settings in the main menu and then clicking on Media.
This takes you to the Media Settings page. Here you can specify the size of the uploaded images for:

- Thumbnail
- Medium
- Large

If you change the dimensions on this page and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created already using the old dimensions.

Using the Visual editor versus the HTML editor

WordPress comes with a Visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, which stands for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the HTML editor—particularly useful if you want to add special content or styling. To switch from the rich text editor to the HTML editor, click on the HTML tab next to the Visual tab at the top of the content box. You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text as well as add link code, image code, and so on. You can make changes and swap back and forth between the tabs to see the result. If you want the HTML tab to be your default editor, you can change this on your Profile page. Navigate to Users | Your Profile, and select the Disable the visual editor when writing checkbox.

Drafts, timestamps, and managing posts

There are three additional, simple but common, items I'd like to cover in this section: drafts, timestamps, and managing posts.

Drafts

WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box), instead of the Publish button.
Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you about once a minute. You'll see this in the area just below the content box. The text will say Saving Draft... and then the time of the last draft saved. At this point, after a manual save or an auto-save, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from the Dashboard or from the Edit Posts page.

Timestamps

WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. The default timestamp will always be set to the moment you publish your post. To change it, just find the Publish box and click on the Edit link (next to the calendar icon and Publish immediately), and fields will show up with the current date and time for you to change. Change the details, click on the OK button, and then Publish your post (or save a draft).

Managing posts

If you want to see a list of your posts so that you can easily skim and manage them, just go to the Edit Posts page in the WP Admin by navigating to Posts in the main menu. You'll see a detailed list of your posts, as seen in the next screenshot. There are so many things you can do on this page!
You can:

- Choose a post to edit: click on a post title and you'll go back to the main Edit Post page
- Quick-edit a post: click on the Quick Edit link for any post and new options will appear right in the list, which will let you edit the title, timestamp, categories, tags, and more
- Delete one or more posts: click on the checkboxes next to the posts you want to delete, choose Delete from the Bulk Actions drop-down menu at the bottom, and click on the Apply button
- Bulk edit posts: choose Edit from the Bulk Actions menu at the bottom, click on the Apply button, and you'll be able to assign categories and tags to multiple posts, as well as edit other information about them

You can experiment with the other links and options on this page. Just click on the pull-down menus and links, and see what happens.
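The image sizes set on the Media Settings page act as bounding boxes for the medium and large versions: WordPress scales each upload down proportionally to fit (thumbnails can instead be cropped to exact dimensions). The arithmetic can be sketched as follows; WordPress's actual resizer may differ in rounding details:

```python
def fit_within(width, height, max_w, max_h):
    """Scale (width, height) down proportionally to fit a bounding box.

    Images that already fit inside the box are left at their original size
    (the scale factor is capped at 1.0, so nothing is ever enlarged).
    """
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)
```

For example, a 1600x1200 photo constrained to a 150x150 box comes out at 150x112, preserving the 4:3 aspect ratio rather than distorting the image.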
Optimizing Magento Performance — Using HHVM

Packt
16 May 2014
5 min read
(For more resources related to this topic, see here.)

HipHop Virtual Machine

As we could write a whole book (or two) about HHVM, we will just give the key ideas here. HHVM is a virtual machine that translates any called PHP file into HHVM bytecode, in the same spirit as the Java or .NET virtual machines. HHVM transforms your PHP code into a lower-level language that is much faster to execute. Of course, the transformation time (compiling) does cost a lot of resources; therefore, HHVM is shipped with a cache mechanism similar to APC. This way, the compiled PHP files are stored and reused when the original file is requested. With HHVM, you keep the PHP flexibility and ease of writing, but you now have performance like that of C++. Hear the words of the HHVM team at Facebook:

"HHVM (aka the HipHop Virtual Machine) is a new open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time compilation approach to achieve superior performance while maintaining the flexibility that PHP developers are accustomed to. To date, HHVM (and its predecessor HPHPc) has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP 5.2 engine + APC. HHVM can be run as a standalone webserver (in other words, without the Apache webserver and the "modphp" extension). HHVM can also be used together with a FastCGI-based webserver, and work is in progress to make HHVM work smoothly with Apache."

If you think this is too good to be true, you're right! Indeed, HHVM has a major drawback. HHVM was, and still is, focused on the needs of Facebook. Therefore, you might have a bad time trying to use your custom-made PHP applications with it. Nevertheless, this opportunity to speed up large PHP applications has been seized by talented developers who improve it, day after day, in order to support more and more frameworks.
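The APC-like cache mentioned above follows a simple pattern: compile a source file once, then reuse the result until the file's modification time changes. A toy illustration of that idea (the compile step here is a stand-in, not HHVM's actual pipeline):

```python
import os

class CompileCache:
    """Toy model of a bytecode cache: recompile only when the file changes."""

    def __init__(self, compile_fn):
        self.compile_fn = compile_fn
        self.cache = {}          # path -> (mtime, compiled result)
        self.compilations = 0    # counter, to observe cache hits vs misses

    def get(self, path):
        mtime = os.stat(path).st_mtime
        entry = self.cache.get(path)
        if entry is None or entry[0] != mtime:
            # Cache miss (or stale entry): pay the compilation cost once.
            self.compilations += 1
            with open(path) as f:
                entry = (mtime, self.compile_fn(f.read()))
            self.cache[path] = entry
        return entry[1]
```

Repeated requests for the same unchanged file hit the cache, which is why HHVM's compilation cost is only paid on the first request (or after a deploy), not on every page view.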
As our interest is in Magento, I will introduce you to Daniel Sloof, a developer from the Netherlands. More interestingly, Daniel has done (and still does) amazing work adapting HHVM for Magento. Here are the commands to install Daniel Sloof's version of HHVM for Magento:

$ sudo apt-get install git
$ git clone https://github.com/danslo/hhvm.git
$ sudo chmod +x configure_ubuntu_12.04.sh
$ sudo ./configure_ubuntu_12.04.sh
$ sudo CMAKE_PREFIX_PATH=`pwd`/.. make

If you thought that the first step was long, you will be astonished by the time required to actually build HHVM. Nevertheless, the wait is definitely worth it. The following screenshot shows how your terminal will look for the next hour or so:

Create a file named hhvm.hdf under /etc/hhvm and write the following code inside:

Server {
    Port = 80
    SourceRoot = /var/www/_MAGENTO_HOME_
}
Eval {
    Jit = true
}
Log {
    Level = Error
    UseLogFile = true
    File = /var/log/hhvm/error.log
    Access {
        * {
            File = /var/log/hhvm/access.log
            Format = %h %l %u %t \"%r\" %>s %b
        }
    }
}
VirtualHost {
    * {
        Pattern = .*
        RewriteRules {
            dirindex {
                pattern = ^/(.*)/$
                to = $1/index.php
                qsa = true
            }
        }
    }
}
StaticFile {
    FilesMatch {
        * {
            pattern = .*\.(dll|exe)
            headers {
                * = Content-Disposition: attachment
            }
        }
    }
    Extensions {
        css = text/css
        gif = image/gif
        html = text/html
        jpe = image/jpeg
        jpeg = image/jpeg
        jpg = image/jpeg
        png = image/png
        tif = image/tiff
        tiff = image/tiff
        txt = text/plain
    }
}

Now, run the following command:

$ sudo ./hhvm --mode daemon --config /etc/hhvm.hdf

The hhvm executable is under hhvm/hphp/hhvm. Is all of this worth it?
Here's the response:

ab -n 100 -c 5 http://192.168.0.105/index.php/furniture/living-room.html

Server Software:
Server Hostname: 192.168.0.105
Server Port: 80
Document Path: /index.php/furniture/living-room.html
Document Length: 35552 bytes
Concurrency Level: 5
Time taken for tests: 4.970 seconds
Requests per second: 20.12 [#/sec] (mean)
Time per request: 248.498 [ms] (mean)
Time per request: 49.700 [ms] (mean, across all concurrent requests)
Transfer rate: 707.26 [Kbytes/sec] received

Connection Times (ms)
            min  mean[+/-sd] median  max
Connect:      0    2   12.1      0   89
Processing: 107  243   55.9    243  428
Waiting:    107  242   55.9    242  427
Total:      110  245   56.7    243  428

We literally reach a whole new world here. Indeed, our Magento instance is six times faster than after all our previous optimizations and about 20 times faster than the default Magento served by Apache. The following graph shows the performance:

Our Magento instance is now flying at lightning speed, but what are the drawbacks? Is it still as stable as before? All the optimizations we did so far, are they still effective? Can we go even further? In what follows, we present a non-exhaustive list of answers:

- Fancy extensions and modules may (and will) trigger HHVM incompatibilities.
- Magento is a relatively old piece of software, and combining it with a cutting-edge technology such as HHVM can have some unpredictable (and undesirable) effects.
- HHVM is so complex that fixing a Magento-related bug requires a lot of skill and dedication.
- HHVM takes care of PHP, not of the cache mechanisms or accelerators we installed before. Therefore, APC, memcached, and Varnish are still running and helping to improve our performance.
- If you become addicted to performance, HHVM now supports FastCGI through Nginx and Apache. You can find out more about that at: http://www.hhvm.com/blog/1817/fastercgi-with-hhvm.
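The headline ab numbers are related by simple arithmetic, which is handy to know when comparing runs: requests per second is the request count divided by total time, and the two "Time per request" lines differ by exactly the concurrency level. A quick sketch using this run's figures:

```python
def ab_summary(requests, concurrency, total_seconds):
    """Derive the headline ApacheBench metrics from the raw counts."""
    # Throughput: how many requests completed per wall-clock second.
    rps = requests / total_seconds
    # Mean time per request across all concurrent clients.
    per_request_all_ms = total_seconds / requests * 1000
    # Mean time per request as seen by a single client: concurrency
    # clients share the pipe, so each waits this many times longer.
    per_request_ms = per_request_all_ms * concurrency
    return rps, per_request_ms, per_request_all_ms

# The run above: 100 requests, concurrency 5, 4.970 seconds total.
rps, per_req, per_req_all = ab_summary(100, 5, 4.970)
```

This reproduces the 20.12 req/s, ~248.5 ms, and 49.7 ms figures reported by ab, so when only one of the numbers is quoted the others can be recovered.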
Summary

In this article, we successfully used the HipHop Virtual Machine (HHVM) from Facebook to serve Magento. This improvement optimizes our Magento performance incredibly (20 times faster): the time required initially was 110 seconds, while now it is less than 5 seconds.

Resources for Article:

Further resources on this subject:

- Magento: Exploring Themes [article]
- Getting Started with Magento Development [article]
- Enabling your new theme in Magento [article]
Oracle BPM Suite 11gR1: Creating a BPM Application

Packt
25 Sep 2010
7 min read
Getting Started with Oracle BPM Suite 11gR1 – A Hands-On Tutorial

Learn from the experts – teach yourself Oracle BPM Suite 11g with an accelerated and hands-on learning path brought to you by Oracle BPM Suite Product Management team members:

- Offers an accelerated learning path for the much-anticipated Oracle BPM Suite 11g release
- Sets the stage for your BPM learning experience with a discussion of the evolution of BPM, and a comprehensive overview of the Oracle BPM Suite 11g product architecture
- Discover BPMN 2.0 modeling, simulation, and implementation
- Understand how Oracle uses standards like Service Component Architecture (SCA) to simplify the process of application development
- Describes how Oracle has unified services and events into one platform
- Built around an iterative tutorial, using a real-life scenario to illustrate all the key features
- Full of illustrations, diagrams, and tips for developing SOA applications faster, with clear step-by-step instructions and practical examples
- Written by Oracle BPM Suite Product Management team members

(For more resources on Oracle, see here.)

The BPM Application consists of a set of related business processes and associated shared artifacts such as Process Participants and Organization models, User Interfaces, Services, and Data. The process-related artifacts such as Services and Data are stored in the Business Catalog. The Business Catalog facilitates collaboration between the various stakeholders involved in the development of the business process. It provides a mechanism for the Process Developer (IT) to provide building blocks that can in turn be used by the process analyst in implementing the business process. Start BPM Studio using Start | Programs | Oracle Fusion Middleware 11.1.1.3 | Oracle JDeveloper 11.1.1.3. BPM Studio supports two roles or modes of process development.
The BPM Role is recommended for Process Analysts and provides a business perspective with a focus on business process modeling. The Default Role is recommended for Process Developers for refinement of business process models and generation of implementation artifacts to complete the BPM Application for deployment and execution.

Tutorial: Creating SalesQuote project and modeling RequestQuote process

This is the beginning of the BPM 11gR1 hands-on tutorial. Start by creating the BPM Application and then design the Sales Quote business process.

1. Open BPM Studio by selecting the BPM Process Analyst role when you start JDeveloper or, if JDeveloper is already open, select Preferences from the Tools menu and, in the Roles section, select BPM Process Analyst.
2. Go to File | New to launch the Application wizard.
3. In the New Gallery window, select Applications in the Categories panel. Select BPM Application in the Items panel.
4. Specify the Application Name—SalesQuoteLab; the folder name should also be set to SalesQuoteLab. Click on the Next button.
5. Enter QuoteProcessLab for the Project Name. Click on the Finish button.
6. Go to View | BPM Project Navigator. The BPM Project Navigator opens up the QuoteProcessLab BPM Project that you just created.

A single BPM Project can contain multiple related business processes. Notice that the BPM Project contains several folders. Each folder is used to store a specific type of BPM artifact. The Processes folder stores BPMN business process models; the Activity Guide folder is used to store the process milestones; the Organization folder stores Organization model artifacts such as Roles and Organization Units; the Business Catalog folder stores Services and Data; the Simulation folder stores simulation models to capture what-if scenarios for the business process; and the Resources folder holds XSLT data transformation artifacts. To create a new business process model, you need to right-click on Processes and select New | Process.
This launches the Create BPMN Process wizard. Select the From Pattern option and select the Manual Process pattern. Recall that the Sales Quote process is instantiated when the Enter Quote task gets assigned to the Sales Representative role. The Asynchronous Service and the Synchronous Service patterns are used to expose the BPMN process as a service provider. Click on the Next button. Specify the name for the Process—RequestQuoteLab. Click on the Finish button. This creates a RequestQuoteLab process with a Start Event (thin circle) and End Event (thicker circle) of type None, with a User Task in between. The User Task represents a human step that is managed by the BPM run-time engine's workflow component. The Start Event of type None signifies that there is no external event triggering the process. The first activity after the Start Event creates the process instance. In addition, a default swim lane, Role, gets created. In BPM Studio, the swim lanes in the BPMN process point to logical roles. A logical role represents a process participant (user or group) and needs to be mapped to physical roles (LDAP users/groups) before the process is deployed. Right-click on the User Task step, select Properties, and specify the name Enter Quote Details for the step. Click on the OK button. The next step is to rename the default created role to SalesRep. Navigate to the QuoteProcessLab BPM Project node and select the Organization node underneath it. Right-click on the Organization node and select Open. This opens up the Organization pane. Highlight the default role named Role and use the pencil icon to edit it to be SalesRep. Click on the + sign to add the following roles—Approvers, BusinessPractices, and Contracts. The following screenshot shows the list of roles that you just created: Close the Organization window. Go back to the RequestQuoteLab process model. The participant for the Enter Quote Details User Task is now set to the SalesRep role.
Ignore the yellow triangular symbol with the exclamation for now. It indicates that certain configuration information is missing for the activity. Right-click on the process diagram just below the SalesRep lane. Choose the Add Role option. Choose Business Practices from the list of options available. Click on the OK button. Open View | Component Palette. Drag and drop a User Task from the Interactive Tasks section of the BPMN Component Palette.

Note: An Interactive Task refers to a step that is managed by the workflow engine. The Assignees (Participants) represent the business users who need to carry out the Interactive Task. The associated Task (work to be performed) is shown in the inbox of the assignees (similar to an email inbox) when the Interactive Task is triggered. The User Task is the simplest type of Interactive Task, where the assignee of the task is set to a single role. The actual work is performed only when the Assignee executes the Task. The Task is presented to the Assignee through a browser-based worklist application. In BPM Studio, the Assignee is automatically set to the role associated with the swim lane into which the Interactive Task is dropped.

Place this new User Task after the existing Enter Quote Details User Task by hovering on the center of the connector until it turns blue, and name it Business Practices Review. The connection lines are automatically created. Drag the new Business Practices Review User Task to the Business Practices lane. The performer or assignee for the Business Practices Review User Task is automatically set to the Business Practices role. Create two more lanes for Approvers and Contracts. Drag and drop three User Tasks onto the process diagram, one following the other, and name them Approve Deal, Approve Terms, and Finalize Contracts respectively. Pin the Approve Deal step to the Approvers lane. Pin the other two User Tasks—Approve Terms and Finalize Contracts—to the Contracts lane.
Finally, add a Service Task right after the Finalize Contracts step from the BPM Component Palette and name it Save Quote. The modified diagram should look like the following:
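As a recap of the structure built in this tutorial, each swim lane maps to a role, and each human task is pinned to exactly one lane. A tiny sketch of that model (a hypothetical data structure for illustration, not anything BPM Studio produces; the Save Quote service task is left without a performer since it has no human assignee):

```python
# (task name, performing role, kind) in process order, as assembled above.
PROCESS = [
    ("Enter Quote Details", "SalesRep", "user"),
    ("Business Practices Review", "BusinessPractices", "user"),
    ("Approve Deal", "Approvers", "user"),
    ("Approve Terms", "Contracts", "user"),
    ("Finalize Contracts", "Contracts", "user"),
    ("Save Quote", None, "service"),  # automated step, no worklist entry
]

def tasks_for_role(process, role):
    """List the task names a given role would see in its worklist."""
    return [name for name, r, kind in process if r == role and kind == "user"]
```

For example, once the logical roles are mapped to LDAP users, someone in the Contracts role would see Approve Terms and Finalize Contracts in their worklist, while the Save Quote service task runs without human involvement.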