How-To Tutorials - Web Development

1802 Articles

Using ASP.NET Master Pages in your MCMS Applications

Packt
29 Apr 2010
6 min read
Overview and Benefits of Master Pages

A master page includes common markup and one or more content placeholders. From this master page, new content pages can be created, which include content controls that are linked to the content placeholders in the master page. This provides an ideal way to separate common site branding, navigation, etc., from the actual content pages, and can significantly decrease duplication and markup errors.

Master pages make working with template files a lot easier than before. You can add common markup and shared controls such as headers, footers, and navigation bars to master pages. Once a master page has been built, you can create MCMS template files based upon it. The template files will immediately adopt the look and feel defined in the master template. You can also mark certain regions of the master page to be customizable by introducing content placeholders (note that these are controls designed specifically for master pages and are not to be confused with MCMS placeholder controls). The space marked by content placeholders provides areas where you can add individual template markup as well as MCMS placeholder controls, as shown in the following diagram.

Although at first glance both master pages and MCMS templates offer a way to standardize the look and feel of a site, their similarities end there. Don't be misled into thinking that master pages take over the role of MCMS templates completely. A key difference between the two is that the use of master pages is reserved solely for site developers to ensure that the template files they create have a common look and feel. You can't create a master page and expect authors to use it to create postings. In fact, master pages work alongside template files and offer a number of benefits to MCMS developers:

- Avoids duplication in MCMS template files: Often MCMS templates contain common page layout code (usually an HTML table) along with navigation bars, headers, and footers (usually web user controls). This code has to be copied and pasted into each new template file after it is created, or abstracted into user controls. In addition, a change in the layout of this common code has to be applied to all template files. So, for example, an MCMS application with ten template files will duplicate this markup ten times. By placing this markup within a master page, this duplication can be removed.
- Separation of site-wide markup from template markup: One of the biggest drawbacks of MCMS is that the tasks involved in developing templates cannot be easily separated. It is a common requirement to separate the tasks of defining site branding, layout, and the development of controls such as navigation (performed by webmasters and programmers) from the task of designing template layouts (performed by business users). While master pages and Visual Studio 2005 do not address this completely due to MCMS's inherent architecture, they offer a substantial improvement in this area.
- Avoids issues with MCMS Template File Visual Studio templates: The MCMS Project Item Templates have a number of issues, and do not fully embrace the Visual Studio 2005 project system. Although any web form can be MCMS 'enabled', master pages offer a more seamless development experience with fewer manual tweaks required.
- Visual Studio 2005 Designer support: One of the common problems with using user controls within template files in Visual Studio .NET is that the template design view doesn't provide an adequate experience for template developers. Visual Studio 2005 offers an improved design-view experience, including rendering of user control content, and this is especially valuable when working with master pages.
- Experience of master pages: Just as MCMS is a great way to learn ASP.NET, MCMS SP2 is a great way to learn ASP.NET 2.0! In addition, master pages are a fundamental building block of future Web Content Management offerings from Microsoft.

MCMS placeholder controls in the master page will work, but are not officially supported.

As we will see in this article, master pages provide an ideal way to separate common site branding, navigation, etc., from the actual content pages, and can significantly decrease duplication and markup errors.

The TropicalGreen Web Site

Tropical Green is the fictitious gardening society upon which the article's sample website is based. In the book Building Websites with Microsoft Content Management Server from Packt Publishing (ISBN 1-904811-16-7), we built the Tropical Green website from scratch using ASP.NET 1.x. In this article series, we will attempt to rebuild parts of the website using MCMS SP2 and ASP.NET 2.0. While the code will be rewritten from the ground up, we won't start with a blank database. Instead, we'll take a shortcut and import the TropicalGreen database objects from the TropicalGreen.sdo file available from the support section on Packt Publishing's website (http://www.packtpub.com/support).

Importing the TropicalGreen Site Deployment Object File

Before we begin, let's populate the database by importing objects using the Site Deployment Manager:

1. Download the TropicalGreen.sdo file.
2. Open Site Manager and log in with an MCMS administrator account.
3. From the menu, select File | Package | Import….
4. In the Site Deployment Import dialog, click on the Browse… button.
5. Navigate to the TropicalGreen_Final.sdo file downloaded earlier.
6. In the Container Rules tab, set the following:
   - When Adding Containers: Use package container rights
   - When Replacing Containers: Keep destination container rights
7. In the Rights Group tab, set the following:
   - Select how Rights Groups are imported: Import User Rights Groups
8. Click on Import. The import confirmation dialog appears.
9. Click on Continue.

Creating a New MCMS Web Application

To get started, let's create a new MCMS web application using the project templates we created in the previous article:

1. From Visual Studio, from the File menu, choose New | Web Site.
2. In the New Web Site dialog, select the MCMS SP2 Web Application icon in the My Templates section.
3. Select HTTP in the Location list box.
4. Enter http://localhost/TropicalGreen in the Location textbox, and click on OK.

Our MCMS web application is created and opened in Visual Studio 2005.


IBM Cognos Insight

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.)

An example case for IBM Cognos Insight

Consider an example of a situation where an organization from the retail industry heavily depends on spreadsheets as its source of data collection, analysis, and decision making. These spreadsheets contain data that is used to analyze customers' buying patterns across the various products sold by multiple channels in order to boost sales across the company. The analysis hopes to reveal customers' buying patterns demographically, streamline sales channels, improve supply chain management, give an insight into forecast spending, and redirect budgets to advertising, marketing, and human capital management, as required.

As this analysis is going to involve multiple departments and resources working with spreadsheets, one of the challenges will be to have everyone speak in similar terms and numbers. Collaboration across departments is important for a successful analysis. Typically in such situations, multiple spreadsheets are created across resource pools and segregated either by time, product, or region (due to the technical limitations of spreadsheets), and often the analysis requires the consolidation of these spreadsheets to be able to make an educated decision. After the number-crunching, a consolidated spreadsheet showing high-level summaries is sent out to executives, while the details remain on other tabs within the same spreadsheet or in altogether separate spreadsheet files. This manual procedure has a high probability of errors.

A similar data analysis process in IBM Cognos Insight would result in faster decision making by keeping the details and the summaries in a highly compressed Online Analytical Processing (OLAP) in-memory cube. Using the intuitive drag-and-drop functionality or the smart-metadata import wizard, the spreadsheet data now appears instantaneously (due to the in-memory analysis) in a graphical and pivot table format. Similar categorical data values, such as customer, time, product, sales channel, and retail location, are stored as dimension structures. All the numerical values bearing factual data, such as revenue, product cost, and so on, defined as measures, are stored in the OLAP cube along with the dimensions. Two or more of these dimensions and measures together form a cube view that can be sliced and diced and viewed at a summarized or a detailed level. Within each dimension, elements such as customer name, store location, revenue amount generated, and so on, are created. These can be used in calculations and trend analysis. These dimensions can be pulled out on the analysis canvas as explorer points that can be used for data filtering and sorting. Calculations, business rules, and differentiator metrics can be added to the cube view to enhance the analysis.

After enhancements to the IBM Cognos Insight workspace have been saved, these workspaces or files can be e-mailed and distributed as offline analyses. Also, the users have the option to publish the workspace into the IBM Cognos Business Intelligence web portal, Cognos Connection, or IBM Cognos Express, both of which are targeted to larger audiences, where this information can be shared with broader workgroups. Security layers can be included to protect sensitive data, if required. The publish-and-distribute option within IBM Cognos Insight is used for advanced analytics features and write-back functionality in larger deployments, where users can modify plans online or offline and sync up to the enterprise environment on an as-and-when basis. As an example, an analyst can create what-if scenarios to simulate the introduction of a new promotion price for a set of smart phones during high foot-traffic times to drive up sales, or to simulate an extension of store hours during summer months and analyze the effect on overall store revenue.

The following diagram shows the step-by-step process of dropping a spreadsheet into IBM Cognos Insight and viewing the dashboard and scorecard-style reports instantaneously, which can then be shared on the IBM Cognos BI web portal or published to an IBM TM1 environment. The preceding screenshot demonstrates the steps from raw data in spreadsheets being imported into IBM Cognos Insight to reveal a dashboard-style report instantaneously. Additional calculations to this workspace create scorecard-type graphical variances, thus giving an overall picture through rich graphics.

Using analytics successfully

Over the past few years, there have been huge improvements in the technology and processes of gathering data. Using Business Analytics and applications such as IBM Cognos Insight, we can now analyze and accurately measure anything and everything. This leads to the question: are we using analytics successfully? The following high-level recommendations should be used as guidance for organizations that are either attempting a Business Analytics implementation for the first time or are already involved with Business Analytics, both working towards a successful implementation:

- The first step is to prioritize the targets that will produce intelligent analytics from the available trustworthy data. Choosing this target wisely and thoughtfully has an impact on the success rate of the implementation. Usually, these are high-value targets that need problem solving and/or quick wins to justify the need and/or investment towards a Business Analytics solution. Avoid the areas with a potential for probable budget cuts and/or involving corporate cultural and political battles, which are considered to be the major factors leading to an implementation pitfall. Improve your chances by asking the question: where will we achieve maximum business value?
- Selecting the appropriate product to deliver the technology is the key to success: a product that is suitable for all skill levels and that can be supported by the organization's infrastructure. IBM Cognos Insight is one such product where the learning curve is minimal, thanks to its ease of use and vast features. The analysis produced by using IBM Cognos Insight can then be shared by publishing to an enterprise-level solution such as IBM Cognos BI, IBM Cognos Express, or IBM TM1. This product reduces dependencies on IT departments in terms of personnel and IT resources due to the small learning curve, easy setup, intuitive look and feel, and vast features. The sharing and collaborating capabilities eliminate the need for multiple silos of spreadsheets, one of the reasons why organizations want to move towards a more structured and regulated Enterprise Analytics approach.
- Lastly, organize a governing body such as an Analytics Competency Center (ACC) or Analytics Center of Excellence (ACE) that has the primary responsibility to do the following:
  - Provide the leadership and build the team
  - Plan and manage the Business Analytics vision and strategy (BA Roadmap)
  - Act as a governing body maintaining standardization at the enterprise level
  - Develop, test, and deliver Business Analytics solutions
  - Document all the processes and procedures, both functional and technical
  - Train and support end users of Business Analytics
  - Find ways to increase the Return on Investment (ROI)
  - Integrate Business Analytics into newer technologies such as mobile and cloud computing

The goal of a mature, enterprise-wide analytics solution is that any employee within the organization, be it an analyst, an executive, or a member of the management team, can have their business-related questions answered in real time or near real time. These answers should also make it possible to predict the unknown and prepare better for unforeseen circumstances. With the success of a Business Analytics solution and realized ROI, a question that should be asked is: are the solutions robust and flexible enough to expand regionally/globally? Also, can they sustain a merger or acquisition with minimal consolidation effort? If the Business Analytics solution provides confidence in all of the above, the final question should be: can the Business Analytics solution be provided as a service to the organization's suppliers and customers?

In 2012, a global study was conducted jointly by IBM's Institute of Business Value (IBV) and MIT Sloan Management Review. This study, which included 1700 CEOs globally, reinforced the fact that one of the top objectives within their organizations was sharing and collaboration. IBM Cognos Insight, the desktop analysis application, provides collaborative features that allow users to launch development efforts via IBM's Cognos Business Intelligence, Cognos Express, and Performance Management enterprise platforms.

Let us consider a fictitious company called PointScore. Having completed its marketing, sales, and price strategy analysis, PointScore is now ready to demonstrate its research and analysis efforts to its client. Using IBM Cognos Insight, PointScore has three available options. All of these will leverage the Cognos suite of products that its client has been using and is familiar with. Each of these options can be used to share the information with a larger audience within the organization.

Though technical, this article is written for a non-technical audience as well. IBM Cognos Insight is a product that has its roots embedded in Business Intelligence, and its foundation is built upon Performance Management solutions. This article provides readers with Business Analytics techniques and discusses the technical aspects of the product, describing its features and benefits. The goal of writing this article was to make you feel confident about the product. This article is meant to expand on your creativity so that you can build better analysis and workspaces using Cognos Insight. The article focuses on the strengths of the product, which is to share and collaborate the development efforts into an existing IBM Cognos BI, Cognos Express, or TM1 environment. This sharing is possible because of the tight integration among all the products under the IBM Business Analytics umbrella.

Summary

After reading this article, you should be able to tackle Business Analytics implementations. It will also help you leverage the sharing capability to reach the end goal of spreading the value of Business Analytics throughout your organization.

Resources for Article:

Further resources on this subject:
- How to Set Up IBM Lotus Domino Server [Article]
- Tips and Tricks on IBM FileNet P8 Content Manager [Article]
- Reporting Planning Data in IBM Cognos 8: Publish and BI Integration [Article]


New Modules for Moodle 2

Packt
07 Mar 2011
5 min read
Moodle 2.0 First Look: Discover what's new in Moodle 2.0, how the new features work, and how it will impact you.

(For more resources on Moodle, see here.)

Blogs—before and after

There has always been a blogging option in a standard Moodle install. However, some users have found it unsatisfactory because of the following reasons:

- The blog is attached to the user profile, so you can only have one blog
- There is no way to attach a blog or blog entry to a particular course
- There is no way for other people to comment on your blog

For this reason, alternative blog systems (such as the contributed OU blog module) have become popular, as they give users a wider range of options. The standard blog in Moodle 2.0 has changed, and now:

- A blog entry can optionally be associated with a course
- It is possible to comment on a blog entry
- Blog entries from outside of Moodle can be copied in
- It is now possible to search blog entries

Where's my blog?

Last year when Emma studied on Moodle 1.9, if she wanted to make a blog entry she would click on her name to access her profile and she'd see a blog tab like the one shown in the following screenshot. Alternatively, if her tutor had added the blog menu block, she could click on Add a new entry and create her blog post there. The annoyance was that if she added a new entry in the blog menu of her ICT course, her classmates in her Art course could see that entry (even, confusingly, if the blog menu had a link to entries for just that course).

If we follow Emma into the Beginners' French course in Moodle 2.0, we see that she can access her profile from the navigation block by clicking on My profile and then selecting View Profile. (She can also view her profile by clicking on her username, as she could in Moodle 1.9.) If she then clicks on Blogs, she can view all the entries she made anywhere in Moodle and can also add a new entry.

As before, Emma can also add her entry through the blog menu, so let's take a look at that. Her tutor, Stuart, needs to have added this block to the course.

The Blog Menu block

To add this to a course, a teacher such as Stuart needs to turn on the editing and select Blog menu from the list of available blocks. The Blog menu displays the following links:

- View all entries for this course: Here's where Emma and others can read blog entries specific to that course. This link shows users all the blog posts for the course they are currently in.
- View my entries about this course: Here's where Emma can check the entries she has already made associated with this course. This link shows users their own blog posts for the course they are currently in.
- Add an entry about this course: Here's where Emma can add a blog entry related only to this course. When she does that, she is taken to the editing screen for adding a new blog entry, which she starts as shown in the following screenshot. Just as in Moodle 1.9, she can attach documents, choose to publish publicly or keep the entry to herself, and add tags. The changes come as we scroll down: at the bottom of the screen is a section which associates her entry with the course she is presently in. Once she has saved it, she sees her post appear.
- View all of my entries: Here Emma may see every entry she has made, regardless of which course it was in or whether she made it public or private.
- Add a new entry: Emma can choose to add a new blog entry here (as she could from her profile), which doesn't have to be specific to any particular course. If she sets it to "anyone on this site", then other users can read her blog wherever they are in Moodle.
- Search: At the bottom of the Blog menu block is a search box. This enables users to enter a word or phrase and see if anyone has mentioned it in a blog entry.

The Recent Blog Entries block

As our teacher in the Beginners' French course, Stuart, has enabled the Recent Blog Entries block, there is also a block showing the latest blog entries. Emma's is the most recent entry on the course, so hers appears as a link, along with all other recent course entries.

Course-specific blogs

Just to recap and double-check: if Emma now visits her other course, How to Be Happy, and checks out the View my entries about this course link in the Blog menu, she does not see her French course blog post, but instead sees an entry she has associated with this course. The tutor for this course, Andy, has added the blog tags block.

The blog tags block

This block is not new; however, it's worth pointing out that the tags are NOT course-specific, and so Emma sees the tags she added to the entries in both courses alongside the tags from other users.


Using Phalcon Models, Views, and Controllers

Packt
20 Jan 2014
3 min read
(For more resources related to this topic, see here.)

Creating CRUD scaffolding

CRUD stands for create, read, update, and delete, which are the four basic functions our application should perform with our blog post records. Phalcon web tools will also help us get these built. Click on the Scaffold tab on the web tools page and you will see a page as shown in the following screenshot.

Select posts from the Table name list and volt from the Template engine list, and check Force this time, because we are going to force our new files to overwrite the old model and controller files that we just generated. Click on the Generate button and some magic should happen. Browse to http://localhost/phalconBlog/posts and you will see a page like the following screenshot.

We finally have some functionality we can use. We have no posts, but we can create some. Click on the Create posts link and you will see a page similar to the one we were just at. The form will look nearly the same, but it will have a Create posts heading. Fill out the Title, Body, and Excerpt fields and click on the Save button. The form will post, and you will get a message stating that the post was created successfully. This will take you back to the posts index page. Now you should be able to search for and find the post you just created. If you forgot what you posted, you can click on Search without entering anything in the fields, and you should see a page like the following screenshot.

This is not a very pretty or user-friendly blog application. But it got us started, and that's all we need. The next time we start a Phalcon project, it should only take a few minutes to go through these steps. Now we will look over our generated code, and as we do, modify it to make it more blog-like (a rough sketch of what the generated files contain appears after the resource list below).

Summary

In this article, we worked on the model, view, and controller for the posts in our blog. To do this, we used Phalcon web tools to generate our CRUD scaffolding for us. Then, we modified this generated code so it would do what we need it to do. We can now add posts. We also learned about the Volt template engine.

Resources for Article:

Further resources on this subject:
- Using An Object Oriented Approach for Implementing PHP Classes to Interact with Oracle [Article]
- FuelPHP [Article]
- Installing PHP-Nuke [Article]
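Before opening the generated files, it can help to know roughly what to expect. The following is a condensed, hypothetical sketch (not the exact output of the web tools) of a model and controller for the posts table, following Phalcon's naming conventions and the Title, Body, and Excerpt fields used above; the real generated files are longer and live in separate app/models and app/controllers files.

<?php
use Phalcon\Mvc\Model;
use Phalcon\Mvc\Controller;

// Sketch of a model mapped to the "posts" table.
class Posts extends Model
{
    public $id;
    public $title;
    public $body;
    public $excerpt;
}

// Sketch of the matching controller; the scaffolded version also contains
// new/edit/create/save/delete actions plus the search logic.
class PostsController extends Controller
{
    public function indexAction()
    {
        // Fetch every post and hand the result set to the Volt view.
        $this->view->posts = Posts::find();
    }
}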


Module Development in Joomla

Packt
22 Oct 2009
4 min read
Introduction

Modules in Joomla can be used to fetch and display data almost anywhere on a page in a website. In this article, we will cover the following topics on module development:

- Registering the module in the database
- Getting and setting parameters
- Centralizing data access and output using helper classes
- Selecting display options using layouts
- Displaying the latest reviews
- Displaying a random review

We will assume that we have a table in our database called jos_modules with the following fields: title, ordering, position, published, module, showtitle, and params. We will also assume that we have a website that reviews different restaurants. However, visitors have to go to the component to see the reviews. We will develop a module so that we can pull the content directly from the reviews and display them.

Registering the Module in the Database

As with the component, we will have to register the module in the database so that it can be referenced in the back end and used effectively. Entering a record into the jos_modules table will take care of this. Open your database console and enter the following query:

INSERT INTO jos_modules (title, ordering, position, published, module, showtitle, params)
VALUES ('Restaurant Reviews', 1, 'left', 1, 'mod_reviews', 1, 'style=simple\nitems=3\nrandom=1');

If you're using phpMyAdmin, enter the fields as in the following screen.

If you refresh the front end right after entering the record in jos_modules, you'll notice that the module doesn't appear, even though the published column is set to 1. To fix this, go to Extensions | Module Manager in the back end and click the Restaurant Reviews link. Under Menu Assignment, select All and click Save. In the front end, the left-hand side of your front page should look similar to the following.

Creating and Configuring a Basic Module

Modules are both simple and flexible. You can create a module that simply outputs static text or one that queries remote databases for things like weather reports. Although you can create rather complex modules, they're best suited for displaying data and simple forms. You will not typically use a module for complex record or session management; you can do this through a component or plug-in instead.

To create the module for our reviews, we will have to create a directory mod_reviews under /modules. We will also need to create the mod_reviews.php file inside mod_reviews. To start, we'll create a basic module that displays links to the most recent reviews. In the mod_reviews.php file, add the following code:

<?php
defined('_JEXEC') or die('Restricted access');

$items = $params->get('items', 1);

$db =& JFactory::getDBO();
$query = "SELECT id, name FROM #__reviews WHERE published = '1' ORDER BY review_date DESC";
$db->setQuery( $query, 0, $items );
$rows = $db->loadObjectList();

foreach ($rows as $row) {
    echo '<a href="' . JRoute::_('index.php?option=com_reviews&id=' . $row->id . '&task=view') . '">' . $row->name . '</a><br />';
}
?>

When you save the file and refresh the homepage, your module should look similar to the following.

When the module is loaded, the $params object is pulled into scope and can be used to get and set the parameters. When we added the row into jos_modules, the params column contained three values: one for items (set to 3), one for style (set to simple), and another for random (set to 1). We set $items to the parameter items using the get() member function, defaulting to 1 if no value exists. If desired, you can use the member function set($name, $value) to override or add a parameter for your module.

After getting a database object reference, we write a query to select the id and name from jos_reviews and order reverse chronologically by the review date. We use the second and third parameters of setQuery() to generate a LIMIT clause that is automatically added to the query. This ensures that the correct syntax is used for the database type. Once the query is built, we load all the relevant database rows, go through them, and provide a link to each review.
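The topic list above also mentions centralizing data access in a helper class. As a rough idea of where that leads, and assuming a hypothetical helper.php file inside mod_reviews with a class name of our own choosing, the query logic above might be wrapped like this so that mod_reviews.php and its layouts stay thin:

<?php
// helper.php (hypothetical): centralizes data access for mod_reviews.
defined('_JEXEC') or die('Restricted access');

class ModReviewsHelper
{
    // Returns the most recent published reviews, limited by the module's "items" parameter.
    public static function getLatestReviews(&$params)
    {
        $items = $params->get('items', 1);

        $db =& JFactory::getDBO();
        $query = "SELECT id, name FROM #__reviews WHERE published = '1' ORDER BY review_date DESC";
        $db->setQuery($query, 0, $items);

        return $db->loadObjectList();
    }
}
?>

With something like this in place, mod_reviews.php would only need to include helper.php, call ModReviewsHelper::getLatestReviews($params), and echo the links (or hand the rows to a layout file).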


FAQ on Celtx

Packt
14 Mar 2011
3 min read
Celtx: Open Source Screenwriting Beginner's Guide. Write and market Hollywood-perfect movie scripts the free way!

Q: What is Celtx?

A: Celtx developers describe this software package as "the world's first all-in-one media pre-production system" (http://celtx.com/overview.html). We are told that Celtx:

- Can be used for the complete production process
- Lets you write scripts, storyboard scenes, and sketch setups
- Lets you develop characters, and break down and tag elements
- Lets you schedule productions, plus generate useful reports

Celtx is powerful software, yet simple to use. It can be used to write the various types of scripts already mentioned, including everything independent filmmakers and media creators of all types need. This includes writing, planning, scheduling, and generating reports during the various stages of all sorts of productions. The following screenshot is an example of a Celtx report screen.

Q: What does the acronym Celtx stand for?

A: The name Celtx is an acronym for Crew, Equipment, Location, Talent, and XML.

Q: How far-reaching is the impact of Celtx?

A: The Celtx website says that more than 500,000 media creators in 160 countries use Celtx in 33 different languages. Independent filmmakers, studio professionals, and students in over 1,800 universities and film schools have adopted Celtx for teaching and class work submission. Celtx is supported by the Celtx community of volunteer developers and a Canadian company, Greyfirst Corp. in St. John's, Newfoundland. A major reason Celtx can be an open source program is that it is built on non-proprietary standards, such as HTML and XML (basic web mark-up languages), and uses other open source programs (specifically Mozilla's engine, the same used in the Firefox browser) for basic operations.

Q: What sets Celtx apart from other free screenwriting software that is available?

A: An important aspect of Celtx's power is that it's a client-server application. This means only part of Celtx is in the download installed on your computer. The rest is out there in the cloud (the latest buzz term for servers on the Internet). Cloud computing (using remote servers to do part of the work) allows Celtx to have much more sophisticated features, in formatting and collaboration especially, than is normally found in a relatively small free piece of software. It's rather awesome, actually. Celtx, by the way, has you covered for PC, Mac, all kinds of Linux, and even eeePC netbooks.

Q: Does Celtx qualify as a web application?

A: Celtx is really a web application. We have the advantage of big computers on the web doing stuff for us instead of having to depend on the much more limited resources of our local machine. This also means that improvements in script formats (as final formatting is done out on the web somewhere for you) are yours even if you haven't updated your local software.

Q: Can we write movies with Celtx?

A: With Celtx we can outline and write an entertainment industry standard feature movie script, short film, or animation—all properly formatted and ready to market.

Q: Can we do other audio-visual projects with Celtx?

A: Celtx's integral Audio-Visual editor is perfect for documentaries, commercials, public service spots, video tutorials, slide shows, light shows, or just about any other combination of visual and other content (not just sound).

Q: Is Celtx equipped for audio plays and podcasts?

A: Celtx's Audio Play editor makes writing radio or other audio plays a breeze. It's also perfect for radio commercials or spots, and absolutely more than perfect for podcasts. Podcasts are easy to write, require minimal knowledge to produce, and are a snap to put on the Internet.

Getting Started with the Alfresco Records Management Module

Packt
18 Jan 2011
7 min read
Alfresco 3 Records Management: Comply with regulations and secure your organization's records with Alfresco Records Management.

- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

The Alfresco stack

Alfresco software was designed for the enterprise, and as such, supports a variety of different stack elements. Supported Alfresco stack elements include some of the most widely used operating systems, relational databases, and application servers. The core infrastructure of Alfresco is built on Java. This core provides the flexibility for the server to run on a variety of operating systems, like Microsoft Windows, Linux, Mac OS, and Sun Solaris. The use of Hibernate allows Alfresco to map objects and data from Java into almost any relational database. The databases that the Enterprise version of Alfresco software is certified to work with include Oracle, Microsoft SQL Server, MySQL, PostgreSQL, and DB2. Alfresco also runs on a variety of application servers that include Tomcat, JBoss, WebLogic, and WebSphere. Other relational databases and application servers may work as well, although they have not been explicitly tested and are also not supported. Details of which Alfresco stack elements are supported can be found on the Alfresco website: http://www.alfresco.com/services/subscription/supported-platforms/3-x/.

Depending on the target deployment environment, different elements of the Alfresco stack may be favored over others. The exact configuration details for setting up the various stack element options are not discussed in this book. You can find ample discussion and details on the Alfresco wiki on how to configure, set up, and change the different stack elements. The version-specific installation and setup guides provided by Alfresco also contain very detailed information. The example descriptions and screenshots given in this article are based on the Windows operating system. The details may differ for other operating systems, but you will find that the basic steps are very similar. Additional information on the internals of Alfresco software can be found on the Alfresco wiki at http://wiki.alfresco.com/wiki/Main_Page.

Alfresco software

As a first step to getting Alfresco Records Management up and running, we need to acquire the software. Whether you plan to use the Enterprise or the Community version of Alfresco, you should note that the Records Management module was not available until late 2009. The Records Management module was first certified with the 3.2 release of Alfresco Share. The first Enterprise version of Alfresco that supported Records Management was version 3.2R, which was released in February 2010.

Make sure the software versions are compatible

It is important to note that there was an early version of Records Management that was built for the Alfresco JSF-based Explorer client. That version was not certified for DoD 5015.2 compliance and is no longer supported by Alfresco. In fact, the Alfresco Explorer version of Records Management is not compatible with the Share version of Records Management, and trying to use the two implementations together can result in corrupt data. It is also important to make sure that the version of the Records Management module that you use matches the version of the base Alfresco Share software. For example, trying to use the Enterprise version of Records Management on a Community install of Alfresco will lead to problems, even if the version numbers are the same. The 3.3 Enterprise version of Records Management, as another example, is also not fully compatible with the 3.2R Enterprise version of Alfresco software.

Downloading the Alfresco software

The easiest way to get Alfresco Records Management up and running is by doing a fresh install of the latest available Alfresco software.

Alfresco Community

The Community version of Alfresco is a great place to get started, especially if you are just interested in evaluating whether Alfresco software meets your needs; with no license fees to worry about, there's really nothing to lose in going this route. Since Alfresco Community software is constantly in an "in development" state and is not as rigorously tested, it tends not to be as stable as the Enterprise version. But, in terms of the Records Management module for the 3.2+ releases of the software, the Community implementation is feature-complete. This means that the same Records Management features in the Enterprise version are also found in the Community version. The caveat with using the Community version is that support is only available from the Alfresco community, should you run across a problem. The Enterprise release also includes support from the Alfresco support team and may have bug fixes or patches not yet available for the Community release. Also of note is the fact that there are other repository features beyond those of Records Management, especially in the area of scalability, which are available only with the Enterprise release.

Building from source code

It is possible to get the most recent version of the Alfresco Community software by getting a snapshot copy of the source code from the publicly accessible Alfresco Subversion source code repository. A version of the software can be built from a snapshot of the source code taken from there. But unless you are anxiously waiting for a new Alfresco feature or bug fix and need to get your hands immediately on a build that includes that new code, for most people, building from source is probably not the route to go. Building from source code can be time consuming and error prone. The final software version that you build can often be very buggy or unstable due to code that has been checked in prematurely, or changes that might be in the process of being merged into the Community release but which weren't completely checked in at the time you updated your snapshot of the code base. If you do decide that you'd like to try to build Alfresco software from source code, details on how to get set up to do that can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Alfresco_SVN_Development_Environment.

Download a Community version snapshot build

Builds of snapshots of the Alfresco Community source code are periodically taken and made available for download. Using a pre-built Community version of Alfresco software saves you much hassle and headache by not having to do the build from scratch. While not thoroughly tested, the snapshot Community builds have been tested sufficiently so that they tend to be stable enough to see most of the functionality available for the release, although not everything may be working completely. Links to the most recent Alfresco Community version builds can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Download_Community_Edition.

Alfresco Enterprise

The alternative to using Alfresco open source Community software is the Enterprise version of Alfresco. For most organizations, the fully certified Enterprise version of Alfresco software is the recommended choice. The Enterprise version of Alfresco software has been thoroughly tested and is fully supported. Alfresco customers and partners have access to the most recent Enterprise software from the Alfresco Network site: http://network.alfresco.com/. Trial copies of Alfresco Enterprise software can be downloaded from the Alfresco site: http://www.alfresco.com/try/. Time-limited access to on-demand instances of Alfresco software is also available and is a great way to get a good understanding of how Alfresco software works.


WordPress 3: Building a Widget

Packt
06 May 2011
10 min read
WordPress 3 Plugin Development Essentials: Create your own powerful, interactive plugins to extend and add features to your WordPress site.

(For more resources on WordPress, see here.)

The plan

What exactly do we want this plugin to do? Our widget will display a random bit of text from the database. This type of plugin is frequently used for advertisement rotations or in situations where you want to spruce up a page by rotating content on a periodic basis. Each instance of our plugin will also have a "shelf life" that will determine how frequently its content should be randomized. Let's take a moment to come up with some specifications for this plugin. We want it to do the following:

- Store multiple chunks of content, such as bits of Google Adsense code
- Be able to randomly return one of the chunks
- Set a time limit that defines the "shelf life" of each chunk, after which the "random" chunk will be updated

Widget overview

Even if you are already familiar with widgets, take a moment to look at how they work in the WordPress manager under Appearance | Widgets. You know that they display content on the frontend of your site, usually in a sidebar, and they also have text and control forms that are displayed only when you view them inside the manager. If you put on your thinking cap, this should suggest to you at least two actions: an action that displays the content on the frontend, and an action that displays the form used to update the widget settings inside the manager. There are actually a total of four actions that determine the behavior of a standard widget, and you can think of these functions as a unit because they all live together in a single widget object. In layman's terms, there are four things that any widget can do. In programmatic terms, the WP_Widget object class has four functions that you may implement:

- The constructor: The constructor is the only function that you must implement. When you "construct" your widget, you give it a name, a description, and you define what options it has. Its name is often __construct(), but PHP still accepts the PHP 4 method of naming your constructor function using the name of the class.
- widget(): It displays content to users on the frontend.
- form(): It displays content to manager users on the backend, usually to allow them to update the widget settings.
- update(): It prepares the updated widget settings for database storage. Override this function if you require special form validation.

In order to make your widget actually work, you will need to tell WordPress about it by registering it using the WordPress register_widget() function. If you want a bit more information about the process, have a look at WordPress' documentation. Let's outline this in code so you can see how it works.

Preparation

As always, we're going to go through the same setup steps to get our plugin outlined so we can get it activated and tested as soon as possible:

1. Create a folder inside the wp-content/plugins/ directory. We are naming this plugin "Content Rotator", so create a new folder inside wp-content/plugins/ named content-rotator.
2. Create the index.php file.
3. Add the information head to the index.php file. If you forgot the format, just copy and modify it from the Hello Dolly plugin like we did in the previous article, Anatomy of a WordPress Plugin. We're giving you bigger sections of code than before because hopefully by now you're more comfortable adding and testing them. Here is what our index.php looks like:

<?php
/*------------------------------------------------------------------------------
Plugin Name: Content Rotator
Plugin URI: http://www.tipsfor.us/
Description: Sample plugin for rotating chunks of custom content.
Author: Everett Griffiths
Version: 0.1
Author URI: http://www.tipsfor.us/
------------------------------------------------------------------------------*/

// include() or require() any necessary files here...
include_once('includes/ContentRotatorWidget.php');

// Tie into WordPress Hooks and any functions that should run onload.
add_action('widgets_init', 'ContentRotatorWidget::register_this_widget');

/* EOF */

4. Add the folders for includes and tpls to help keep our files organized.
5. Add a new class file to the includes directory. The file should be named ContentRotatorWidget.php so it matches the include statement in the index.php file. This is a subclass which extends the parent WP_Widget class. We will name this class ContentRotatorWidget, and it should be declared using the extends keyword:

<?php
/**
 * ContentRotatorWidget extends WP_Widget
 *
 * This implements a WordPress widget designed to randomize chunks of content.
 */
class ContentRotatorWidget extends WP_Widget {

    public $name = 'Content Rotator';
    public $description = 'Rotates chunks of content on a periodic basis';

    /* List all controllable options here along with a default value.
       The values can be distinct for each instance of the widget. */
    public $control_options = array();

    //!!! Magic Functions

    // The constructor.
    function __construct() {
        $widget_options = array(
            'classname'   => __CLASS__,
            'description' => $this->description,
        );
        parent::__construct(__CLASS__, $this->name, $widget_options, $this->control_options);
    }

    //!!! Static Functions

    static function register_this_widget() {
        register_widget(__CLASS__);
    }
}
/* EOF */

This is the simplest possible widget—we constructed it using only the __construct() function. We haven't implemented any other functions, but we are supplying enough information here for it to work. Specifically, we are supplying a name and a description, and that's enough to get started.

Let's take a moment to explain everything that just happened, especially since the official documentation here is a bit lacking. When we declared the ContentRotatorWidget class, we used the extends keyword. That's what makes this PHP class a widget, and that's what makes object-oriented code so useful. The __construct() function is called when an object is first created using the new command, so you might expect to see something like the following in our index.php file:

<?php $my_widget = new ContentRotatorWidget(); ?>

However, WordPress has obscured that from us—we just have to tell WordPress the classname of the widget we want to register via the register_widget() function, and it takes care of the rest by creating a new instance of this ContentRotatorWidget. There is a new instance being created, we just don't see it directly. Some of the official documentation still uses PHP 4 style examples of the constructor function—that is to say, the function whose name shares the name of the class. We feel that naming the constructor function __construct() is clearer.

You may have wondered why we didn't simply put the following into our index.php file:

register_widget('ContentRotatorWidget'); // may throw errors if called too soon!

If you do that, WordPress will try to register the widget before it's ready, and you'll get a fatal error: "Call to a member function register() on a non-object". That's why we delay the execution of that function by hooking it to the widgets_init action. We are also tying into the constructor of the parent class via the parent::__construct() function call. We'll explain the hierarchy in more detail later, but "parent" is a special keyword that can be used by a child class in order to call functions in the parent class. In this case, we want to tie into the WP_Widget __construct() function in order to properly instantiate our widget. Note our use of the PHP __CLASS__ constant—its value is the class name, so in this case, we could replace it with ContentRotatorWidget, but we wanted to provide you with more reusable code. You're welcome.

Lastly, have a look at the class variables we have declared at the top of the class: $name, $description, and $control_options. We have put them at the top of the class for easy access, then we have referenced them in the __construct() function using the $this variable. Note the syntax here for using class variables. We are declaring these variables as class variables for purposes of scope: we want their values to be available throughout the class. Please save your work before continuing.

Activating your plugin

Before we can actually use our widget and see what it looks like, we first have to activate our plugin in the manager. Your widget will never show up in the widget administration area if the plugin is not active! Just to be clear, you will have to activate two things: your plugin, and then your widget. The code is simple enough at this point for you to be able to quickly track down any errors.

Activating the widget

Now that the plugin code is active, we should see our widget show up in the widget administration area: Appearance | Widgets. Take a moment to notice the correlation between the widget's name and description and how we used the corresponding variables in our constructor function. Drag your widget into the primary widget area to make it active. Once it has been activated, refresh your homepage. Your active widget should print some text in the sidebar.

Congratulations! You have created your first WordPress widget! It is printing a default message: "function WP_Widget::widget() must be over-ridden in a sub-class", which is not very exciting, but technically speaking you have a functional widget. We still need to enable our widget to store some custom options (a rough sketch of the remaining widget methods appears at the end of this excerpt), but first we should ensure that everything is working correctly.

Having problems?

No widget? If you have activated your plugin, but you do not see your widget showing up in the widget administration area, make sure you have tied into a valid WordPress action! If you misspell the action, it will not get called! The action we are using in our index.php is widgets_init—don't forget the "s"!

White screen? Even if you have PHP error-reporting enabled, sometimes you suddenly end up with a completely white screen in both the frontend and in the manager. If there is a legitimate syntax error, that displays just fine, but if your PHP code is syntactically correct, you end up with nothing but a blank page. What's going on? A heavy-handed solution for when plugins go bad is to temporarily remove your plugin's folder from /wp-content/plugins, then refresh the manager. WordPress is smart enough to deactivate any plugin that it cannot find, so it can recover from this type of surgery. If you are experiencing the "White Screen of Death", it usually means that something is tragically wrong with your code, and it can take a while to track it down, because each time you deactivate the plugin by removing the folder, you have to reactivate it by moving the folder back and reactivating the plugin in the WordPress manager. This unenviable situation can occur if you accidentally chose a function name that was already in use (for example, register()—don't use that as a function name). You have to be especially careful of this when you are extending a class, because you may inadvertently override a vital function when you meant to create a new one. If you think you may have done this, drop and do 20 push-ups and then have a look at the original parent WP_Widget class and its functions in wp-includes/widgets.php. Remember that whenever you extend a class, it behooves you to look at the parent class' functions and their inputs and outputs. If this sounds like Greek to you, then the next section is for you.
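That default message comes from the un-overridden widget() method. As a preview of where this is heading, here is a rough, hypothetical sketch of how the remaining three WP_Widget methods could be filled in for ContentRotatorWidget; the 'content' option key and the single textarea form are illustrative assumptions, not the book's actual implementation. WordPress calls these methods automatically once the widget is registered.

// Hypothetical sketch: methods that could be added inside class ContentRotatorWidget.

// Frontend output: print one chunk of stored content.
function widget($args, $instance) {
    echo $args['before_widget'];
    echo isset($instance['content']) ? $instance['content'] : '';
    echo $args['after_widget'];
}

// Backend form: let the manager user edit the stored content.
function form($instance) {
    $content = isset($instance['content']) ? $instance['content'] : '';
    printf(
        '<textarea class="widefat" id="%s" name="%s">%s</textarea>',
        $this->get_field_id('content'),
        $this->get_field_name('content'),
        esc_textarea($content)
    );
}

// Prepare the submitted settings before WordPress saves them.
function update($new_instance, $old_instance) {
    $instance = $old_instance;
    $instance['content'] = $new_instance['content'];
    return $instance;
}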


Trunks using 3CX: Part 2

Packt
12 Feb 2010
7 min read
The next wizard screen is for Outbound Call Rules. Let's go over it enough so that you can setup a simple rule. We start off with a name. This can be anything you like but I prefer something meaningful. For our example I want to dial 9 to use the analog line, and only allow extensions 100-102 to use this line. I also only want to be able to dial certain phone numbers. Then I have to delete the 9 before it goes out to the phone carrier. Let's have a look at each section of this screen: Calls to numbers starting with (Prefix) This is where you specify what you want someone to dial before the line is used. You could enter a string of numbers here to use as a "password" to dial out. You don't just let anyone call an international phone number, so set this to a string of numbers to use as your international password. Give the password only to those who need it. Just make sure you change it occasionally in case it slips out. Calls from extension(s) Now, you can specify who (by extension number) can use this gateway. Just enter the extension number(s) you want to allow either in a range (100-110), individually (100, 101, 104), or as a mix (100-103, 110). Usually, you will leave this open for everyone to use; otherwise, you will restrict extensions that were allowed to use the gateway, which will have repercussions of forwarding rules to external numbers. Calls to numbers with a length of This setting can be left blank if you want all calls to be able to go out on this gateway. In the next screenshot, I specified 3, 7, 10, and 11. This covers calls to 911, 411, 555-1234, 800-555-1234, and 1-800-555-1234, respectively. You can control what phone numbers go out based on the number of digits that are dialed. Route and strip options Since this is our only gateway right now, we will have it route the calls to the Patton gateway. The Strip Digits option needs to be set to 1. This will strip out the "9" that we specified above to dial out with. We can leave the Prepend section blank for now. Now, go ahead and click Finish: Once you click Finish, you will see a gateway wizard summary, as shown in the next screenshot. This shows you that the gateway is created, and it also gives an overview of the settings. Your next step is to get those settings configured on your gateway. There is a list of links for various supported gateways on the bottom of the summary page with up-to-date instructions. Feel free to visit those links. These links will take you to the 3CX website and explain how to configure that particular gateway. With Patton this is easy; click the Generate config file button. The only other information you need for the configuration file is the Subnet mask for the Patton gateway. Enter your network subnet mask in the box. Here, I entered a standard Class C subnet mask. This matches my 192.168.X.X network. Click OK when you are done: Once you click OK, your browser will prompt you to save the file, as shown in the following screenshot. Click Save: The following screenshot shows a familiar Save As Windows screen. I like to put this file in an easy-to-remember location on my hard drive. As I already have a 3CX folder created, I'm going to save the file there. You can change the name of the file if you wish. Click Save: Now that your file is saved, let's take a look at modifying those settings. Open the administration web interface and, on the left-hand side, click PSTN Devices. Go ahead and expand this by clicking the + sign next to it. 
Now, you will see our newly created Patton SN4114A gateway listed. Click the + sign again and expand that gateway. Next, click the Patton SN4114A name, and you will see the right-hand side window pane fill up with five separate tabs. The first tab is General. This is where you can change the gateway IP address, SIP port, and all the account details. If you change anything, you will need a new configuration file. So click the Generate config file button at the bottom of the screen. If you forgot to save the file previously, here's your chance to generate and save it again: On the Advanced tab, we have some Provider Capabilities. Leave these settings alone for now: We will leave the rest of the tabs for now. Go ahead and click the 10000 line information in the navigation pane on the left. These are the settings for that particular phone port (10000). The first group of settings that we can change is the authentication username and password. Remember, this is to register the line with 3CX and not to use the phone line. The next two sections are about what to do with an inbound call during Office Hours and Outside Office Hours. I didn't change anything from the gateway wizard but, on this screen, you can see that we selected Ring group 800 MainRingGroup. This is the Ring group that we configured previously. We also see similar drop-down boxes for Outside Office Hours. As no one will be in the office to answer the phone, I've selected a Digital Receptionist 801 DR1. In the section Other Options, the Outbound Caller ID box is used to enter what you would like to have presented to the outside world as caller ID information. If your phone carrier supports this, you can enter a phone number or a name. If the carrier does not support this, just leave it blank and talk to your carrier as to what you would require to have it assigned as your caller ID. The Allow outbound calls on this line and Allow incoming calls on this line checkboxes are used to limit calls in or out. Depending on your environment, you might want to leave one line selected as no outbound calls. This will always leave an incoming line for customers to call. Otherwise, unless you have other lines that they can call on, they will get a busy signal. Maximum simultaneous calls cannot be changed here as analog lines only support one call at a time. If you changed anything, click Apply and then go back and generate a new configuration file: For the most up-to-date information on configuring your gateway, visit the 3CX site: http://www.3cx.com/voip-gateways/index.html We will go over a summary of it here: Since nothing was changed, it is now time to configure the Patton device with the config file that we generated from the 3CX template. If you know the IP address of the device, go ahead and open a browser and navigate to that IP address. Mine would be http://192.168.2.10. If you do not know the IP address of your device, you will need the SmartNode discovery tool. The easiest place to get this tool is the CD that came with the device. You can also download it from http://www.3cx.com/downloads/misc/sndiscovery.zip, or search the Patton website for it. Go ahead and install the SmartNode discovery tool and run it. You will get a screen that tells you all the SmartNodes on your network with their IP address, MAC address, and firmware version. Double-click on the SmartNode to open the web interface in a browser. The default username is administrator, and the password field is left blank. 
Click Import/Export on the left and Import Configuration on the right. Click Browse to find the configuration file that we generated. Click Import and then Reload to restart the gateway with the new configuration. That's it. We can now receive incoming calls and make outbound calls.


Using jQuery and jQueryUI Widget Factory plugins with RequireJS

Packt
18 Jun 2013
5 min read
(For more resources related to this topic, see here.)

How to do it...

We must declare the jquery alias name within our Require.js configuration file.

require.config({
    // 3rd party script alias names
    paths: {
        // Core Libraries
        // --------------
        // jQuery
        "jquery": "libs/jquery",
        // Plugins
        // -------
        "somePlugin": "libs/plugins/somePlugin"
    }
});

If a jQuery plugin does not register itself as AMD compatible, we must also create a Require.js shim configuration to make sure Require.js loads jQuery before the jQuery plugin.

shim: {
    // Twitter Bootstrap plugins depend on jQuery
    "bootstrap": ["jquery"]
}

We will now be able to dynamically load a jQuery plugin with the require() method.

// Dynamically loads a jQuery plugin using the require() method
require(["somePlugin"], function() {
    // The callback function is executed after the plugin is loaded
});

We will also be able to list a jQuery plugin as a dependency to another module.

// Sample file
// -----------
// The define method is passed a dependency array and a callback function
define(["jquery", "somePlugin"], function ($) {
    // Wraps all logic inside of a jQuery.ready event
    $(function() {
    });
});

When using a jQueryUI Widget Factory plugin, we create Require.js path names for both the jQueryUI Widget Factory and the jQueryUI Widget Factory plugin:

"jqueryui": "libs/jqueryui",
"selectBoxIt": "libs/plugins/selectBoxIt"

Next, create a shim configuration property:

// The jQueryUI Widget Factory depends on jQuery
"jqueryui": ["jquery"],
// The selectBoxIt plugin depends on both jQuery and the jQueryUI Widget Factory
"selectBoxIt": ["jqueryui"]

We will now be able to dynamically load the jQueryUI Widget Factory plugin with the require() method:

// Dynamically loads the jQueryUI Widget Factory plugin, selectBoxIt, using the require() method
require(["selectBoxIt"], function() {
    // The callback function is executed after selectBoxIt.js
    // (and all of its dependencies) have been loaded
});

We will also be able to list the jQueryUI Widget Factory plugin as a dependency to another module:

// Sample file
// -----------
// The define method is passed a dependency array and a callback function
define(["jquery", "selectBoxIt"], function ($) {
    // Wraps all logic inside of a jQuery.ready event
    $(function() {
    });
});

How it works...

Luckily for us, jQuery adheres to the AMD specification and registers itself as a named AMD module. If you are confused about how/why they are doing that, let's take a look at the jQuery source:

// Expose jQuery as an AMD module
if ( typeof define === "function" && define.amd && define.amd.jQuery ) {
    define( "jquery", [], function () { return jQuery; } );
}

jQuery first checks to make sure there is a global define() function available on the page. Next, jQuery checks if the define function has an amd property, which all AMD loaders that adhere to the AMD API should have. Remember that in JavaScript, functions are first class objects, and can contain properties. Finally, jQuery checks to see if the amd property contains a jQuery property, which should only be there for AMD loaders that understand the issues with loading multiple versions of jQuery in a page that all might call the define() function. Essentially, jQuery is checking that an AMD script loader is on the page, and then registering itself as a named AMD module (jquery). Since jQuery exports itself as the named AMD module, jquery, you must use this exact name when setting the path configuration to your own version of jQuery, or Require.js will throw an error.
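If you are writing a plugin of your own rather than shimming a third-party file, you can borrow the same pattern jQuery uses. The following is only a sketch to illustrate the idea and is not part of the recipe above; the plugin name myPlugin and its one-line body are invented for the example:

// Sketch of an AMD-aware jQuery plugin wrapper (myPlugin is a made-up name)
(function (factory) {
    if (typeof define === "function" && define.amd) {
        // An AMD loader such as Require.js is present: register as an
        // anonymous module and depend on the named "jquery" module
        define(["jquery"], factory);
    } else {
        // No AMD loader on the page: fall back to the global jQuery object
        factory(jQuery);
    }
}(function ($) {
    // Normal plugin code runs here, with $ guaranteed to be jQuery
    $.fn.myPlugin = function () {
        return this.addClass("my-plugin");
    };
}));

A plugin wrapped this way needs no shim entry, because it declares its jQuery dependency itself.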
If a jQuery plugin registers itself as an anonymous AMD module and jQuery is also listed with the proper lowercased jquery alias name within your Require.js configuration file, using the plugin with the require() and define() methods will work as you expect. Unfortunately, most jQuery plugins are not AMD compatible, and do not wrap themselves in an optional define() method and list jquery as a dependency. To get around this issue, we can use the Require.js shim object configuration like we have seen before to tell Require.js that a file depends on jQuery. The shim configuration is a great solution for jQuery plugins that do not register themselves as AMD modules. Unfortunately, unlike jQuery, jQueryUI does not currently register itself as a named AMD module, which means that plugin authors who use the jQueryUI Widget Factory cannot provide AMD compatibility. Since the jQueryUI Widget Factory is not AMD compatible, we must use a workaround involving the paths and shim configuration objects to properly define the plugin as an AMD module.

There's more...

You will most likely always register your own files as anonymous AMD modules, but jQuery is a special case. Registering itself as a named AMD module allows other third-party libraries that depend on jQuery, such as jQuery plugins, to become AMD compatible by calling the define() method themselves and using the community agreed upon module name, jquery, to list jQuery as a dependency.

Summary

This article demonstrates how to use jQuery and jQueryUI Widget Factory plugins with Require.js.

Resources for Article:

Further resources on this subject: So, what is KineticJS? [Article] HTML5 Presentations - creating our initial presentation [Article] Tips & Tricks for Ext JS 3.x [Article]

Building a News Aggregating Site in Joomla!

Packt
26 May 2010
3 min read
(For more resources on Joomla!, see here.) The completed news aggregation site will look similar to the example shown in the following screenshot:

Build Weird Hap'nins

Vaughan Pyre is a very ambitious webpreneur. What he really hopes for is a website that is completely self-maintaining, and on which he can place some Google AdSense blocks. Clicks from the visitors to his site will ensure that he makes lots of money. For this, he needs a site where the content updates regularly with fresh content so that visitors will keep coming back to click on some more Google ads. Vaughan's ultimate objective is to create several of these websites.

Template

The template chosen is Midnight by BuyHTTP, which is a template that fits the theme of this unique website.

Extensions

This is, surprisingly, a very simple site to build, and much of what is required can actually be achieved by using the native News Feeds component. However, the News Feeds component will only list the title links to the external feed items, whereas what Vaughan wants is that the feeds are pulled into the site as articles. Therefore, we will be using an automatic article generator component. There are several such components on the Joomla! extensions site, but almost all of them are commercial. Vaughan is a skinflint and will not pay to buy any script, so what we are looking for is a free component. That is why we have chosen the following:

4RSS—aggregates RSS feeds and creates articles from them
JCron Scheduler—used for cron jobs management and scheduling to simulate cron jobs through the Joomla! frontend interface at preset intervals

Indeed, were it not for the fact that Vaughan needs the content to be updated automatically, we would not need any extension other than the 4RSS component.

Other extensions

The core module that will be used for this site is:

Main Menu module—creates the primary navigation functionality for the site pages

Sections and categories

New sections and categories will need to be created so that incoming article feeds will be correctly routed according to their description. A new section will be created that we will call Feed. Under this section, we shall have three categories—Bad News, More Bad News, and Weird News.

Create a new section

We intend to create a section that will be named Feed. In order to do this, perform the following steps: Navigate to the Section Manager from the Control Panel, and then click on the New icon at the top right-hand side, in order to create a new section. On the next page, add the name of the section, and then save your changes.

Create new categories

To create a new category, perform the following steps: Navigate to the Category Manager page from the Control Panel. On the following page, create a new category in the same way as we created the new section. However, remember to set the Section to Feed.


Hands-on Tutorial on EJB 3.1 Security

Packt
15 Jun 2011
9 min read
EJB 3.1 Cookbook Security is an important aspect of many applications. Central to EJB security is the control of access to classes and methods. There are two approaches to controlling access to EJBs. The first, and the simplest, is through the use of declarative annotations to specify the types of access permitted. The second approach is to use code to control access to the business methods of an EJB. This second approach should not be used unless the declarative approach does not meet the needs of the application. For example, access to a method may be denied during certain times of the day or during certain maintenance periods. Declarative security is not able to handle these types of situations. In order to incorporate security into an application, it is necessary to understand the Java EE environment and its terminology. The administration of security for the underlying operating system is different from that provided by the EE server. The EE server is concerned with realms, users and groups. The application is largely concerned with roles. The roles need to be mapped to users and groups of a realm for the application to function properly. A realm is a domain for a server that incorporates security policies. It possesses a set of users and groups which are considered valid users of an application. A user typically corresponds to an individual while a group is a collection of individuals. Group members frequently share a common set of responsibilities. A Java EE server may manage multiple realms. An application is concerned with roles. Access to EJBs and their methods is determined by the role of a user. Roles are defined in such a manner as to provide a logical way of deciding which users/groups can access which methods. For example, a management type role may have the capability to approve a travel voucher whereas an employee role should not have that capability. By assigning certain users to a role and then specifying which roles can access which methods, we are able to control access to EJBs. The use of groups makes the process of assigning roles easier. Instead of having to map each individual to a role, the user is assigned to a group and the group is mapped to a role. The business code does not have to check every individual. The Java EE server manages the assignment of users to groups. The application needs only be concerned with controlling a group's access. A group is a server level concept. Roles are application level. One group can be associated with multiple applications. For example, a student group may use a student club and student registration application while a faculty group might also use the registration application but with more capability. A role is simply a name for a set of capabilities. For example, an auditor role may be to review and certify a set of accounts. This role would require read access to many, if not all, of the accounts. However, modification privileges may be restricted. Each application has its own set of roles which have been defined to meet the security needs of the application. The EE server manages realms consisting of users, groups, and resources. The server will authenticate users using Java's underlying security features. The user is then referred to as a principal and has a credential containing the user's security attributes. During the deployment of an application, users and groups are mapped to roles of the application using a deployment descriptor. 
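As an illustration of that mapping (this is a sketch, not part of the recipes that follow), a GlassFish deployment descriptor such as glassfish-application.xml, or the older sun-application.xml, uses security-role-mapping elements; the role and group names below are invented for the example:

<!-- Sketch only: maps application roles to server groups; the names are assumptions -->
<security-role-mapping>
    <role-name>manager</role-name>
    <group-name>managers</group-name>
</security-role-mapping>
<security-role-mapping>
    <role-name>employee</role-name>
    <group-name>staff</group-name>
</security-role-mapping>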
The configuration of the deployment descriptor is normally the responsibility of the application deployer. During the execution of the application, the Java Authentication and Authorization Service (JAAS) API authenticates a user and creates a principal representing the user. The principal is then passed to an EJB. Security in a Java EE environment can be viewed from different perspectives. When information is passed between clients and servers, transport level security comes into play. Security at this level can include Secure HTTP (HTTPS) and Secure Sockets Layer (SSL). Messages can be sent across a network in the form of Simple Object Access Protocol (SOAP) messages. These messages can be encrypted. The EE container for EJBs provides application level security which is the focus of the chapter. Most servers provide unified security support between the web container and the EJB container. For example, calls from a servlet in a web container to an EJB are handled automatically resulting in a flexible security mechanism. Most of the recipes presented in this article are interrelated. If your intention is to try out the code examples, then make sure you cover the first two recipes as they provide the framework for the execution of the other recipes. In the first recipe, Creating the SecurityApplication, we create the foundation application for the remaining recipes. In the second recipe, Configuring the server to handle security, the basic steps needed to configure security for an application are presented. The use of declarative security is covered in the Controlling security using declarations recipe while programmatic security is discussed in the next article on Controlling security programmatically. The Understanding and declaring roles recipe examines roles in more detail and the Propagating identity recipe talks about how the identity of a user is managed in an application. Creating the SecurityApplication In this article, we will create a SecurityApplication built around a simple Voucher entity to persist travel information. This is a simplified version of an application that allows a user to submit a voucher and for a manager to approve or disapprove it. The voucher entity itself will hold only minimal information. Getting ready The illustration of security will be based on a series of classes: Voucher – An entity holding travel-related information VoucherFacade – A facade class for the entity AbstractFacade – The base class of the VoucherFacade VoucherManager – A class used to manage vouchers and where most of the security techniques will be demonstrated SecurityServlet – A servlet used to drive the demonstrations All of these classes will be members of the packt package in the EJB module except for the servlet which will be placed in the servlet package of the WAR module. How to do it... Create a Java EE application called SecurityApplication with an EJB and a WAR module. Add a packt package to the EJB module and an entity called Voucher to the package. Add five private instance variables to hold a minimal amount of travel information: name, destination, amount, approved, and an id. Also, add a default and a three argument constructor to the class to initialize the name, destination, and amount fields. The approved field is also set to false. The intent of this field is to indicate whether the voucher has been approved or not. Though not shown below, also add getter and setter methods for these fields. You may want to add other methods such as a toString method if desired. 
@Entity public class Voucher implements Serializable { private String name; private String destination; private BigDecimal amount; private boolean approved; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; public Voucher() { } public Voucher(String name, String destination, BigDecimal amount) { this.name = name; this.destination = destination; this.amount = amount; this.approved = false; } ... } Next, add an AbstractFacade class and a VoucherFacade class derived from it. The VoucherFacade class is shown below. As with other facade classes found in previous chapters, the class provides a way of accessing an entity manager and the base class methods of the AbstractFacade class. @Stateless public class VoucherFacade extends AbstractFacade<Voucher> { @PersistenceContext(unitName = "SecurityApplication-ejbPU") private EntityManager em; protected EntityManager getEntityManager() { return em; } public VoucherFacade() { super(Voucher.class); } } Next, add a stateful EJB called VoucherManager. Inject an instance of the VoucherFacade class using the @EJB annotation. Also add an instance variable for a Voucher. We need a createVoucher method that accepts a name, destination, and amount arguments, and then creates and subsequently persists the Voucher. Also, add get methods to return the name, destination, and amount of the voucher. @Stateful public class VoucherManager { @EJB VoucherFacade voucherFacade; Voucher voucher; public void createVoucher(String name, String destination, BigDecimal amount) { voucher = new Voucher(name, destination, amount); voucherFacade.create(voucher); } public String getName() { return voucher.getName(); } public String getDestination() { return voucher.getDestination(); } public BigDecimal getAmount() { return voucher.getAmount(); } ... } Next add three methods: submit – This method is intended to be used by an employee to submit a voucher for approval by a manager. To help explain the example, display a message showing when the method has been submitted. approve – This method is used by a manager to approve a voucher. It should set the approved field to true and return true. reject – This method is used by a manager to reject a voucher. It should set the approved field to false and return false. @Stateful public class VoucherManager { ... public void submit() { System.out.println("Voucher submitted"); } public boolean approve() { voucher.setApproved(true); return true; } public boolean reject() { voucher.setApproved(false); return false; } } To complete the application framework, add a package called servlet to the WAR module and a servlet called SecurityServlet to the package. Use the @EJB annotation to inject a VoucherManager instance field into the servlet. In the try block of the processRequest method, add code to create a new voucher and then use the submit method to submit it. Next, display a message indicating the submission of the voucher. 
public class SecurityServlet extends HttpServlet {
    @EJB
    VoucherManager voucherManager;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            voucherManager.createVoucher("Susan Billings", "SanFrancisco", BigDecimal.valueOf(2150.75));
            voucherManager.submit();
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet SecurityServlet</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h3>Voucher was submitted</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the SecurityServlet. Its output should appear as shown in the following screenshot:

How it works...

In the Voucher entity, notice the use of BigDecimal for the amount field. This java.math package class is a better choice for currency data than float or double. Its use avoids problems which can occur with rounding. The @GeneratedValue annotation, used with the id field, lets the persistence provider generate unique primary key values for the entity automatically. In the VoucherManager class, notice the injection of the stateless VoucherFacade session EJB into a stateful VoucherManager EJB. Each invocation of a VoucherFacade method may result in the method being executed against a different instance of VoucherFacade. This is the correct use of a stateless session EJB. The injection of a stateful EJB into a stateless EJB is not recommended.
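Although the declarative recipes are covered later, a minimal sketch of what declarative security could look like on the VoucherManager EJB is shown below. The role names employee and manager are assumptions made for this sketch; they would have to be declared for the application and mapped to groups in the deployment descriptor. The annotations come from the javax.annotation.security package.

// Sketch only: declarative security on VoucherManager with assumed role names
@Stateful
@DeclareRoles({"employee", "manager"})
public class VoucherManager {
    @EJB
    VoucherFacade voucherFacade;
    Voucher voucher;

    // Both employees and managers may submit a voucher
    @RolesAllowed({"employee", "manager"})
    public void submit() {
        System.out.println("Voucher submitted");
    }

    // Only managers may approve or reject a voucher
    @RolesAllowed("manager")
    public boolean approve() {
        voucher.setApproved(true);
        return true;
    }

    @RolesAllowed("manager")
    public boolean reject() {
        voucher.setApproved(false);
        return false;
    }
}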


Introducing Web Components

Packt
19 May 2015
16 min read
In this article by Sandeep Kumar Patel, author of the book Learning Web Component Development, we will learn about the web component specification in detail. Web component is changing the web application development process. It comes with standard and technical features, such as templates, custom elements, Shadow DOM, and HTML Imports. The main topics that we will cover in this article about web component specification are as follows: What are web components? Benefits and challenges of web components The web component architecture Template element HTML Import (For more resources related to this topic, see here.) What are web components? Web components are a W3C specification to build a standalone component for web applications. It helps developers leverage the development process to build reusable and reliable widgets. A web application can be developed in various ways, such as page focus development and navigation-based development, where the developer writes the code based on the requirement. All of these approaches fulfil the present needs of the application, but may fail in the reusability perspective. This problem leads to component-based development. Benefits and challenges of web components There are many benefits of web components: A web component can be used in multiple applications. It provides interoperability between frameworks, developing the web component ecosystem. This makes it reusable. A web component has a template that can be used to put the entire markup separately, making it more maintainable. As web components are developed using HTML, CSS, and JavaScript, it can run on different browsers. This makes it platform independent. Shadow DOM provides encapsulation mechanism to style, script, and HTML markup. This encapsulation mechanism provides private scope and prevents the content of the component being affected by the external document. Equally, some of the challenges for a web component include: Implementation: The W3C web component specification is very new to the browser technology and not completely implemented by the browsers. Shared resource: A web component has its own scoped resources. There may be cases where some of the resources between the components are common. Performance: Increase in the number of web components takes more time to get used inside the DOM. Polyfill size: The polyfill are a workaround for a feature that is not currently implemented by the browsers. These polyfill files have a large memory foot print. SEO: As the HTML markup present inside the template is inert, it creates problems in the search engine for the indexing of web pages. The web component architecture The W3C web component specification has four main building blocks for component development. Web component development is made possible by template, HTML Imports, Shadow DOM, and custom elements and decorators. However, decorators do not have a proper specification at present, which results in the four pillars of web component paradigm. The following diagram shows the building blocks of web component: These four pieces of technology power a web component that can be reusable across the application. In the coming section, we will explore these features in detail and understand how they help us in web component development. Template element The HTML <template> element contains the HTML markup, style, and script, which can be used multiple times. The templating process is nothing new to a web developer. 
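Before looking at the interface definition, it helps to see the shape a template takes in markup. The following is a minimal sketch; the id value and the markup inside the element are invented for illustration:

<!-- Sketch only: a template holding markup that is parsed but not rendered -->
<template id="profileCardTemplate">
    <style>
        .card { border: 1px solid #ccc; padding: 8px; }
    </style>
    <div class="card">
        <h2 class="name"></h2>
        <img class="avatar" src="" alt="profile image">
    </div>
</template>

Nothing inside the element is rendered, and no scripts inside it run, until the content is activated; that behaviour is exactly what the following sections explore.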
Handlebars, Mustache, and Dust are the templating libraries that are already present and heavily used for web application development. To streamline this process of template use, W3C web component specification has included the <template> element. This template element is very new to web development, so it lacks features compared to the templating libraries such as Handlebars.js that are present in the market. In the near future, it will be equipped with new features, but, for now, let's explore the present template specification. Template element detail The HTML <template> element is an HTMLTemplateElement interface. The interface definition language (IDL) definition of the template element is listed in the following code: interface HTMLTemplateElement : HTMLElement {readonly attribute DocumentFragment content;}; The preceding code is written in IDL language. This IDL language is used by the W3C for writing specification. Browsers that support HTML Import must implement the aforementioned IDL. The details of the preceding code are listed here: HTMLTemplateElement: This is the template interface and extends the HTMLElement class. content: This is the only attribute of the HTML template element. It returns the content of the template and is read-only in nature. DocumentFragment: This is a return type of the content attribute. DocumentFragment is a lightweight version of the document and does not have a parent. To find out more about DocumentFargment, use the following link: https://developer.mozilla.org/en/docs/Web/API/DocumentFragment Template feature detection The HTML <template> element is very new to web application development and not completely implemented by all browsers. Before implementing the template element, we need to check the browser support. The JavaScript code for template support in a browser is listed in the following code: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: template support</title></head><body><h1 id="message"></h1><script>var isTemplateSupported = function () {var template = document.createElement("template");return 'content' in template;};var isSupported = isTemplateSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Template element is supported by thebrowser.";} else {message.innerHTML = "Template element is not supported bythe browser.";}</script></body></html> In the preceding code, the isTemplateSupported method checks the content property present inside the template element. If the content attribute is present inside the template element, this method returns either true or false. If the template element is supported by the browser, the h1 element will show the support message. The browser that is used to run the preceding code is Chrome 39 release. The output of the preceding code is shown in following screenshot: The preceding screenshot shows that the browser used for development is supporting the HTML template element. There is also a great online tool called "Can I Use for checking support for the template element in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=template The following screenshot shows the current status of the support for the template element in the browsers using the Can I Use online tool: Inert template The HTML content inside the template element is inert in nature until it is activated. 
The inertness of template content contributes to increasing the performance of the web application. The following code demonstrates the inertness of the template content: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: A inert template content example.</title></head><body><div id="message"></div><template id="aTemplate"><img id="profileImage"src="http://www.gravatar.com/avatar/c6e6c57a2173fcbf2afdd5fe6786e92f.png"><script>alert("This is a script.");</script></template><script>(function(){var imageElement =document.getElementById("profileImage"),messageElement = document.getElementById("message");messageElement.innerHTML = "IMG element "+imageElement;})();</script></body></html> In the preceding code, a template contains an image element with the src attribute, pointing to a Gravatar profile image, and an inline JavaScript alert method. On page load, the document.getElementById method is looking for an HTML element with the #profileImage ID. The output of the preceding code is shown in the following screenshot: The preceding screenshot shows that the script is not able to find the HTML element with the profileImage ID and renders null in the browser. From the preceding screenshot it is evident that the content of the template is inert in nature. Activating a template By default, the content of the <template> element is inert and are not part of the DOM. The two different ways that can be used to activate the nodes are as follows: Cloning a node Importing a node Cloning a node The cloneNode method can be used to duplicate a node. The syntax for the cloneNode method is listed as follows: <Node> <target node>.cloneNode(<Boolean parameter>) The details of the preceding code syntax are listed here: This method can be applied on a node that needs to be cloned. The return type of this method is Node. The input parameter for this method is of the Boolean type and represents a type of cloning. There are 2 different types of cloning, listed as follows: Deep cloning: In deep cloning, the children of the targeted node also get copied. To implement deep cloning, the Boolean input parameter to cloneNode method needs to be true. Shallow cloning: In shallow cloning, only the targeted node is copied without the children. To implement shallow cloning the Boolean input parameter to cloneNode method needs to be false. The following code shows the use of the cloneNode method to copy the content of a template, having the h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using cloneNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using cloneNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = templateContent.cloneNode(true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using a content property and saved in a templateContent variable. The cloneNode method is then used for deep cloning to get the activated node that is later appended to a div element. 
The following screenshot shows the output of the preceding code: To find out more about the cloneNode method visit: https://developer.mozilla.org/en-US/docs/Web/API/Node.cloneNode Importing a node The importNode method is another way of activating the template content. The syntax for the aforementioned method is listed in the following code: <Node> document.importNode(<target node>,<Boolean parameter>) The details of the preceding code syntax are listed as follows: This method returns a copy of the node from an external document. This method takes two input parameters. The first parameter is the target node that needs to be copied. The second parameter is a Boolean flag and represents the way the target node is cloned. If the Boolean flag is false, the importNode method makes a shallow copy, and for a true value, it makes a deep copy. The following code shows the use of the importNode method to copy the content of a template containing an h1 element with some text: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: Activating template using importNode method</title></head><body><div id="container"></div><template id="aTemplate"><h1>Template is activated using importNode method.</h1></template><script>var aTemplate = document.querySelector("#aTemplate"),container = document.getElementById("container"),templateContent = aTemplate.content,activeContent = document.importNode(templateContent,true);container.appendChild(activeContent);</script></body></html> In the preceding code, the template element has the aTemplate ID and is referenced using the querySelector method. The HTML markup content inside the template is then retrieved using the content property and saved in the templateContent variable. The importNode method is then used for deep cloning to get the activated node that is later appended to a div element. The following screenshot shows the output of the preceding code: To find out more about the importNode method, visit: http://mdn.io/importNode HTML Import The HTML Import is another important piece of technology of the W3C web component specification. It provides a way to include another HTML document present in a file with the current document. HTML Imports provide an alternate solution to the Iframe element, and are also great for resource bundling. The syntax of the HTML Imports is listed as follows: <link rel="import" href="fileName.html"> The details of the preceding syntax are listed here: The HTML file can be imported using the <link> tag and the rel attribute with import as the value. The href string points to the external HTML file that needs to be included in the current document. The HTML import element is implemented by the HTMLElementLink class. The IDL definition of HTML Import is listed in the following code: partial interface LinkImport {readonly attribute Document? import;};HTMLLinkElement implements LinkImport; The preceding code shows IDL for the HTML Import where the parent interface is LinkImport which has the readonly attribute import. The HTMLLinkElement class implements the LinkImport parent interface. The browser that supports HTML Import must implement the preceding IDL. HTML Import feature detection The HTML Import is new to the browser and may not be supported by all browsers. To check the support of the HTML Import in the browser, we need to check for the import property that is present inside a <link> element. 
The code to check the HTML import support is as follows: <!DOCTYPE html><html><head lang="en"><meta charset="UTF-8"><title>Web Component: HTML import support</title></head><body><h1 id="message"></h1><script>var isImportSupported = function () {var link = document.createElement("link");return 'import' in link;};var isSupported = isImportSupported(),message = document.getElementById("message");if (isSupported) {message.innerHTML = "Import is supported by the browser.";} else {message.innerHTML = "Import is not supported by thebrowser.";}</script></body></html> The preceding code has a isImportSupported function, which returns the Boolean value for HTML import support in the current browser. The function creates a <link> element and then checks the existence of an import attribute using the in operator. The following screenshot shows the output of the preceding code: The preceding screenshot shows that the import is supported by the current browser as the isImportSupported method returns true. The Can I Use tool can also be utilized for checking support for the HTML Import in the current browser. To check out the template support in the browser, use the following link: http://caniuse.com/#feat=imports The following screenshot shows the current status of support for the HTML Import in browsers using the Can I Use online tool: Accessing the HTML Import document The HTML Import includes the external document to the current page. We can access the external document content using the import property of the link element. In this section, we will learn how to use the import property to refer to the external document. The message.html file is an external HTML file document that needs to be imported. The content of the message.html file is as follows: <h1>This is from another HTML file document.</h1> The following code shows the HTML document where the message.html file is loaded and referenced by the import property: <!DOCTYPE html><html><head lang="en"><link rel="import" href="message.html"></head><body><script>(function(){var externalDocument =document.querySelector('link[rel="import"]').import;headerElement = externalDocument.querySelector('h1')document.body.appendChild(headerElement.cloneNode(true));})();</script></body></html> The details of the preceding code are listed here: In the header section, the <link> element is importing the HTML document present inside the message.html file. In the body section, an inline <script> element using the document.querySelector method is referencing the link elements having the rel attribute with the import value. Once the link element is located, the content of this external document is copied using the import property to the externalDocument variable. The header h1 element inside the external document is then located using a quesrySelector method and saved to the headerElement variable. The header element is then deep copied using the cloneNode method and appended to the body element of the current document. The following screenshot shows the output of the preceding code: HTML Import events The HTML <link> element with the import attribute supports two event handlers. These two events are listed "as follows: load: This event is fired when the external HTML file is imported successfully onto the current page. A JavaScript function can be attached to the onload attribute, which can be executed on a successful load of the external HTML file. error: This event is fired when the external HTML file is not loaded or found(HTTP code 404 not found). 
A JavaScript function can be attached to the onerror attribute, which can be executed on error of importing the external HTML file. The following code shows the use of these two event types while importing the message.html file to the current page: <!DOCTYPE html><html><head lang="en"><script async>function handleSuccess(e) {//import load Successfulvar targetLink = e.target,externalDocument = targetLink.import;headerElement = externalDocument.querySelector('h1'),clonedHeaderElement = headerElement.cloneNode(true);document.body.appendChild(clonedHeaderElement);}function handleError(e) {//Error in loadalert("error in import");}</script><link rel="import" href="message.html"onload="handleSuccess(event)"onerror="handleError(event)"></head><body></body></html> The details of the preceding code are listed here: handleSuccess: This method is attached to the onload attribute which is executed on the successful load of message.html in the current document. The handleSuccess method imports the document present inside the message.html file, then it finds the h1 element, and makes a deep copy of it . The cloned h1 element then gets appended to the body element. handleError: This method is attached to the onerror attribute of the <link> element. This method will be executed if the message.html file is not found. As the message.html file is imported successfully, the handleSuccess method gets executed and header element h1 is rendered in the browser. The following screenshot shows the output of the preceding code: Summary In this article, we learned about the web component specification. We also explored the building blocks of web components such as HTML Imports and templates. Resources for Article: Further resources on this subject: Learning D3.js Mapping [Article] Machine Learning [Article] Angular 2.0 [Article]

Q Subscription Maintenance in IBM Infosphere

Packt
11 Nov 2010
12 min read
  IBM InfoSphere Replication Server and Data Event Publisher Design, implement, and monitor a successful Q replication and Event Publishing project Covers the toolsets needed to implement a successful Q replication project Aimed at the Linux, Unix, and Windows operating systems, with many concepts common to z/OS as well A chapter dedicated exclusively to WebSphere MQ for the DB2 DBA Detailed step-by-step instructions for 13 Q replication scenarios with troubleshooting and monitoring tips Written in a conversational and easy to follow manner Checking the state of a Q subscription The state of a Q subscription is recorded in the IBMQREP_SUBS table, and can be queried as follows: db2 "SELECT SUBSTR(subname,1,10) AS subname, state FROM asn.ibmqrep_subs" SUBNAME STATE -------- ----- DEPT0001 A XEMP0001 A Stopping a Q subscription The command to stop a Q subscription is STOP QSUB SUBNAME <qsubname>. Note that if Q Capture is not running, then the command will not take effect until Q Capture is started, because the STOP QSUB command generates an INSERT command into the IBMQREP_SIGNAL table: INSERT INTO ASN.IBMQREP_SIGNAL (signal_type, signal_subtype, signal_input_in) VALUES ('CMD', 'CAPSTOP', 'T10001'); In a unidirectional setup, to stop a Q subscription called T10001 where the Q Capture and Q Apply control tables have a schema of ASN, create a text file called SYSA_qsub_stop_uni.asnclp containing the following ASNCLP commands: ASNCLP SESSION SET TO Q REPLICATION; SET RUN SCRIPT NOW STOP ON SQL ERROR ON; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2B; SET CAPTURE SCHEMA SOURCE ASN; SET APPLY SCHEMA ASN; stop qsub subname T10001; In bidirectional or peer-to-peer two-way replication, we have to specify both Q subscriptions (T10001 and T10002) for the subscription group: ASNCLP SESSION SET TO Q REPLICATION; SET RUN SCRIPT NOW STOP ON SQL ERROR ON; SET CAPTURE SCHEMA SOURCE ASN; SET APPLY SCHEMA ASN; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2B; stop qsub subname T10001; SET SERVER CAPTURE TO DB DB2B; SET SERVER TARGET TO DB DB2A; stop qsub subname T10002; In a Peer-to-peer four-way setup, the commands would be in a file called qsub_stop_p2p4w.asnclp containing the following ASNCLP commands: ASNCLP SESSION SET TO Q REPLICATION; SET RUN SCRIPT NOW STOP ON SQL ERROR ON; SET CAPTURE SCHEMA SOURCE ASN; SET APPLY SCHEMA ASN; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2B; stop qsub subname T10001; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2C; stop qsub subname T10002; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2D; stop qsub subname T10003; Dropping a Q subscription The ASNCLP command to drop a Q subscription is: DROP QSUB (SUBNAME <qsubname> USING REPLQMAP <repqmapname>); In a unidirectional setup, to drop a Q subscription called T10001, which uses a Replication Queue Map called RQMA2B, create a file called drop_qsub_uni.asnclp containing the following ASNCLP commands: ASNCLP SESSION SET TO Q REPLICATION; SET RUN SCRIPT NOW STOP ON SQL ERROR ON; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2B; SET CAPTURE SCHEMA SOURCE ASN; SET APPLY SCHEMA ASN; drop qsub (subname TAB1 using replqmap RQMA2B); We can use the SET DROP command to specify whether for unidirectional replication the target table and its table space are dropped when a Q subscription is deleted: SET DROP TARGET [NEVER|ALWAYS] The default is not to drop the target table. 
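As a sketch of how that option fits into the unidirectional drop script shown above (assuming, for the example, that we do want the target table dropped), the SET DROP TARGET statement sits with the other SET statements before the DROP QSUB command:

ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
SET SERVER CAPTURE TO DB DB2A;
SET SERVER TARGET TO DB DB2B;
SET CAPTURE SCHEMA SOURCE ASN;
SET APPLY SCHEMA ASN;
SET DROP TARGET ALWAYS;
drop qsub (subname TAB1 using replqmap RQMA2B);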
In a multi-directional setup, there are three methods we can use: In the first method, we need to issue the DROP QSUB command twice, once for the Q subscription from DB2A to DB2B and once for the Q subscription from DB2B to DB2A. In this method, we need to know the Q subscription and Replication Queue Map names, which is shown in the qsub_drop_bidi0.asnclp file: ASNCLP SESSION SET TO Q REPLICATION; SET RUN SCRIPT NOW STOP ON SQL ERROR ON; SET CAPTURE SCHEMA SOURCE ASN; SET APPLY SCHEMA ASN; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2B; drop qsub (subname T10001 using replqmap RQMA2B); SET SERVER CAPTURE TO DB DB2B; SET SERVER TARGET TO DB DB2A; drop qsub (subname T10002 using replqmap RQMB2A); In the second method, we use the DROP SUBTYPE command, which is used to delete the multi-directional Q subscriptions for a single logical table. We use the DROP SUBTYPE command with the SET REFERENCE TABLE construct, which identifies a Q subscription for multi-directional replication. An example of using these two is shown in the following content file, which drops all the Q subscriptions for the source table eric.t1. This content file needs to be called from a load script file. SET SUBGROUP "TABT1"; SET SERVER MULTIDIR TO DB "DB2A"; SET SERVER MULTIDIR TO DB "DB2B"; SET REFERENCE TABLE USING SCHEMA "DB2A".ASN USES TABLE eric.t1; DROP SUBTYPE B QSUBS; The USING SCHEMA part of the SET REFERENCE TABLE command identifies the server that contains the table (DB2A) and the schema (ASN) of the control tables in which this table is specified as a source and target. The USES TABLE part specifies the table schema (eric) and table name (t1) to which the Q subscription applies. When we use this command, no tables or table spaces are ever dropped. The SUBGROUP name must be the valid for the tables whose Q subscriptions we want to drop. We can find the SUBGROUP name for a table using the following query: db2 "SELECT SUBSTR(subgroup,1,10) AS subsgroup, SUBSTR(source_ owner,1,10) as schema, SUBSTR(source_name,1,10) as name FROM asn. ibmqrep_subs" SUBSGROUP SCHEMA NAME ------ ------- ------ TABT2 DB2ADMIN DEPT TABT2 DB2ADMIN XEMP The preceding ASNCLP command generates the following SQL: -- CONNECT TO DB2B USER XXXX using XXXX; DELETE FROM ASN.IBMQREP_TRG_COLS WHERE subname = 'T10001' AND recvq = 'CAPA.TO.APPB.RECVQ'; DELETE FROM ASN.IBMQREP_TARGETS WHERE subname = 'T10001' AND recvq = 'CAPA.TO.APPB.RECVQ'; DELETE FROM ASN.IBMQREP_SRC_COLS WHERE subname = 'T10002'; DELETE FROM ASN.IBMQREP_SUBS WHERE subname = 'T10002'; -- CONNECT TO DB2A USER XXXX using XXXX; DELETE FROM ASN.IBMQREP_SRC_COLS WHERE subname = 'T10001'; DELETE FROM ASN.IBMQREP_SUBS WHERE subname = 'T10001'; DELETE FROM ASN.IBMQREP_TRG_COLS WHERE subname = 'T10002' AND recvq = 'CAPB.TO.APPA.RECVQ'; DELETE FROM ASN.IBMQREP_TARGETS WHERE subname = 'T10002' AND recvq = 'CAPB.TO.APPA.RECVQ'; A third method uses the DROP SUBGROUP command, as shown: SET SUBGROUP "TABT2"; SET SERVER MULTIDIR TO DB "DB2A"; SET SERVER MULTIDIR TO DB "DB2B"; SET MULTIDIR SCHEMA "DB2A".ASN ; DROP SUBGROUP; With this command, we just need to specify the Q subscription group name (SUBGROUP). 
The preceding ASNCLP command generates the following SQL: -- CONNECT TO DB2A USER XXXX using XXXX; DELETE FROM ASN.IBMQREP_TRG_COLS WHERE subname = 'T10002' AND recvq = 'CAPB.TO.APPA.RECVQ'; DELETE FROM ASN.IBMQREP_TARGETS WHERE subname = 'T10002' AND recvq = 'CAPB.TO.APPA.RECVQ'; DELETE FROM ASN.IBMQREP_SRC_COLS WHERE subname = 'T10001'; DELETE FROM ASN.IBMQREP_SUBS WHERE subname = 'T10001'; -- CONNECT TO DB2B USER XXXX using XXXX; DELETE FROM ASN.IBMQREP_SRC_COLS WHERE subname = 'T10002'; DELETE FROM ASN.IBMQREP_SUBS WHERE subname = 'T10002'; DELETE FROM ASN.IBMQREP_TRG_COLS WHERE subname = 'T10001' AND recvq = 'CAPA.TO.APPB.RECVQ'; DELETE FROM ASN.IBMQREP_TARGETS WHERE subname = 'T10001' AND recvq = 'CAPA.TO.APPB.RECVQ'; In a peer-to-peer three-way scenario, we would add a third SET SERVER MULTIDIR TO DB line pointing to the third server. If we use the second or third method, then we do not need to know the Q subscription names, just the table name in the second method and the Q subscription group name in the third method. Altering a Q subscription We can only alter Q subscriptions which are inactive. The following query shows the state of all Q subscriptions: db2 "SELECT SUBSTR(subname,1,10) AS subname, state FROM asn.ibmqrep_subs" SUBNAME STATE ---------- ----- DEPT0001 I At the time of writing, if we try and alter an active Q subscription, we will get the following error when we run the ASNCLP commands: ErrorReport : ASN2003I The action "Alter Subscription" started at "Friday, 22 January 2010 12:53:16 o'clock GMT". Q subscription name: "DEPT0001". Q Capture server: "DB2A". Q Capture schema: "ASN". Q Apply server: "DB2B". Q Apply schema: "ASN". The source table is "DB2ADMIN.DEPT". The target table or stored procedure is "DB2ADMIN.DEPT". ASN0999E "The attribute "erroraction" cannot be updated." : "The Subscription cannot be updated because it is in active state" : Error condition "*", error code(s): "*", "*", "*". This should be resolved in a future release. So now let's move on and look at the command to alter a Q subscription. To alter a Q subscription, we use the ALTER QSUB ASNCLP command. The parameters for the command depend on whether we are running unidirectional or multi-directional replication. 
We can change attributes for both the source and target tables, but what we can change depends on the type of replication (unidirectional, bidirectional, or peer-to-peer), as shown in the following table:

Parameter                                   | Uni | Bi | P2P
Source table:
  ALL CHANGED ROWS [N | Y]                  |  Y  | Y  |
  HAS LOAD PHASE [N | I | E]                |  Y  | Y  | Y
Target table:
  CONFLICT RULE [K | C | A]                 |     | Y  |
  CONFLICT ACTION [I | F | D | S | Q]       |     | Y  |
  ERROR ACTION [Q | D | S]                  |  Y  | Y  | Y
  LOAD TYPE [0 | 2 | 3 | 4 | 104 | 5 | 105] |  Y  | Y  | Y
  OKSQLSTATES ["sqlstates"]                 |  Y  | Y  | Y

For unidirectional replication, the format of the command is:

ALTER QSUB <subname> REPLQMAP <mapname>
USING REPLQMAP <mapname>
DESC <description>
MANAGE TARGET CCD [CREATE SQL REGISTRATION|DROP SQL REGISTRATION|ALTER SQL REGISTRATION FOR Q REPLICATION]
USING OPTIONS [other-opt-clause|add-cols-clause]

other-opt-clause:
SEARCH CONDITION "<search_condition>"
ALL CHANGED ROWS [N|Y]
HAS LOAD PHASE [N|I|E]
SUPPRESS DELETES [N|Y]
CONFLICT ACTION [I|F|D|S|Q]
ERROR ACTION [S|D|Q]
OKSQLSTATES "<sqlstates>"
LOAD TYPE [0|1|2|3|4|104|5|105]

add-cols-clause:
ADD COLS (<trgcolname1> <srccolname1>,<trgcolname2> <srccolname2>)

An example of altering a Q subscription to add a search condition is:

ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
SET CAPTURE SCHEMA SOURCE ASN;
SET APPLY SCHEMA ASN;
SET SERVER CAPTURE TO DB DB2A;
SET SERVER TARGET TO DB DB2B;
ALTER QSUB tab1 REPLQMAP rqma2b USING OPTIONS SEARCH CONDITION "WHERE :c1 > 1000";

In multi-directional replication, the format of the command is:

ALTER QSUB SUBTYPE B
FROM NODE <svn.schema> SOURCE [src-clause] TARGET [trg-clause]
FROM NODE <svn.schema> SOURCE [src-clause] TARGET [trg-clause]

src-clause:
ALL CHANGED ROWS [N/Y]
HAS LOAD PHASE [N/I/E]

trg-clause:
CONFLICT RULE [K/C/A]
CONFLICT ACTION [I/F/D/S/Q]
ERROR ACTION [Q/D/S]
LOAD TYPE [0/2/3]
OKSQLSTATES <"sqlstates">

If we are altering a Q subscription in a multi-directional environment, then we can use the SET REFERENCE TABLE construct. We need to specify the SUBTYPE parameter as follows:

Bidirectional replication: ALTER QSUB SUBTYPE B
Peer-to-peer replication: ALTER QSUB SUBTYPE P

Let's look at a bidirectional replication example, where we want to change the ERROR ACTION to D for a Q subscription where the source table name is db2admin.dept. The content file (SYSA_cont_alter02.txt) will contain:

SET SUBGROUP "TABT2";
SET SERVER MULTIDIR TO DB "DB2A";
SET SERVER MULTIDIR TO DB "DB2B";
SET REFERENCE TABLE USING SCHEMA "DB2A".ASN USES TABLE db2admin.dept;
ALTER QSUB SUBTYPE B
FROM NODE DB2A.ASN SOURCE TARGET ERROR ACTION D
FROM NODE DB2B.ASN SOURCE TARGET ERROR ACTION D;

We have to specify the SOURCE keyword even though we are only changing the target attributes.

Starting a Q subscription

An example of the ASNCLP command START QSUB to start a Q subscription can be found in the SYSA_qsub_start_db2ac.asnclp file. We just have to plug in the Q subscription name (T10002 in our example).

ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
SET CAPTURE SCHEMA SOURCE ASN;
SET APPLY SCHEMA ASN;
SET SERVER CAPTURE TO DB DB2A;
SET SERVER TARGET TO DB DB2C;
START QSUB SUBNAME T10002;

Run the file as:

asnclp -f SYSA_qsub_start_db2ac.asnclp

We cannot put two START QSUB statements in the same file (as shown), even if they have their own section.
So, we cannot code: ASNCLP SESSION SET TO Q REPLICATION; SET RUN SCRIPT NOW STOP ON SQL ERROR ON; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2D; SET CAPTURE SCHEMA SOURCE ASN; SET APPLY SCHEMA ASN; START QSUB SUBNAME T10003; SET SERVER CAPTURE TO DB DB2A; SET SERVER TARGET TO DB DB2C; START QSUB SUBNAME T10002; Sending a signal using ASNCLP For signals such as CAPSTART, CAPSTOP, and LOADDONE to be picked up, Q Capture needs to be running. Note that Q Capture does not have to be up for the signals to be issued, just picked up. As they are written to the DB2 log, Q Capture will see them when it reads the log and will action them in the order they were received. Summary In this article we took a look at how we can stop or drop or alter a Q subscription using ASNCLP commands and how we can issue a CAPSTART command. Further resources on this subject: Lotus Notes Domino 8: Upgrader's Guide [Book] Q Replication Components in IBM Replication Server [Article] IBM WebSphere MQ commands [Article] WebSphere MQ Sample Programs [Article] MQ Listener, Channel and Queue Management [Article]

RadRails Views

Packt
21 Oct 2009
6 min read
Opening the RadRails Views

Some of the views that we will go through in this article are available as part of the Rails default perspective, which means you don't need to do anything special to open them; they will appear as tabbed views in a pane at the bottom of your workbench. Just look for the tab name of the view you want to see and click on it to make it visible.

However, some views are not opened by default; perhaps you closed one accidentally, or you switched to the Debug perspective and want to display some of the RadRails views there. When you need to open a view whose tab is not displayed, go to the Window menu and select the Show View option. If you are in the Rails perspective, all the available views will be displayed in that menu. When opening this menu from a different perspective, you will not see the RadRails views there, but you can select Other.... If this is the case, in the Show View dialog most of the views will appear under the Ruby category, except for the Generators, Rails API, and Rake Tasks views, which are located under Rails.

Documentation Views

As happens with any modern programming language, Ruby has an extensive API. There are lots of libraries and classes, and even though Ruby is an intuitive language with a neat, consistent API, we often need to read the documentation.

As you probably know, Ruby provides a standard documentation format called RDoc, which uses the comments in the source code to generate documentation. We can access this RDoc documentation in different ways, mainly in HTML format through a browser or by using the command-line tool RI. The latter produces plain-text output directly at the command shell, in a similar way to the man command on a UNIX system.

RadRails doesn't add any new functionality to the built-in documentation, but it provides some convenient views so we can explore it without losing the context of our project's source.

Ruby Interactive (RI) View

This view provides a fast and comfortable way of browsing the local documentation, in the same way as you would use RI from the command line. You can look for either a class or a method name. Just start typing in the input box at the top left corner of the view and the list below will display the matching entries. That's a nice improvement over the command-line interface, since you can see the results as you type instead of having to run a complete search every time.

If you know the name of both the class and the method you are looking for, you can write them using the hash (pound) sign as a separator. For example, to get the documentation for the sum method of the class Enumerable you would write Enumerable#sum. The documentation is displayed in the right pane, with convenient highlighting of the referenced methods and classes.

Even if the search results of RI don't look very attractive compared to the output of the HTML-based documentation views, RI has the advantage of searching locally on your computer, so you can use it even when working off-line.
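To make the Class#method convention more concrete, here is a minimal, hypothetical snippet (the Greeter class and its greet method are invented purely for illustration): RDoc builds its documentation from comment blocks like the one below, and that text is what the RI view, or the ri command-line tool, displays when you look up Greeter#greet.

# Hypothetical class used only to show how RDoc reads comments.
class Greeter
  # Returns a greeting for +name+.
  #
  #   Greeter.new.greet("Ruby")   #=> "Hello, Ruby!"
  def greet(name)
    "Hello, #{name}!"
  end
end

From a shell, once ri data has been generated for the file (for example with rdoc --ri greeter.rb), running ri Greeter#greet would show the same comment text that the RadRails RI view displays.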
Ruby Core, Ruby Standard Library, and Rails API

There are three more views related to documentation in RadRails: Ruby Core API, Ruby Standard Library API, and Rails API. Unlike the RI view, these look for the information over the Internet, so you will not be able to use them unless you are on-line. On the other hand, the information is displayed in a more attractive way than with RI, and it provides links to the source code of the consulted methods, so if the documentation is not enough, you can always take a look at the inner details of the implementation.

The Ruby Core API view displays the documentation of the classes included in Ruby's core. These are the classes you can use directly without a previous require statement. The documentation rendered is that at http://www.ruby-doc.org/core/. You are probably familiar with this type of layout, since it's the default RDoc output. The upper pane displays the navigation links, and the lower pane shows the detail of the documentation. The navigation is divided into three frames: the one to the left shows the files the source code is in, the one in the middle shows the Classes and Modules, and in the third one you can find all the methods in the API.

The Ruby Standard Library API is composed of all the classes and modules that are not part of Ruby's core, but are typically distributed as part of the Ruby installation. You can use these classes directly after a require statement in your code. The Ruby Standard Library API view displays the information from http://www.ruby-doc.org/stdlib. In this case, the navigation is the same as in Ruby Core, but with an additional area to the left in which you can see all the available packages (the ones you would require in order to use the classes in your code). When you select a package link, you will see the files, classes, and methods for that single package.

The last of the documentation views displays information about the Rails API. It includes the documentation of ActiveRecord, ActionPack, ActiveSupport, and the rest of the Rails components. The information is obtained from http://api.rubyonrails.org. In this case the layout is slightly different, because the information about the files, classes, and methods is displayed to the left instead of at the top of the view. Apart from that, the behavior is identical to that of the Ruby Core API view.

Since some of the API descriptions are fairly long, it can be convenient to maximize the documentation views when you are using them. Remember that you can maximize any of the views by double-clicking its tab or by using the maximize icon on the view's toolbar. Double-clicking again will restore the view to its original size and position.