
How-To Tutorials - CMS and E-Commerce

830 Articles

Oracle Enterprise Manager Grid Control

Packt
28 Jun 2010
9 min read
Several factors account for the diversity of technologies found in a typical data center:

Evolution of IT systems: As architectural patterns moved from monolithic to distributed systems, not all IT systems were moved to the newest patterns. Some new systems were built with new technologies and patterns, whereas existing systems that were performing well enough continued on earlier technologies.

Best-of-breed approach: With multi-tiered architectures, enterprises had the choice of building each tier using the best-of-breed technology for that tier. For example, one system could be built using a J2EE container from vendor A, but a database from vendor B.

Avoiding single vendors and technologies: Enterprises wanted to avoid dependence on any single vendor or technology, which led to systems being built using different technologies. For example, an order-booking system built using .NET technologies on Windows servers, but an order-shipment system built using the J2EE platform on Linux servers.

Acquisitions and mergers: Through acquisitions and mergers, enterprises have inherited IT systems built using different technologies. Frequently, new systems were added to integrate the systems of two enterprises, but these were totally different from the existing systems. For example, using BPEL Process Manager to integrate a CRM system with a transportation management system.

We see that each factor for diversity in the data center has some business or strategic value. At the same time, such diversity makes management of the data center more complex. To manage such data centers, we need a product like Oracle's Enterprise Manager Grid Control that can provide a unified, centralized management solution for this wide array of products. In any given data center, there are many repetitive operations that need to be executed on multiple servers (such as applying security patches to all Oracle Databases).
As data centers move away from high-end servers to a grid of inexpensive servers, the number of IT resources in the data center increases, and so does the cost of executing repetitive operations on the grid. Enterprise Manager Grid Control reduces the cost of any grid by automating repetitive operations so that they can be executed simultaneously on multiple servers. It works as a force multiplier, executing the same operation on multiple servers at the cost of one operation. As organizations put more emphasis on business and IT alignment, a view of IT resources overlaid with business processes and applications is required. Enterprise Manager Grid Control provides such a view and improves the visibility of IT and business processes in a given data center. Using it, administrators can see IT issues in the context of business processes, and understand how business processes are affected by IT performance.

In this article, we will get to know Oracle's Enterprise Manager Grid Control by covering the following aspects:

Key features of Enterprise Manager Grid Control: comprehensive view of the data center, unmanned monitoring, historical data analysis, configuration management, managing multiple entities as one, service level management, scheduling, automating provisioning, information publishing, synthetic transactions, and manage from anywhere
Enterprise Manager product family
Range of products managed by Enterprise Manager: range of products and EM extensibility
Enterprise Manager Grid Control architecture: multi-tier architecture, major components, and high availability
Summary of learning

Key features of Enterprise Manager Grid Control

Typical applications in today's world are built with a multi-tiered architecture; to manage such applications, a system administrator has to navigate through the multiple management tools and consoles that come along with each product.
Some of these tools have a browser interface, some a thick-client interface, and some only a command-line interface. Navigating through multiple management tools often involves performing some actions from a browser, running scripts, or launching a thick client from the command line. For example, to find bottlenecks in a J2EE application in a production environment, an administrator has to navigate through the management console for the HTTP server, the management console for the J2EE container, and the management console for the database. Enterprise Manager Grid Control is a systems management product for monitoring and managing all of the products in the data center. For the scenario described above, Enterprise Manager provides a common management interface for the HTTP server, the J2EE server, and the database. Enterprise Manager provides this unified solution for all products in a data center. In addition to basic monitoring, it provides a unified interface for many other administration tasks, such as patching, configuration compliance, and backup and recovery. Some key features of Enterprise Manager are explained here.

Comprehensive view of the data center

Enterprise Manager provides a comprehensive view of the data center, where an administrator can see all of the applications, servers, databases, network devices, storage devices, and so on, along with performance and configuration data. As the number of such resources is very high, Enterprise Manager highlights the resources that need immediate attention or that may need attention in the near future; for example, a critical security patch that needs to be applied to some Fusion Middleware servers, or a server running at 90% CPU utilization. The following figure shows one such view of a data center, where users can see all monitored entities: those that are up, those that are down, those with performance alerts, those with configuration violations, and so on.
The user can drill down from this top-level view to fine-grained views. The data in the top-level view and the fine-grained drill-down views can be broadly summarized in the following categories:

Performance data: Data that shows how an IT resource is performing, including the current status, overall availability over a period of time, and other performance indicators specific to the resource, such as the average response time for a J2EE server. Any violation of acceptable performance thresholds is highlighted in this view.

Configuration data: The configuration parameters or configuration files captured from an IT resource. Besides the current configuration, changes in configuration are also tracked and available from Enterprise Manager, as is any violation of configuration conformance. For example, if a data center policy mandates that only port 80 should be open on all servers, Enterprise Manager captures any violation of that policy.

Status of scheduled operations: In any data center there are scheduled operations; these could be system administration tasks, such as taking a backup of a database server, or batch processes that move data across systems, such as moving orders from fulfillment to shipping. Enterprise Manager provides a consolidated view of the status of all such scheduled operations.

Inventory: Enterprise Manager provides a listing of all hardware and software resources, with details such as version numbers. All of these resources are also categorized into different buckets; for example, Oracle Application Server, WebLogic Application Server, and WebSphere Application Server are all categorized in the middleware bucket. This categorization helps the user find resources of the same or similar type. Enterprise Manager also captures the finer details of software resources, such as the patches applied.
The following figure shows one such view, where the user can see all middleware entities, such as Oracle WebLogic Server, IBM WebSphere Application Server, and Oracle Application Server.

Unmanned monitoring

Enterprise Manager monitors IT resources around the clock, gathering all performance indicators at a fixed interval. Whenever a performance indicator goes beyond the defined acceptable limit, Enterprise Manager records that occurrence. For example, if the acceptable limit of CPU utilization for a server is 70%, then whenever CPU utilization of the server goes above 70%, that occurrence is recorded. Enterprise Manager can also send notification of any such occurrence through common notification mechanisms such as email, pager, or SNMP trap.

Historical data analysis

All of the performance indicators captured by Enterprise Manager are saved in its repository. Enterprise Manager provides useful views of this data with which the system administrator can analyze it over a period of time. Besides the fine-grained data that is collected at a fixed interval, it also provides coarse views by rolling up the data every hour and every 24 hours.

Configuration management

Enterprise Manager gathers configuration data for IT resources at regular intervals and checks for any configuration compliance violation. Any such violation is captured and can be sent out as a notification. Enterprise Manager comes with many out-of-the-box configuration compliance rules that represent best practices; in addition, system administrators can configure their own rules. All of the configuration data is also saved in the Enterprise Manager repository. Using this data, the system administrator can compare the configuration of two similar IT resources, or compare the configuration of the same IT resource at two different points in time.
The system administrator can also see the configuration change history.

Managing multiple entities as one

Most recent applications are built with a multi-tiered architecture, and each tier may run on different IT resources. For example, an order-booking application can have all of its presentation and business logic running on a J2EE server, all business data persisted in a database, all authentication and authorization performed through an LDAP server, and all of the traffic to the application routed through an HTTP server. To monitor such applications, all of the underlying resources need to be monitored. Enterprise Manager provides support for grouping such related IT resources. Using this support, the system administrator can monitor all related resources as one entity, and all performance indicators for all related entities can be monitored from one interface.

Service level management

Enterprise Manager provides the necessary constructs and interfaces for managing service level agreements that are based on the performance of IT resources. Using these constructs, the user can define indicators to measure service levels, as well as expected service levels. For example, a service representing a web application can have the average JSP response time as a service indicator, and the expected service level for this service might be to keep that indicator below three seconds for 90% of the time during business hours. Enterprise Manager keeps track of all such indicators and violations in the context of a service, and at any time the user can see the status of such service level agreements over a defined time period.
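The service-level calculation described above (a response-time indicator that must stay under a threshold for an agreed fraction of samples) can be sketched in a few lines. This is an illustrative, language-neutral Python sketch, not Enterprise Manager code; the function name, threshold, and sample values are made up for the example:

```python
# Illustrative service-level check: given response-time samples collected
# during business hours, compute the fraction that met a threshold and
# compare it against the agreed service level. Hypothetical names/values.

def sla_compliance(samples, threshold):
    """Return the fraction of samples at or below the threshold."""
    if not samples:
        return 1.0  # no samples means no recorded violations
    met = sum(1 for s in samples if s <= threshold)
    return met / len(samples)

# Example: JSP response times in seconds, a 3-second threshold, and an
# expected compliance level of 90%.
response_times = [1.2, 2.8, 3.5, 0.9, 2.1, 4.2, 1.7, 2.9, 2.2, 1.1]
compliance = sla_compliance(response_times, threshold=3.0)
print(f"{compliance:.0%} of requests met the SLA")  # 80% of requests met the SLA
print("SLA met" if compliance >= 0.90 else "SLA violated")  # SLA violated
```

With two of the ten samples over the threshold, compliance is 80%, below the 90% target, so this period would be recorded as a violation.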


TYPO3 for Connecting External APIs: Flickr and YouTube

Packt
01 Feb 2010
10 min read
Getting recent Flickr photos

The Flickr API is very powerful and gives access to just about everything a user can do manually. You can write scripts to automatically download the latest pictures from a photostream, download photos or videos tagged with a certain keyword, or post comments on photos. In this recipe, we will use the phpFlickr library to perform some basic listing functions for photos in Flickr.

Getting ready

Before you start, you should sign up for a free Flickr account, or use an existing one. Once you have the account, you need to sign up for an API key. Go to Your Account, and select the Extending Flickr tab. After filling in a short form, you will be given two keys: an API key and a secret key. We will use these in all Flickr operations. We will not go through the steps required for integration into extensions, and will leave this exercise to the reader. The code we present can be used in both frontend plugins and backend modules. As previously mentioned, we will be using the phpFlickr library. Go to http://phpflickr.com/ to download the latest version of the library and read the complete documentation.

How to do it...

Include phpFlickr, and instantiate the object (modify the path to the library, and replace api-key with your key):

require_once("phpFlickr.php");
$flickrService = new phpFlickr('api-key');

Get a list of photos for a specific user:

$photos = $flickrService->people_getPublicPhotos('7542705@N08');

If the operation succeeds, $photos will contain an array of 100 (by default) photos from the user. You could loop over the array, and print a thumbnail with a link to the full image:

foreach ($photos['photos']['photo'] as $photo) {
    $imgURL = $flickrService->buildPhotoURL($photo, 'thumbnail');
    print '<a href="http://www.flickr.com/photos/' . $photo['owner'] . '/' . $photo['id'] . '">' .
          '<img src="' . $imgURL . '" /></a><br />';
}

How it works...

The Flickr API is exposed as a set of REST services that we can issue calls to.
The tough work of signing the requests and parsing the results is encapsulated by phpFlickr, so we don't have to worry about it. Our job is to gather the parameters, issue the request, and process the response. In the example above, we got a list of public photos from the user 7542705@N08. You may not know the user ID of the person you want to get photos for, but the Flickr API offers several methods for finding the ID:

$userID = $flickrService->people_findByEmail($email);
$userID = $flickrService->people_findByUsername($username);

If you have the user ID, but want to get more information about the user, you can do so with the following calls:

// Get more info about the user:
$flickrService->people_getInfo($userID);
// Find which public groups the user belongs to:
$flickrService->people_getPublicGroups($userID);
// Get the user's public photos:
$flickrService->people_getPublicPhotos($userID);

We use the people_getPublicPhotos method to get the user's photostream. The returned array has the following structure:

Array
(
    [photos] => Array
        (
            [page] => 1
            [pages] => 8
            [perpage] => 100
            [total] => 770
            [photo] => Array
                (
                    [0] => Array
                        (
                            [id] => 3960430648
                            [owner] => 7542705@N08
                            [secret] => 9c4087aae3
                            [server] => 3423
                            [farm] => 4
                            [title] => One Cold Morning
                            [ispublic] => 1
                            [isfriend] => 0
                            [isfamily] => 0
                        )
                    […]
                )
        )
)

We loop over the $photos['photos']['photo'] array, and for each image we build a URL for the thumbnail using the buildPhotoURL method, along with a link to the image page on Flickr.

There's more...

There are lots of other things we can do, but we will only cover a few basic operations.

Error reporting and debugging

Occasionally, you might encounter output you do not expect. It's possible that the Flickr API returned an error, but by default, it is not shown to the user.
You need to call the following functions to get more information about the error:

$errorCode = $flickrService->getErrorCode();
$errorMessage = $flickrService->getErrorMsg();

Downloading a list of recent photos

You can get a list of the most recent photos uploaded to Flickr using the following call:

$recentPhotos = $flickrService->photos_getRecent();

See also

Uploading files to Flickr
Uploading DAM files to Flickr

Uploading files to Flickr

In this recipe, we will take a look at how to upload files to Flickr, as well as how to access other authenticated operations. Although many operations don't require authentication, any interactive functions do. Once you have successfully authenticated with Flickr, you can upload files, leave comments, and make other changes to the data stored in Flickr that you wouldn't be allowed to make without authentication.

Getting ready

If you followed the previous example, you should have everything ready to go. We'll assume you have the $flickrService object instantiated and ready.

How to do it...

Before calling any operations that require elevated permissions, the service needs to be authenticated. Add the following code to perform the authentication:

$frob = t3lib_div::_GET('frob');
if (empty($frob)) {
    $flickrService->auth('write', false);
} else {
    $flickrService->auth_getToken($frob);
}

Call the function to upload the file:

$flickrService->sync_upload($filePath);

Once the file is uploaded, it will appear in the user's photostream.

How it works...

Flickr applications can access any user's data if the user authorizes them. For security reasons, users are redirected to Yahoo! to log into their account and confirm access for your application. Once your application is authorized by a user, a token is stored in Flickr, and can be retrieved at any time. $flickrService->auth() requests permissions for the application. If the application is not yet authorized by the user, he/she will be redirected to Flickr.
After granting the requested permissions, Flickr redirects the user to the URL defined in the API key settings. The redirect URL will contain a parameter named frob. If it is present, $flickrService->auth_getToken($frob); is executed to get the token and store it in the session. Future calls within the session lifetime will not require further calls to Flickr. If the session has expired, the token will be requested from the Flickr service, transparently to the end user.

There's more...

Successful authentication allows you to access operations that you would not be able to access otherwise.

Gaining permissions

There are different levels of permissions that the service can request. You should not request more permissions than your application will use.

$flickrService->auth('read', false);    Permission to read users' files, sets, collections, groups, and more.
$flickrService->auth('write', false);   Permission to write (upload, create new, and so on).
$flickrService->auth('delete', false);  Permission to delete files, groups, associations, and so on.

Choosing between synchronous and asynchronous upload

There are two functions that perform a file upload:

$flickrService->sync_upload($filePath);
$flickrService->async_upload($filePath);

The first function continues execution only after the file has been accepted and processed by Flickr. The second returns after the file has been submitted, but not necessarily processed. Why would you use the asynchronous method? The Flickr service may have a large queue of uploaded files waiting to be processed, and your application might time out while it waits. If you don't need to access the uploaded file right after it is uploaded, you should use the asynchronous method.
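Looking back at the listing recipe, the URL construction that phpFlickr's buildPhotoURL performs can be made concrete: it assembles a static image URL from the farm, server, id, and secret fields of the array shown earlier. The following is a hedged Python sketch of that scheme, not phpFlickr code; the size-suffix mapping follows Flickr's classic documented URL format of the time, so treat it as illustrative rather than a drop-in replacement:

```python
# Illustrative sketch of the static photo URL scheme behind buildPhotoURL,
# using the farm/server/id/secret fields from a people_getPublicPhotos
# record. Not phpFlickr code; the suffix table is an assumption based on
# Flickr's classic URL format.

SIZE_SUFFIX = {
    "square": "_s",
    "thumbnail": "_t",
    "small": "_m",
    "medium": "",      # medium size has no suffix in the classic scheme
    "large": "_b",
}

def build_photo_url(photo, size="medium"):
    """Build a static Flickr image URL from a photo record."""
    suffix = SIZE_SUFFIX[size]
    return ("http://farm{farm}.static.flickr.com/{server}/"
            "{id}_{secret}{suffix}.jpg").format(suffix=suffix, **photo)

photo = {"id": "3960430648", "owner": "7542705@N08",
         "secret": "9c4087aae3", "server": "3423", "farm": "4"}
print(build_photo_url(photo, "thumbnail"))
# http://farm4.static.flickr.com/3423/3960430648_9c4087aae3_t.jpg
```

The thumbnail URL printed here corresponds to the sample record from the array structure shown in the first recipe.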
See also

Getting recent Flickr photos
Uploading DAM files to Flickr

Uploading DAM files to Flickr

In this recipe, we will make use of our knowledge of the Flickr API and the phpFlickr interface to build a Flickr upload service into DAM. We will create a new action class, which will add our functionality to the DAM file list and context menus.

Getting ready

For simplicity, we will skip the process of creating the extension. You can download the extension dam_flickr_upload and view the source code. We will examine it in more detail in the How it works... section.

How to do it...

Sign up for Flickr, and request an API key if you haven't already done so. After you receive your key, click Edit key details. Fill in the application title and description as you see fit. Under the callback URL, enter the web path to the dam_flickr_upload/mod1/index.php file. For example, if your domain is http://domain.com/, TYPO3 is installed in the root of the domain, and you installed dam_flickr_upload in the default local location under typo3conf, then enter http://domain.com/typo3conf/ext/dam_flickr_upload/mod1/index.php. You're likely to experience trouble with the callback URL if you're working on a local installation with no public URL.

Install dam_flickr_upload. In the Extension Manager, under the extension settings, enter the Flickr API key and the secret key you have received. Go to the Media | File module, and click on the control button next to a file. Alternatively, select Send to Flickr in the context menu, which appears if you click on the file icon, as seen in the following screenshot. A new window will open, and redirect you to Flickr, asking you to authorize the application to access your account. Confirm the authorization by clicking the OK, I'LL AUTHORIZE IT button. The file will be uploaded, and placed into your photostream on Flickr. Subsequent uploads will no longer need explicit authorization.
A window will come up, and disappear after the file has been successfully uploaded.

How it works...

Let's examine in detail how the extension works. First, examine the file tree. The root contains the now familiar ext_tables.php and ext_conf_template.txt files. The Res directory contains icons used in the DAM. The Lib directory contains the phpFlickr library. The Mod1 directory contains the module for uploading.

ext_conf_template.txt

This file contains the global extension configuration variables. The two variables defined in this file are the Flickr API key and the Flickr secret key. Both are required to upload files.

ext_tables.php

As mentioned previously, ext_tables.php is a configuration file that is loaded when the TYPO3 framework is initializing.

tx_dam::register_action(
    'tx_dam_action_flickrUpload',
    'EXT:dam_flickr_upload/class.tx_dam_flickr_upload_action.php:&tx_dam_flickr_upload_action_flickrUpload'
);

This line registers a new action in DAM. Actions are provided by classes extending the tx_dam_actionbase class, and define operations that can be performed on files and directories. Examples of actions include view, cut, copy, rename, delete, and more. The second parameter of the function defines where the action class is located.

$GLOBALS['TYPO3_CONF_VARS']['EXTCONF']['dam_flickr_upload']['allowedExtensions'] =
    array('avi', 'wmv', 'mov', 'mpg', 'mpeg', '3gp', 'jpg', 'jpeg', 'tiff', 'gif', 'png');

We define an array of file types that can be uploaded to Flickr. This is not hardcoded in the extension, but stored in ext_tables.php, so that it can be overwritten by extensions wanting to limit or expand the functionality to other file types.
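The allowedExtensions array above is a simple allowlist. The kind of check an upload action performs against it can be sketched language-neutrally as follows (an illustrative Python sketch; the function name and logic are ours, not code from the dam_flickr_upload extension):

```python
# Illustrative allowlist check for upload candidates, mirroring the
# allowedExtensions configuration shown above. A sketch, not code from
# the dam_flickr_upload extension.

ALLOWED_EXTENSIONS = {"avi", "wmv", "mov", "mpg", "mpeg",
                      "3gp", "jpg", "jpeg", "tiff", "gif", "png"}

def is_uploadable(filename):
    """Return True if the file's extension is on the allowlist."""
    _, sep, ext = filename.rpartition(".")
    if not sep:
        return False  # no extension at all
    return ext.lower() in ALLOWED_EXTENSIONS

print(is_uploadable("holiday.JPG"))  # True (case-insensitive match)
print(is_uploadable("notes.txt"))    # False
print(is_uploadable("archive"))      # False (no extension)
```

Because the real list lives in ext_tables.php rather than in the extension code, another extension can override it, which is exactly the design choice the paragraph above describes.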


Graphic Design - Working with Clip Art and Making Your Own

Packt
09 Nov 2012
9 min read
Making symbols from Character Palette into clip art, and where to find clip art for iWork

Clip art is the collective name for predrawn images, pictures, and symbols that can be quickly added to documents. In standalone products, a separate clip art folder is often included in the package. iWork doesn't have one, and this has been the subject of numerous complaints on Internet forums. However, even though there is no clip art folder as such in iWork, there are hundreds of clip art images that come as part of our Mac computers. Unlike MS Office or OpenOffice, which are separate universes on your machine, iWork (even though we buy it separately) is an integral part of the Mac. It complements and works with applications that are already there, such as iLife (iPhoto), Mail, Preview, Address Book, Dictionaries, and Spotlight.

Getting ready

So, where is the clip art for iWork? First, elements of the Pages templates can be used as clip art; just copy and paste them. Look at the wrought iron fence post from the Collector Newsletter template, used there as a column divider. Select and copy-paste it into your project, set the image placement to Floating, and move it in between the columns or text boxes. The Collector Newsletter template also has a paper clip, a price tag, and several images of slightly rumpled and yellowed sheets of paper that can be used as backgrounds. Images with little grey houses and house keys from the Real Estate Newsletter template are good for any project related to property. The index card image from the back page of the Green Grocery Newsletter template can be used for designing a cooking recipe, and the background image of a yellowing piece of paper from the Musical Concert poster would make a good background for an article on history. Clip art in many templates is editable and easy to resize or modify. Some of the images are locked or grouped.
Under the Arrange menu, select the Unlock and Ungroup options to use those images as separate graphic elements. Many of the clip art images are easy to recreate with iWork tools. Bear in mind, however, that some of the images have low resolution and should only be used at small dimensions. You will find various clip art images in the following locations:

Image Bullets: A dozen or so attractive clip art images live under the Bullets drop-down menu. Open the Text Inspector, click on the List tab, and choose Image Bullets from the Bullets & Numbering drop-down menu. There, you will find checkboxes and other images. Silver and gold pearls look very attractive, but any of your own original images can also be made into bullets: in Bullets, choose Custom Image | Choose and import your own image. Note that images with shadows may distort the surrounding text; use them with care, or avoid applying shadows.

Macintosh HD | Library | Desktop Pictures: Double-click on the hard disk icon on your desktop and go to Library | Desktop Pictures. There are several dozen images, including the dew drop and the ladybug. These are large files, good enough for use as background images. They are not, strictly speaking, clip art, but are worth keeping in mind.

Home | Pictures | iChat Icons (or HD | Library | Application Support | Apple | iChat Icons): The Home folder icon (a little house) is available in the side panel of any folder on your Mac. This is where documents associated with your account are stored on your computer. It has a Pictures folder with a dozen very small images sitting in the folder called iChat Icons. National flags are stored here as button-like images. The apple image can be found in the Fruit folder. The gem icons, such as the ruby heart, from this folder look attractive as bullets.

HD | Library | User Pictures: You can find animals, flowers, nature, sports, and other clip art images in this folder.
These are small TIFF files that can be used as icons when a personal account is set up on a Mac, but of course, they can also be used as clip art. The Sports folder has a selection of balls, but not a cricket ball, even though cricket may have the biggest following in the world (Britain, South Africa, Australia, India, Pakistan, Bangladesh, Sri Lanka, and many Caribbean countries). But a free image of a cricket ball from Wikipedia/Wikimedia can easily be made into clip art. There may be several Libraries on your Mac. The main Library is on your hard drive; don't move or rename any folders here. Duplicate images from this folder and use the copies. Your personal library (created for each account on your machine) is in the Home folder. This may sound a bit confusing, but you don't have to wade through endless folders to find what you want; just use Spotlight to find relevant images on your computer, in the same way that you would use Google to search the Internet.

Character Palette has hundreds of very useful clip-art-like characters and symbols. You can find the Character Palette via Edit | Special Characters. Alternatively, open System Preferences | International | Input Menu, and check the Character Palette and Show input menu in menu bar boxes. Now, you will be able to open the Character Palette from the screen-top menu. Character Palette can also be accessed through the Font Panel: open it with the Command + T keyboard shortcut, click on the action wheel at the bottom of the panel, and choose Characters... to open the Character Palette. Images here range from the familiar Command symbol on the Mac keyboard to zodiac symbols, chess pieces and card icons, mathematical and musical signs, and various daily life shapes, including icons of telephones, pens and pencils, scissors, airplanes, and so on. And there are Greek letters that can be used in scientific papers (for instance, the letter ∏).
To import Character Palette symbols into an iWork document, just click and drag them into your project. The beauty of Character Palette characters is that they behave like letters: you can change the color and font size in the Format bar, and add shadows and other effects in the Graphics Inspector or via the Font Panel. To use Character Palette characters as clip art, we need to turn them into images in PDF, JPEG, or some other format.

How to do it...

Let's see how a character can be turned into a piece of clip art. This applies to both letters and symbols from the Character Palette. Open Character Palette | Symbols | Miscellaneous Symbols. In this folder, we have a selection of scissors that can be used to show, with a dotted line, where to cut out coupons or forms from brochures, flyers, posters, and other marketing material. Click on the scissors symbol with the snapped-off blade and drag it into an iWork document. Select the symbol in the same way as you would select a letter, and enlarge it substantially. To enlarge, click on the Font Size drop-down menu in the Format bar and select a bigger size, or use the shortcut Command + plus sign (hit the plus key several times). Next, turn the scissors into an image: make a screenshot (Command + Shift + 4), or use the Print dialog to make a PDF or a JPEG. You can crop the image in iPhoto or Preview before using it in iWork, or you can import it straight into your iWork project and remove the white background with the Alpha tool. If Alpha is not in your toolbar, you can find it under Format | Instant Alpha. Move the scissors onto the dotted line of your coupon. Now, the blade that is snapped in half appears to be cutting through the dotted line. Remember that you can rotate the clip art image to put the scissors either on the horizontal or on the vertical sides of the coupon. Use other scissors symbols from the Character Palette if they are more suitable for your project.
Store the "scissors" clip art in iPhoto or another folder for future use if you are likely to need it again.

There's more...

There are other easily accessible sources of clip art.

MS Office clip art is compatible

If you have kept your old copy of MS Office, nothing is simpler than copy-pasting or dragging-and-dropping clip art from the Office folder right into your iWork project. When using clip art, it's worth remembering that some predrawn images quickly become dated. For example, if you put a clip art image of an incandescent lamp in your marketing documents for electrical work, it may give the impression that you are not familiar with more modern and economical lighting technologies. Likewise, a clip art image of an old-fashioned computer with a CRT display on your promotional literature for computer services can send the wrong message, because modern machines use flat-screen displays.

Wikipedia/Wikimedia

Look on Wikipedia for free generic images. Search for articles about tools, domestic appliances, furniture, houses, and various other objects. Most articles have downloadable images with no copyright restrictions on re-use. They can easily be made into clip art. The image of a hammer from Wikipedia can be used for any article about DIY (do-it-yourself) projects.

Create your own clip art

Above all, it is fun to create your own clip art in iWork. For example, take a few snapshots with your digital camera or cell phone, put them in one of iWork's shapes, and get an original piece of clip art. It could be a nice way to involve children in your project.

Packt
01 Dec 2010
10 min read

User Interface in Production

Liferay User Interface Development: Develop a powerful and rich user interface with Liferay Portal 6.0.

This section introduces how to add workflow capabilities to custom assets in plugins. Knowledge Base articles will be used as an example of custom assets. In brief, the Knowledge Base plugin enables companies to consolidate and manage the most accurate information in one place. For example, a user manual can be kept up to date, and users are able to rate, comment on, and apply workflow to Knowledge Base articles.

Preparing a plugin—Knowledge Base

First of all, let's prepare a plugin with workflow capabilities, called Knowledge Base. Note that the Knowledge Base plugin is used here as an example only; you can use your own plugin as well.

What's Knowledge Base?

The Knowledge Base plugin allows authoring articles and organizing them in a hierarchy of navigable categories. It leverages Web Content articles, structures, and templates; allows rating and commenting on articles; supports a hierarchy of categories; allows adding tags to articles; exports articles to PDF and other formats; supports workflow; allows adding custom attributes (called custom fields); supports indexing and advanced search; allows using the rule engine; and so on. Most importantly, the Knowledge Base plugin supports importing DocBook, a semantic markup language for technical documentation.
DocBook enables its users to create document content in a presentation-neutral form that captures the logical structure of the content; that content can then be published in a variety of formats, such as HTML, XHTML, EPUB, and PDF, without requiring users to make any changes to the source. Refer to http://www.docbook.org/.

In general, the Knowledge Base plugin provides four portlets: Knowledge Base Admin (managing Knowledge Base articles and templates), Knowledge Base Aggregator (publishing Knowledge Base articles), Knowledge Base Display (displaying Knowledge Base articles), and Knowledge Base Search (the ability to search Knowledge Base articles).

Structure

The Knowledge Base plugin has the following folder structure under the folder $PLUGIN_SDK_HOME/knowledge-base-portlet:

admin: View JSP files for portlet Admin
aggregator: View JSP files for portlet Aggregator
display: View JSP files for portlet Display
icons: Icon image files
js: JavaScript files
META-INF: Contains context.xml
search: View JSP files for portlet Search
WEB-INF: Web info specification; includes subfolders classes, client, lib, service, SQL, src, and tld

As you can see, JSP files such as init.jsp and css_init.jsp are located in the folder $PLUGIN_SDK_HOME/knowledge-base-portlet.

Services and models

As you may have noticed, the Knowledge Base plugin specifies services and models with the package named com.liferay.knowledgebase. You will be able to find the details at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/service.xml. Service Builder in the Plugins SDK will automatically generate services and models from service.xml, plus XML files such as portlet-hbm.xml, portlet-model-hints.xml, portlet-orm.xml, portlet-spring.xml, base-spring.xml, cluster-spring.xml, dynamic-data-source-spring.xml, hibernate-spring.xml, infrastructure-spring.xml, messaging-spring.xml, and shard-data-source-spring.xml under the folder $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/src/META-INF.
The service.xml file specifies the Knowledge Base entities: Article, Comment, and Template. The entity Article includes the columns articleId (primary key), resourcePrimKey, groupId, companyId, userId, userName, createDate, modifiedDate, parentResourcePrimKey, version, title, content, description, and priority. The entity Template includes the columns templateId (primary key), groupId, companyId, userId, userName, createDate, modifiedDate, title, content, and description. The entity Comment includes the columns commentId (primary key), groupId, companyId, userId, userName, createDate, modifiedDate, classNameId, classPK, content, and helpful. As you can see, the entity Comment can be applied to either core assets or custom assets like Article and Template by using classNameId and classPK. By the way, the custom SQL scripts are provided at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/src/custom-sql/default.xml. In addition, resource actions (that is, the permission actions specification) are provided at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/src/resource-actions/default.xml. Of course, you can use the Ant target build-wsdd to generate the WSDD server configuration file $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/server-config.wsdd, and the Ant target build-client plus namespace-mapping.properties to generate a web service client JAR file such as $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/client/knowledge-base-portlet-client.jar. In brief, based on your own custom models and services specified in service.xml, you can easily build services, WSDD, and web service clients in plugins for Liferay 6 or later.

Adding workflow instance link

First, you have to add a workflow instance link and its related columns and finder in $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/service.xml as follows.
<column name="status" type="int" />
<column name="statusByUserId" type="long" />
<column name="statusByUserName" type="String" />
<column name="statusDate" type="Date" />
<!-- ignore details -->
<finder name="R_S" return-type="Collection">
    <finder-column name="resourcePrimKey" />
    <finder-column name="status" />
</finder>
<!-- ignore details -->
<reference package-path="com.liferay.portal" entity="WorkflowInstanceLink" />

As shown in the above code, the column element represents a column in the database. Four columns (status, statusByUserId, statusByUserName, and statusDate) are required for the Knowledge Base workflow, storing the workflow-related status and user info. The finder element represents a generated finder method; the finder R_S is defined with the return type Collection and two finder columns, resourcePrimKey and status. The reference element allows you to inject services from another service.xml within the same class loader. For example, if you inject the Resource (that is, WorkflowInstanceLink) entity, then you'll be able to reference the Resource services from your service implementation via the methods getResourceLocalService and getResourceService. You'll also be able to reference the Resource services via the variables resourceLocalService and resourceService.

Then, you need to run the Ant target build-service to rebuild the services based on the newly added workflow instance link.

Adding workflow handler

Liferay 6 provides pluggable workflow implementations, where developers can register their own workflow handler implementation for any entity they build. It will appear automatically in the workflow admin portlet so users can associate the available workflows with those entities. To make it happen, we need to add the workflow handler in $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/liferay-portlet.xml of the plugin as follows.

<workflow-handler>com.liferay.knowledgebase.admin.workflow.ArticleWorkflowHandler</workflow-handler>

As shown in the above code, the workflow-handler value must be a class that extends com.liferay.portal.kernel.workflow.BaseWorkflowHandler and is called when the workflow is run. Of course, you need to specify ArticleWorkflowHandler under the package com.liferay.knowledgebase.admin.workflow. The following is an example code snippet.

public class ArticleWorkflowHandler extends BaseWorkflowHandler {
    public String getClassName() { /* return the target class name */ }
    public String getType(Locale locale) { /* return the target entity type, that is, Knowledge Base article */ }
    public Article updateStatus(int status, Map<String, Serializable> workflowContext) throws PortalException, SystemException {
        /* extract userId, resourcePrimKey, and serviceContext from workflowContext, then update the workflow status: */
        return ArticleLocalServiceUtil.updateStatus(userId, resourcePrimKey, status, serviceContext);
    }
    protected String getIconPath(ThemeDisplay themeDisplay) { /* return the icon path */ }
}

As you can see, ArticleWorkflowHandler extends BaseWorkflowHandler and overrides the methods getClassName, getType, updateStatus, and getIconPath. That's it.

Updating workflow status

As mentioned in the previous section, you added the method updateStatus in ArticleWorkflowHandler. Now you should provide the implementation of the method updateStatus in com.liferay.knowledgebase.service.impl.ArticleLocalServiceImpl.java. The following is some sample code:

public Article updateStatus(long userId, long resourcePrimKey, int status, ServiceContext serviceContext) throws PortalException, SystemException {
    /* ignore details */
    // Article
    Article article = getLatestArticle(resourcePrimKey, WorkflowConstants.STATUS_ANY);
    articlePersistence.update(article, false);
    if (status != WorkflowConstants.STATUS_APPROVED) {
        return article;
    }
    // Articles
    // Asset
    // Social
    // Indexer
    // Attachments
    // Subscriptions
}

As shown in the above code, it first gets the latest article by resourcePrimKey and WorkflowConstants.STATUS_ANY.
Then, it updates the article based on the workflow status. Moreover, it updates the articles' display order, asset tags and categories, social activities, the indexer, attachments, subscriptions, and so on. After adding the new method in com.liferay.knowledgebase.service.impl.ArticleLocalServiceImpl.java, you need to run the Ant target build-service to build the services.

Adding workflow-related UI tags

Now it is time to add workflow-related UI tags (AUI tags are used as an example) at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/admin/edit_article.jsp. First of all, add the AUI input workflowAction with the value WorkflowConstants.ACTION_SAVE_DRAFT as follows.

<aui:input name="workflowAction" type="hidden" value="<%= WorkflowConstants.ACTION_SAVE_DRAFT %>" />

As shown in the above code, the default value of the AUI input workflowAction is set to SAVE DRAFT with type hidden; that is, this AUI input is invisible to end users. Afterwards, it would be better to add workflow messages with the UI tag liferay-ui:message, like a-new-version-will-be-created-automatically-if-this-content-is-modified for WorkflowConstants.STATUS_APPROVED, and there-is-a-publication-workflow-in-process for WorkflowConstants.STATUS_PENDING.

<%
int status = BeanParamUtil.getInteger(article, request, "status", WorkflowConstants.STATUS_DRAFT);
%>

<c:choose>
    <c:when test="<%= status == WorkflowConstants.STATUS_APPROVED %>">
        <div class="portlet-msg-info">
            <liferay-ui:message key="a-new-version-will-be-created-automatically-if-this-content-is-modified" />
        </div>
    </c:when>
    <c:when test="<%= status == WorkflowConstants.STATUS_PENDING %>">
        <div class="portlet-msg-info">
            <liferay-ui:message key="there-is-a-publication-workflow-in-process" />
        </div>
    </c:when>
</c:choose>

And then add the AUI workflow status tag aui:workflow-status at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/admin/edit_article.jsp.
<c:if test="<%= article != null %>">
    <aui:workflow-status id="<%= String.valueOf(resourcePrimKey) %>" status="<%= status %>" version="<%= GetterUtil.getDouble(String.valueOf(version)) %>" />
</c:if>

As you can see, aui:workflow-status is used to represent the workflow status. Similarly, you can find other AUI tags like button, button_row, column, field_wrapper, fieldset, form, input, layout, legend, option, panel, script, and select. Finally, you should add the JavaScript that implements the function publishArticle() as follows.

function <portlet:namespace />publishArticle() {
    document.<portlet:namespace />fm.<portlet:namespace />workflowAction.value = "<%= WorkflowConstants.ACTION_PUBLISH %>";
    <portlet:namespace />updateArticle();
}

As you can see, the workflow action value is set to WorkflowConstants.ACTION_PUBLISH. You have now added workflow capabilities to Knowledge Base articles in plugins. From now on, you will be able to apply workflow to Knowledge Base articles through the Control Panel.
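To summarize the behavior implemented in this section: saving keeps an article in a draft or pending state, and only the transition to the approved status triggers the follow-up updates (assets, indexer, subscriptions). A minimal, self-contained sketch of that branching in plain Java; the numeric status codes are assumptions for this illustration, not Liferay's actual WorkflowConstants values:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the status-driven branching in updateStatus.
// The numeric status codes below are assumptions for this sketch,
// not Liferay's actual WorkflowConstants values.
public class WorkflowStatusSketch {
    static final int STATUS_APPROVED = 0;
    static final int STATUS_PENDING = 1;
    static final int STATUS_DRAFT = 2;

    int status = STATUS_DRAFT;
    final List<String> sideEffects = new ArrayList<>();

    // Mirrors the shape of ArticleLocalServiceImpl.updateStatus: the article
    // is always persisted, but the asset, indexer, and subscription updates
    // run only when the new status is "approved".
    void updateStatus(int newStatus) {
        status = newStatus;
        sideEffects.add("persist article");
        if (newStatus != STATUS_APPROVED) {
            return;
        }
        sideEffects.add("update asset");
        sideEffects.add("reindex");
        sideEffects.add("notify subscribers");
    }

    public static void main(String[] args) {
        WorkflowStatusSketch sketch = new WorkflowStatusSketch();
        sketch.updateStatus(STATUS_PENDING);
        System.out.println(sketch.sideEffects); // [persist article]
        sketch.updateStatus(STATUS_APPROVED);
        System.out.println(sketch.sideEffects);
    }
}
```

This is why the sample updateStatus implementation returns early when the status is not approved: the expensive cross-cutting updates only make sense once the content becomes publicly visible.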

Packt
20 Jan 2010
8 min read

SOA Management—OSB (aka ALSB) Management

Introducing Oracle Service Bus (OSB)

In any distributed system, components that run on a distributed platform need to communicate or exchange messages with each other. In SOA-based systems, services likewise need to interact or exchange messages with other services. Oracle Service Bus is a product that provides a platform for interaction and message exchange between services. Integration of disparate services can be a challenge. There could be a difference in messaging models—some services may support the synchronous model, whereas other services may support the asynchronous model. Some services may support the HTTP protocol, whereas other services may support the JMS protocol. Oracle Service Bus helps in solving many of these challenges by providing the following features:

Support for different protocols, such as HTTP(S), FTP, JMS, E-mail, Tuxedo, and so on
Support for different messaging models, such as the point-to-point model and the publish-subscribe model
Support for different message formats, such as SOAP, E-mail, JMS, XML, and so on
Support for different content types, such as XML, binary, and so on
Data transformation using XSLT and XQuery—data mapping from one format to another format through declarative constructs

Besides the integration support, OSB provides features that help in managing the runtime for the integration of services. Some features are:

Load balancing: Load balancing is very important when the traffic volume between services is very high. Load balancing also helps to achieve high availability.
Content-based routing: Routing to the appropriate service based on content is very valuable in a changing business environment.

OSB provides a loosely coupled bus architecture into which all services and consumers of services can be plugged, and OSB becomes a central place for defining mediation rules between services and consumers, as shown next:

At the implementation level, OSB is a set of J2EE applications that run on top of the WebLogic J2EE container.
It uses various J2EE constructs like Enterprise JavaBeans, data sources, connectors, and so on. OSB uses a database for storing the reporting data. OSB can be deployed in a single-server model or a clustered-server model. Generally, in a production environment a clustered model is used, where multiple WebLogic Servers are used for load balancing and high availability. In such setups, the WebLogic Servers are front-ended by a load balancer. The following figure depicts one such clustered deployment.

OSB constructs

There are some constructs specific to OSB—let's learn about those constructs.

Proxy service

OSB provides mediation between consumer and provider services. This mediation is loosely coupled: a consumer service makes a call to an intermediate service, and the intermediate service performs some transformation and routes the call to the producer service. This intermediate service is hosted by OSB and called a proxy service.

Business service

A business service is a representation of an actual producer service. The business service controls the call to the producer service; it knows about the producer service's endpoint. In case of failure when calling the producer service, it can retry the operation. In case there are multiple producer services, it knows about the endpoint of each producer service and provides load balancing across those endpoints.

Message flow

A message flow is a set of steps that are executed by the proxy service before it routes to the business service. It includes steps for data transformation, validation, reporting, and so on.

Supported versions

Enterprise Manager provides OSB management from release 10.2.0.5 onwards. The following is the support matrix for EM against different versions of OSB:

EM Version    OSB 2.6    OSB 2.6.1    OSB 3.0    OSB 10gR3
10.2.0.5      Yes        Yes          Yes        Yes

To manage or monitor OSB from Enterprise Manager 10.2.0.5, some patches are needed on the OSB servers.
The following table lists the patch number for each supported OSB release:

OSB version    Patch ID
2.6, 2.6.1     B8SZ
3.0            SWGQ
10gR3          7NPS

The SmartUpdate patching tool that comes with the WebLogic installation can be used to apply these patches. Before downloading or applying these patches, check the Enterprise Manager documentation for any updates on patch IDs.

Discovery of Oracle Service Bus

Oracle Service Bus gets discovered as part of WebLogic domain and managed server discovery. We know how the Enterprise Manager Agent discovers WebLogic domains and managed servers. Along with the domain, cluster, and managed server targets, OSB targets are also created and persisted in the Enterprise Manager repository. Just like WebLogic domain and server targets, OSB can also be discovered and monitored either by a remote agent or a local agent. If you want to use OSB's provisioning feature, you will need to discover/monitor OSB in the local agent mode. In the local agent mode, the agent is installed where the domain admin server is running. In the remote agent mode, the agent can be on any other host on the same network. After discovery, you will see the Oracle Service Bus target on the Middleware tab of the EM homepage. The following screenshot shows the OSB target on the Middleware tab.

Monitoring OSB and OSB services

Under the BPEL monitoring section, we learnt about the BPEL ecosystem, which includes BPEL PM, the dehydration store, the database listener, the host, and so on, and how Enterprise Manager provides support for modeling the BPEL ecosystem as a system. Enterprise Manager provides similar support for managing and monitoring the OSB ecosystem.

Monitoring OSB

For monitoring OSB, you can go to the OSB homepage, where you can see the status and availability of OSB. It shows some other details like the host and the name of the WebLogic Server domain where OSB is installed.
It also shows a coarse-grained historical view of traffic metrics—message/error/security violation rates for all the services. On the home page there is an option to create an infrastructure system and service. The steps for creating an infrastructure system and service are exactly the same as for BPEL, so we will not repeat those steps and will leave it as an exercise. The following screenshot shows one such homepage after the infrastructure system and services are created. OSB uses Java Message Service (JMS) queues for receiving, scheduling, and dispatching messages, so it's very important to monitor the JMS queues used by OSB. From the OSB homepage, click on JMS Performance, and you will see the performance of the JMS queues used by OSB. The following screenshot shows one such page. You can see the metrics for message inflow and messages pending.

Monitoring OSB services

OSB service monitoring can be divided into two parts—the proxy service that receives the requests and the business service that dispatches the requests. Proxy service metrics also include the metrics in the message flow, where the message flow processes the request.

Monitoring proxy services

The performance of proxy services represents the performance seen by consumers of OSB. Besides that, proxy services are hosted by OSB servers. Enterprise Manager collects some very useful metrics for proxy services. These metrics are:

Status of the proxy service
Throughput metrics at the proxy service and endpoint level
Performance metrics at the proxy service and endpoint level
Error metrics at the service and endpoint level
Security violations at the proxy service level

To monitor the performance of the message flow, there are some fine-grained metrics available; these metrics include throughput, error rate, and performance at each step in the message flow. Let's go through the console screens for OSB service monitoring.
Go to the OSB Services tab from the OSB homepage; on that page you will see a listing of all the services, including proxy as well as business services. You will see that every proxy and business service is part of some project. OSB provides a construct called a project, under which different services can be created; a project is just a means of categorizing services. The following screenshot shows such a page, where you can see the list of all the projects and services. For each service you see metrics related to throughput, errors, violations, and performance. Once you click on one of the services, you will see a page where the metrics and other details of the proxy service are listed. On this page, you can see the throughput and performance charts for the proxy service. You will also see a section for the EM service model that represents a proxy service. This model is similar to the model that we had for the BPEL process. This model has three entities:

Infrastructure service: This service represents all of the components in the OSB ecosystem.
Availability service: This service represents all of the availability tests for an OSB proxy service endpoint; it could include SOAP tests, web transactions, and so on.
Aggregate service: This service is just an aggregate of the previous two services. Using this service, you can monitor all of the required metrics in one place. You can see the performance of your IT infrastructure as well as the performance of partner links in one place.
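The proxy/business service mediation described in this section can be pictured with a tiny conceptual model: the proxy service runs its message flow (transformation, validation) and routes to a business service, which knows the producer endpoints and can retry or load-balance across them. The sketch below is plain Java illustrating that division of responsibility; it is not the OSB API, and all names in it are invented for the illustration.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Conceptual model of OSB mediation -- not the OSB API. A proxy service runs
// its message flow, then routes to a business service; the business service
// round-robins across producer endpoints and retries on failure.
public class OsbMediationSketch {
    interface Endpoint { String call(String message) throws Exception; }

    static class BusinessService {
        private final List<Endpoint> endpoints;
        private int next = 0;
        BusinessService(List<Endpoint> endpoints) { this.endpoints = endpoints; }

        String dispatch(String message) throws Exception {
            Exception last = null;
            // Try each endpoint once, starting from the round-robin cursor.
            for (int i = 0; i < endpoints.size(); i++) {
                Endpoint e = endpoints.get((next + i) % endpoints.size());
                try {
                    String reply = e.call(message);
                    next = (next + i + 1) % endpoints.size();
                    return reply;
                } catch (Exception ex) { last = ex; }
            }
            throw last; // all endpoints failed
        }
    }

    static class ProxyService {
        private final UnaryOperator<String> messageFlow; // transformation/validation steps
        private final BusinessService target;
        ProxyService(UnaryOperator<String> messageFlow, BusinessService target) {
            this.messageFlow = messageFlow;
            this.target = target;
        }
        String handle(String message) throws Exception {
            return target.dispatch(messageFlow.apply(message));
        }
    }

    public static void main(String[] args) throws Exception {
        Endpoint failing = m -> { throw new Exception("endpoint down"); };
        Endpoint healthy = m -> "ok:" + m;
        BusinessService bs = new BusinessService(List.of(failing, healthy));
        ProxyService proxy = new ProxyService(m -> m.toUpperCase(), bs);
        System.out.println(proxy.handle("order")); // routed past the failing endpoint
    }
}
```

The metrics Enterprise Manager collects map onto this model directly: throughput and performance at the proxy level measure handle(), while endpoint-level error metrics count the per-endpoint failures absorbed inside dispatch().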

Packt
17 Mar 2011
9 min read

Joomla! 1.6: Managing Site Users with Access Control

Joomla! 1.6 First Look: A concise guide to everything that's new in Joomla! 1.6.

What's new about the Access Control Levels system?

In Joomla! 1.5, a fixed set of user groups was available, ranging from "Public" users (anyone with access to the frontend of the site) to "Super Administrators", allowed to log in to the backend and do anything. The ACL system in Joomla! 1.6 is much more flexible:

Instead of fixed user groups with fixed sets of permissions, you can create as many groups as you want and grant the people in those groups any combination of permissions.
ACL enables you to control anything users can do on the site: log in, create, edit, delete, publish, unpublish, trash, archive, manage, or administer things.
Users are no longer limited to only one group: a user can belong to different groups at the same time. This allows you to give particular users both the set of permissions for one group and another group without having to create a third, combined set of permissions from the ground up.
Permissions no longer apply only to the whole site as they did in Joomla! 1.5. You can now set permissions for specific parts of the site: either the whole site, or specific components, categories, or items (such as a single article).

What are the default user groups and permissions?

The flexibility of the new ACL system has a downside: it can also get quite complex. The power to create as many user groups as you like, each with very fine-grained sets of permissions assigned to them, means you can easily get entangled in a web of user groups, Joomla! objects, and permissions. You should carefully plan the combinations of permissions you need to assign to different user groups. Before you change anything in Joomla!, sketch an outline or use mind-mapping tools (such as http://bubbl.us) to get an overview of what you want to accomplish through Joomla! ACL: who (which users) should be able to see or do what in which parts of the site?
In many cases, you might not need to go beyond the default setup and can just use the default user groups and permissions that are already present when you install Joomla! 1.6. So, before we go and find out how you can craft custom user groups and their distinctive sets of permissions, let's have a look at the default Joomla! 1.6 ACL setup.

The default site-wide settings

In broad terms, the default groups and permissions present in Joomla! 1.6 are much like the ACL system that was available in Joomla! 1.5. To view the default groups and their permissions, go to Site | Global Configuration | Permissions. The Permission Settings screen is displayed, showing a list of User Groups. A user group is a collection of users sharing the same permissions, such as Public, Manager, or Administrator. By default the permission settings of the Public user group are shown; clicking any of the other user group names reveals the settings for that particular group. On the right-hand side of the Permission Settings screen, the generic (site-wide) action permissions for this group are displayed: Site Login, Admin Login, and so on. Actions are the things users are allowed to do on the site. For the sample user groups, these action permissions have already been set.

Default user groups

Let's find out what these default user groups are about. We'll discuss the user groups from the most basic level (Public) to the most powerful (Super Users).

Public – the guest group

This is the most basic level; anyone visiting your site is considered part of the Public group. Members of the Public group can view the frontend of the site, but they don't have any special permissions.

Registered – the user group that can log in

Registered users are regular site visitors, except for the fact that they are allowed to log in to the frontend of the site.
After they have logged in with their account details, they can view content that may be hidden from ordinary site visitors because the Access level of that content has been set to Registered. This way, Registered users can be presented with all kinds of content ordinary (Public) users can't see. Registered users, however, can't contribute content. They're part of the user community, not the web team.

Author, Editor, Publisher – the frontend content team

Authors, Editors, and Publishers are allowed to log in to the frontend to edit or add articles. There are three types of frontend content contributors, each with their specific permission levels:

Authors can create new content for approval by a Publisher or someone higher in rank. They can edit their own articles, but can't edit existing articles created by others.
Editors can create new articles and edit existing articles. A Publisher or higher must approve their submissions.
Publishers can create, edit, and publish, unpublish, or trash articles in the frontend. They cannot delete content.

Manager, Administrator, Super User – the backend administrators

Managers, Administrators, and Super Users are allowed to log in to the backend to add and manage content and to perform administrative tasks.

Managers can do all that Publishers can, but they are also allowed to log in to the backend of the site to create, edit, or delete articles. They can also create and manage categories. They have limited access to administration functions.
Administrators can do all that Managers can and have access to more administration functions. They can manage users, edit or configure extensions, and change the site template. They can use manager screens (User Manager, Article Manager, and so on) and can create, delete, edit, and change the state of users, articles, and so on.
Super Users can do everything possible in the backend. (In Joomla! 1.5, this user group type was called Super Administrator.) When Joomla!
is installed, there's always one Super User account created. That's usually the person who builds and customizes the website. In the current example website, you're the Super User.

Shop Suppliers and Customers – two sample user groups

You'll notice two groups in the Permission Settings screen that we haven't covered yet: Shop Suppliers and Customer. These are added when you install the Joomla! 1.6 sample data. These aren't default user groups; they are used in the sample Fruit Shop site to show how you can create customized groups.

Are there also sample users available?

As there are user groups present in the sample data, you might expect there are also sample users. This is not the case. There are no (sample) users assigned to the sample user groups. There's just one user available after you've installed Joomla!—you. You can view your details by navigating to Users | User Manager. You're taken to the User Manager: Users screen. Here you can see that your name is Super User, your user name is admin (unless you've changed this yourself when setting up your account), and you're part of the user group called Super Users. There's also a shortcut available to take you to your own basic user settings: click on Site | My Profile or—even faster—just click on the Edit Profile shortcut in the Control Panel. However, you can't manage user permissions here; the purpose of the My Profile screen is only to manage basic user settings.

Action Permissions: what users can do

We've now seen what types of users are present in the default setup of Joomla! 1.6. The action permissions that you can grant these user groups—things they can do on the site—are shown per user group in the Site | Global Configuration | Permissions screen. Click on any of the user group names to see the permission settings for that group. You'll also find these permissions (such as Site Login, Create, Delete, Edit) in other places in the Joomla!
interface: after all, you don't just apply permissions on a site-wide basis (as you could in previous versions of Joomla!), but also at the level of components, categories, or individual items. To allow or deny users to do things, each of the available actions can be set to Allowed or Denied for a specific user group. If the permission for an action isn't explicitly allowed or denied, it is Not Set.

Permissions are inherited

You don't have to set each and every permission on every level manually: permissions are inherited between groups. That is, a child user group automatically gets the permissions set for its parent. Wait a minute—parents, children, inheritance ... how does that work? To understand these relationships, let's have a look at the overview of user groups in the Permission Settings screen. This shows all available user groups (I've edited this screen image a little to be able to show all the user groups in one column). You'll notice that all user group names are displayed indented, apart from Public. This indicates the permissions hierarchy: Public is the parent group, Manager (indented one position) is a child of Public, and Administrator (indented two positions) is a child of Manager. Permissions for a parent group are automatically inherited by all child groups (unless these permissions are explicitly set to Allowed or Denied to "break" the inheritance relationship). In other words, a child group can do anything a parent group can do—and more, as it is a child and therefore has its own specific permissions set. For example, as Authors are children of the Registered group, they inherit the permissions of the Registered group (that is, the permission to log in to the frontend of the site). Apart from that, Authors have their own specific permissions added to the permissions of the Registered group.
Setting an action to Denied is very powerful: you can't allow an action for a lower level in the permission hierarchy if it is set to Denied higher up in the hierarchy. So, if an action is set to Denied for a higher group, this action will be inherited all the way down the permissions "tree" and will always be denied for all lower levels—even if you explicitly set the lower level to Allowed.  
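The inheritance rules just described (Not Set inherits the value from the parent, Allowed grants the action, and an explicit Denied higher up the tree can never be overridden lower down) can be expressed as a small resolution function. The following is an illustrative model of that behavior, not Joomla!'s actual implementation:

```java
// Illustrative model of Joomla! 1.6 ACL inheritance -- not Joomla!'s code.
// A group's effective permission is resolved from the root of the group tree
// down to the group itself: Not Set inherits the value so far, Allowed grants
// the action, and an explicit Denied is final and cannot be re-allowed below.
public class AclSketch {
    enum Setting { NOT_SET, ALLOWED, DENIED }

    // chain = settings for one action, from the root group down to the target group
    static boolean isAllowed(Setting... chain) {
        boolean allowed = false; // Not Set at every level means the action is not granted
        for (Setting s : chain) {
            if (s == Setting.DENIED) return false; // Denied is inherited all the way down
            if (s == Setting.ALLOWED) allowed = true;
        }
        return allowed;
    }

    public static void main(String[] args) {
        // Registered (Allowed: Site Login) -> Author (Not Set): Author inherits the grant.
        System.out.println(isAllowed(Setting.ALLOWED, Setting.NOT_SET)); // true
        // Parent group Denied -> child group Allowed: the higher Denied still wins.
        System.out.println(isAllowed(Setting.DENIED, Setting.ALLOWED)); // false
    }
}
```

The second case is exactly the trap described above: once an action is Denied for a group, setting it to Allowed for any of that group's children has no effect.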

Making Content Findable in Drupal 6

Packt
20 Oct 2009
5 min read
What you will learn

In this article, you will learn about:

Using Taxonomy to link descriptive terms to Node Content
Tag clouds
Path aliases

What you will do

In this article, you will:

Create a Taxonomy
Enable the use of tags with Node Content
Define a custom URL
Activate site searching
Perform a search

Understanding Taxonomy

One way to find content on a site is by using a search function, but this can be considered a hit-or-miss approach. Searching for an article on 'canines' won't return an article about dogs, unless it contains the word 'canines'. Certainly, navigation provides a way to navigate the site, but unless your site has only a small amount of content, the navigation can only be general in nature. Too much navigation is annoying. There are far too many sites with two or three sets of top navigation, plus left and bottom navigation. It's just too much to take in and still feel relaxed. Site maps offer additional navigation assistance, but they're usually not fun to read, and are more like a Table of Contents, where you have to know what you're looking for. So, what's the answer? Tags!

A Tag is simply a word or a phrase that is used as a descriptive link to content. In Drupal, a collective set of terms, from which terms or tags are associated with content, is called a Vocabulary. One or more Vocabularies comprise a Taxonomy. This is a good place to begin, so let's create a Vocabulary.

Activity 1: Creating a Taxonomy Vocabulary

In this activity, we will be adding two terms to our Vocabulary. We shall also learn how to assign a Taxonomy to Node Content that has been created. We begin in the Content management area of the admin menu. There, you should find the Taxonomy option listed, as shown in the following screenshot. Click on this option.

Taxonomy isn't listed in my admin menu: The Taxonomy module is not enabled by default. Check on the Modules page (Admin | Site building | Modules) and make sure that the module is enabled.
For the most part, modules can be thought of as options that can be added to your Drupal site, although some of them are considered essential. Some modules come pre-installed with Drupal. Among them, some are automatically activated, and some need to be activated manually. There are many modules that are not included with Drupal, but are available freely from the Drupal web site.

The next page gives us a lengthy description of the use of a taxonomy. At the top of the page are two options, List and Add vocabulary. We'll choose the latter.

On the Add vocabulary page, we need to provide a Vocabulary name. We can create several vocabularies, each for a different use. For example, with this site, we could have a vocabulary for 'Music' and another for 'Meditation'. For now, we'll just create one vocabulary, and name it Tags, as suggested below, in the Vocabulary name box. In the Description box, we'll type This vocabulary contains Tag terms. In the Help text box, we'll type Enter one or more descriptive terms separated by commas.

Next is the [Node] Content types section. This lists the types of Node Content that are currently defined. Each has a checkbox alongside it. Selecting the checkbox indicates that the associated Node Content type can have Tags from this vocabulary assigned to it. Ultimately, it means that if a site visitor searches using a Tag, then this type of Node Content might be offered as a match. We will be selecting all of the checkboxes. If a new Node Content type is created that will use tags, then edit the vocabulary and select the checkbox.

The Settings section defines how we will use this vocabulary. In this case, we want to use it with tags, so we will select the Tags checkbox. The following screenshot shows the completed page. We'll then click on the Save button.

At this point, we have a vocabulary, as shown in the screenshot, but it doesn't contain anything. We need to add something to it, so that we can use it. Let's click on the add terms link.
On the Add term page, we're going to add two terms, one at a time. First we'll type healing music into the Term name box. We'll purposely make the terms lower case, as it will look better in the display that we'll be creating soon. We'll click on the Save button, and then repeat the procedure for another term named meditation.

The method we used for adding terms is acceptable when creating new terms that have not been applied to anything yet. If the term does apply to existing Node Content, then a better way to add it is by editing that content. We'll edit the Page we created, entitled Soul Reading. Now that the site has the Taxonomy module enabled, a new field named Tags appears below Title. We're going to type soul reading into it. This is an AJAX field. If we start typing a tag that already exists, then it will offer to complete the term.

AJAX (Asynchronous JavaScript + XML) is a method of using existing technologies to retrieve data in the background. What it means to a web site visitor is that data can be retrieved and presented on the page being viewed, without having to reload the page.

Now, we can Save our Node Content, and return to the vocabulary that we created earlier. Click on the List tab at the top of the page. Our terms are listed, as shown in the following screenshot.
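The relationships this activity sets up (a vocabulary holding terms, nodes tagged with those terms, and content findable by tag) can be modeled in a few lines. This is a minimal sketch of the data model only, not Drupal's actual schema or API; all names here are illustrative.

```python
# Minimal sketch (not Drupal's real schema) of the taxonomy relationships
# built in this activity: a vocabulary holds terms, nodes are tagged with
# terms, and content can then be found by tag.

vocabulary = {"name": "Tags", "terms": set()}

def add_term(vocab, term):
    vocab["terms"].add(term.lower())   # lower case, as in the activity

node_tags = {}  # node title -> set of tags

def tag_node(title, tags):
    # Tags are entered as comma-separated text, per the Help text above.
    for t in tags.split(","):
        t = t.strip().lower()
        add_term(vocabulary, t)        # tagging a node also grows the vocabulary
        node_tags.setdefault(title, set()).add(t)

def find_by_tag(tag):
    return [title for title, tags in node_tags.items() if tag in tags]

add_term(vocabulary, "healing music")
add_term(vocabulary, "meditation")
tag_node("Soul Reading", "soul reading")
print(find_by_tag("soul reading"))  # ['Soul Reading']
```

The point of the model is the indirection: search and tag-cloud displays work against the shared term list, so any node tagged with an existing term is immediately reachable through it.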


Video Blogging in WordPress

Packt
16 Apr 2010
7 min read
Introduction

Video is a major component of the Web today. Luckily, WordPress makes it easy to publish and share video. In this article, we will demonstrate ways of working with video in WordPress and in Flash. You will learn how to embed an .flv file, create an .xml video sitemap, and distribute your videos to sites such as YouTube and Blip. We will also show you how to set up Free WP Tube, a free video blogging theme that allows you to run a video web log (vlog).

FLV Embed (Version 1.2.1)

If you want to embed .flv files, use a Flash video player, and/or publish a video sitemap, this compact plugin does all three. The homepage is http://www.channel-ai.com/blog/plugins/flv-embed/. FLV Embed uses standards-compliant, valid XHTML and JavaScript. It is based on the JW FLV Media Player, whose homepage is http://www.longtailvideo.com/players/jw-flvplayer/.

FLV Embed supports Google video sitemap generation, allowing you to describe, syndicate, and distribute your video content, facilitating indexing in Google video search. If a user is missing Flash or has disabled JavaScript, he or she is provided unobtrusive and appropriate on-screen instructions to correct the problem.

Getting ready

When a page with video loads, the player displays either the first frame of the video or a thumbnail (referred to as a poster image). The poster image is preferable, especially when a user is choosing between many videos: the first frame of a video may not offer the most representative or compelling description. Your poster image can be a poster or any image you like. Here is an example of our finished product:

You will want to think about where you will upload the video files and poster images, and how you will name them. A good place might be wp-content/uploads/video. This plugin requires that you name your poster images the same as your video files. The default image type is jpg, but you can use any valid image file format. All your images must be in the same file format.
A batch resize and rename utility is a useful tool. For PC, one free option is the FastStone Image Resizer, which you can download at http://www.faststone.org/FSResizerDetail.htm.

How to do it...

In your dashboard, navigate to Plugins | Add New. Search for "FLV Embed". Click on Install, then on Activate. Visit the plugin configuration panel at Settings | FLV Embed.

In the Sitemap menu, check the first box to Enable sitemap feature and automatic custom field addition. FLV Embed will now be able to create your video sitemap by automatically adding a custom field each time you use FLV Embed to insert a video.

In the Poster menu, check the box to Display poster image for embedded FLV movies. For both of the fields, Path to poster directory and Path to FLV directory, we suggest you leave these blank, and instead use absolute URLs. If you do use relative (site-specific) URLs, keep in mind that a trailing slash is required. An example is /wp-content/uploads/videos/.

In the Player menu, you may want to change the colors or add your site logo as a linkable watermark to the video. Review all the Settings, and click on Save Changes.

To embed an FLV file, use the following shortcode in HTML view: [flv:url width height]. For example, you could insert a YouTube video at 480 by 360 (using the absolute URL) like this:

[flv:http://youtube.com/watch?v=fLV3MB3DpWN 480 360]

A YouTube video cannot use a poster image because the file name of a jpg cannot contain a question mark. You can also insert a video that you have uploaded yourself, like this:

[flv:http://www.wordpressandflash.com/wp-content/uploads/video/swfobject_test.swf 480 360]

Once you have inserted the video, FLV Embed automatically populates the FLV custom field with two URLs, as you can see below. The first is the location of the video, and the second is the location of the poster image:

To use a custom poster image, upload any image to wp-content/uploads/video, and rename it to match the filename of the video.
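The [flv:url width height] shortcode format described above is simple enough to parse with a regular expression. The sketch below is illustrative only, written in Python rather than the plugin's PHP, and the function name is my own; it shows the shape of the parsing, not FLV Embed's actual implementation.

```python
import re

# Hypothetical parser for the [flv:url width height] shortcode syntax
# described above (a sketch, not the plugin's real code).
FLV_RE = re.compile(r"\[flv:(\S+)\s+(\d+)\s+(\d+)\]")

def parse_flv_shortcode(text):
    """Return the URL and dimensions of the first [flv:...] shortcode
    found in `text`, or None if there isn't one."""
    m = FLV_RE.search(text)
    if not m:
        return None
    return {"url": m.group(1),
            "width": int(m.group(2)),
            "height": int(m.group(3))}

post = "Watch this: [flv:http://youtube.com/watch?v=fLV3MB3DpWN 480 360]"
print(parse_flv_shortcode(post))
# {'url': 'http://youtube.com/watch?v=fLV3MB3DpWN', 'width': 480, 'height': 360}
```

A parser like this is typically the first step of a shortcode handler: once the URL and dimensions are extracted, the handler emits the HTML/JavaScript that loads the Flash player with those values.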
You can also use an absolute URL if the poster image file is in another location; the filename must still match.

To configure your video XML sitemap, visit the Video Sitemap Options menu by clicking on Settings | Video Sitemap. Here, you can get or modify the feed address. Our example is http://www.wordpressandflash.com/videofeed.xml. You can also adjust additional optional settings, and if you have made any changes to the settings or content and need to rebuild the sitemap or update your custom fields, you can do that here too.

How it works...

The video sitemap is an extension of the XML sitemap. A video sitemap allows you to publish and syndicate online video content, including descriptive metadata to tag your content for Google Video search. Adding details, such as a title and description, makes it easier for users who are searching to find a given piece of content. Your poster image will also be included as a clickable thumbnail image. The user will be directed to your website to see the video.

If FLV Embed cannot automatically generate the XML file, you can simply copy the XML file from the demo and save it to your server. Make sure to set the file permissions to write (664 or 666) by context-clicking in your FTP client and modifying the File Attributes, as seen below. Then, make the appropriate changes to the Video sitemap filename field in the Video Sitemap Options menu, directing the plugin to the XML file you have prepared, and rebuild the sitemap. Here is what your finished feed will look like:

There's more...

The videofeed.xml file has a simple structure. The first three tags specify encoding, styling, and the video sitemap protocol:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="http://www.wordpressandflash.com/wp-content/plugins/flv-embed/sitemap.xsl"?>
<urlset >

Next, a <url> tag wraps each piece of content, which includes a <loc> tag (a link to the content on your site) and a <video:video> tag.
The <video:video> tag contains additional tags that specify the video location, the video player location, the poster image location, a title, and a description:

<url>
  <loc>http://www.wordpressandflash.com/flv-embed/</loc>
  <video:video>
    <video:content_loc>http://www.wordpressandflash.com/wp-content/uploads/video/swfobject_test.swf</video:content_loc>
    <video:player_loc allow_embed="No">http://www.wordpressandflash.com/wp-content/plugins/flv-embed/flvplayer.swf?file=/wp-content/uploads/video/swfobject_test.swf</video:player_loc>
    <video:thumbnail_loc>http://www.wordpressandflash.com/wp-content/uploads/videos/swfobject_test.jpg</video:thumbnail_loc>
    <video:title><![CDATA[FLV Embed]]></video:title>
    <video:description><![CDATA[FLV Embed is a great plugin that allows you to display FLV files &#8212; and most other video formats &#8212; in a compact Flash video player. Visit the homepage at: http://www.channel-ai.com/blog/plugins/flv-embed/. Check out the video sitemap! Get the latest Flash Player to see this player. [Javascript required to view Flash movie, please turn it [...]]]></video:description>
  </video:video>
</url>
</urlset>

With this info, you can manually create a .xml video feed for any site, without a plugin.

Commercial use?

Commercial use does require a license. A free alternative for commercial use is the Hana FLV Player, whose homepage is http://www.neox.net/w/2008/05/25/hana-flv-playerwordpress-plugin/.
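Since the feed is plain XML, you can generate entries like the one above with any XML library. The sketch below uses Python's standard ElementTree as an illustration; the namespace URIs are the standard sitemap and Google video-sitemap ones (an assumption on my part, as the truncated <urlset> above doesn't show them), and the function name is my own.

```python
import xml.etree.ElementTree as ET

# Sketch of building one <url> entry of a video sitemap by hand, mirroring
# the structure shown above. Namespace URIs are assumed from the standard
# sitemap and video-sitemap protocols.
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
VIDEO_NS = "http://www.google.com/schemas/sitemap-video/1.1"
ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("video", VIDEO_NS)

def video_entry(page_url, content_loc, thumbnail_loc, title, description):
    url = ET.Element(f"{{{SITEMAP_NS}}}url")
    ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = page_url
    video = ET.SubElement(url, f"{{{VIDEO_NS}}}video")
    ET.SubElement(video, f"{{{VIDEO_NS}}}content_loc").text = content_loc
    ET.SubElement(video, f"{{{VIDEO_NS}}}thumbnail_loc").text = thumbnail_loc
    ET.SubElement(video, f"{{{VIDEO_NS}}}title").text = title
    ET.SubElement(video, f"{{{VIDEO_NS}}}description").text = description
    return url

urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
urlset.append(video_entry(
    "http://www.wordpressandflash.com/flv-embed/",
    "http://www.wordpressandflash.com/wp-content/uploads/video/swfobject_test.swf",
    "http://www.wordpressandflash.com/wp-content/uploads/videos/swfobject_test.jpg",
    "FLV Embed",
    "A compact Flash video player demo."))
print(ET.tostring(urlset, encoding="unicode"))
```

Running this prints a well-formed <urlset> whose child elements use the video: prefix, which you could save as videofeed.xml if you ever needed to build the feed without the plugin.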


Deploying Your DotNetNuke Portal

Packt
21 Oct 2009
7 min read
Acquiring a Domain Name

One of the most exciting parts of starting a website is acquiring a domain name. When selecting the perfect name there are a few things that you need to keep in mind:

Keep it brief: The more letters that a user has to type in to get to your site, the more difficult it is going to be for them to remember your site. The name you select will help to brand your site. If it is catchy then people will remember it more readily.

Have alternative names in mind: As time goes on, great domain names are becoming fewer and fewer. Make sure you have a few alternatives to choose from. The first domain name you had in mind may already be taken, so having a backup plan will help when you decide to purchase a name.

Consider buying additional top-level domain names: Say you've already bought www.DanielsDoughnuts.com. You might want to purchase www.DanielsDoughnuts.net as well to protect your name.

Once you have decided on the name you want for your domain, you will need to register it. There are dozens of different sites that allow you to register your domain name as well as search to see if it is available. Some of the better-known domain-registration sites are Register.com and NetworkSolutions.com. Both of these have been around a long time and have good reputations. You can also look into some of the discount registrars like BulkRegister (http://www.BulkRegister.com) or Enom (http://www.enom.com).

After deciding on your domain name and having it registered, you will need to find a place to physically host your portal. Most registration services will also give you the ability to host your site with them, but it is best to search for a provider that fits your site's needs.

Finding a Hosting Provider

When deciding on a provider to host your portal, you will need to consider a few things:

Cost: This is of course one of the most important things to look at when looking for a provider. There are usually a few plans to select from.
The basic plan usually allows you a certain amount of disk space for a very small price but has you share the server with numerous other websites. Most providers also offer dedicated (you get the server all to yourself) and semi-dedicated (you share with a few others) plans. It is usually best to start with the basic plan and move up if the traffic on your site requires it.

Windows servers: The provider you select needs to have Windows Server 2000/2003 running IIS (Internet Information Services). Some hosts run alternatives to Microsoft, like Linux and the Apache web server.

.NET framework: The provider's servers need to have the .NET framework version 1.1 installed. Most hosts have installed the framework on their servers, but not all. Make sure this is available because DotNetNuke needs this to run.

Database availability: You will need database server availability to run DotNetNuke, and Microsoft SQL Server is the preferred back-end. It is possible to run your site off Microsoft Access or MySQL (with a purchased provider), but I would not suggest it. Access does not hold up well in a multi-user platform and will slow down considerably when your traffic increases. Also, since most module developers target MS SQL, MySQL, while able to handle multiple users, does not have the module support.

FTP access: You will need a way to post your DotNetNuke portal files to your site, and the easiest way is to use FTP. Make sure that your host provides this option.

E-mail server: A great deal of functionality associated with the DotNetNuke portal relies on being able to send out e-mails to users. Make sure that you will have the availability of an e-mail server.

Folder rights: The ASPNET or NetworkService account (depending on the server) will need to have full permissions to the root and subfolders for your DotNetNuke application to run correctly. Make sure that your host either provides you with the ability to set this or is willing to set this up for you.
We will discuss the exact steps later in this article. The good news is that you will have plenty of hosting providers to choose from and it should not break the bank. Try to find one that fits all of your needs. There are even some hosts (www.WebHost4life.com) that will install DotNetNuke for you free of charge. They host many DotNetNuke sites and are familiar with the needs of the portal.

Preparing Your Local Site

Once you have your domain name and a provider to host your portal, you will need to get your local site ready to be uploaded to your remote server. This is not difficult, but make sure you cover all of the following steps for a smooth transition.

Modify the compilation debug setting in the web.config file: You will need to modify your web.config file to match the configuration of the server to which you will be sending your files. The first item that needs to be changed is the debug configuration. This should be set to false. You should also rebuild your application in release mode before uploading. This will remove the debug tokens, perform optimizations in the code, and help the site to run faster:

<!-- set debugmode to false for running application -->
<compilation debug="false" />

Modify the data-provider information in the web.config file: You will need to change the information for connecting to the database so that it will now point to the server on your host. There are three things to look out for in this section (changes shown below). First, if you are using MS SQL, make sure SqlDataProvider is set up as the default provider. Second, change the connection string to reflect the database server address, the database name (if not DotNetNuke), as well as the user ID and password for the database that you received from your provider. Third, if you will be using an existing database to run the DotNetNuke portal, add an objectQualifier.
This will append whatever you place in the quotations to the beginning of all of the tables and procedures that are created for your database.

<data defaultProvider="SqlDataProvider">
  <providers>
    <clear/>
    <add name = "SqlDataProvider"
         type = "DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
         connectionString = "Server=MyServerIP;Database=DotNetNuke;uid=myID;pwd=myPWD;"
         providerPath = "~\Providers\DataProviders\SqlDataProvider"
         objectQualifier = "DE"
         databaseOwner = "dbo"
         upgradeConnectionString = "" />

Modify any custom changes in the web.config file: Since you set up YetAnotherForum for use on our site, we will need to make the modifications necessary to ensure that the forums connect to the hosted database. Change the <connstr> element to point to the database on the server:

<yafnet>
  <dataprovider>yaf.MsSql,yaf</dataprovider>
  <connstr>
    user id=myID;password=myPwd;data source=myServerIP;initial catalog=DotNetNuke;timeout=90
  </connstr>
  <root>/DotNetNuke/DesktopModules/YetAnotherForumDotNet/</root>
  <language>english.xml</language>
  <theme>standard.xml</theme>
  <uploaddir>/DotNetNuke/DesktopModules/yetanotherforum.net/upload/</uploaddir>
  <!--logtomail>email=;server=;user=;pass=;</logtomail-->
</yafnet>

Add your new domain name to your portal alias: Since DotNetNuke has the ability to run multiple portals, we need to tell it which domain name is associated with our current portal. To do this we need to sign on as host (not admin) and navigate to Admin | Site Settings on the main menu. If signed on as host, you will see a Portal Aliases section on the bottom of the page. Click on the Add New HTTP Alias link:
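If you deploy often, edits like the debug flag change above are easy to script. The sketch below is a generic illustration using Python's ElementTree, not a DotNetNuke tool; the sample document is a stripped-down stand-in for a real web.config, so adjust the element path for your actual file.

```python
import xml.etree.ElementTree as ET

# Sketch: flip <compilation debug="true"> to "false" before uploading,
# as the deployment checklist above requires. The sample below is a
# minimal stand-in for a real web.config.
sample = """<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
</configuration>"""

config = ET.fromstring(sample)
node = config.find(".//compilation")   # locate the compilation element
node.set("debug", "false")             # turn debug mode off for production
print(ET.tostring(config, encoding="unicode"))
```

The same find-and-set pattern works for the data-provider attributes: locate the <add name="SqlDataProvider"> element and rewrite its connection string attribute with the values from your host.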


Adding Newsletters to a Web Site Using Drupal 6

Packt
22 Oct 2009
4 min read
Creating newsletters

A newsletter is a great way of keeping customers up-to-date without them needing to visit your web site. Customers appreciate well-designed newsletters because they allow the customer to keep tabs on their favorite places without needing to check every web site on a regular basis.

Creating a newsletter

Good Eatin' Goal: Create a new newsletter on the Good Eatin' site, which will contain relevant news about the restaurant, and will be delivered quarterly to subscribers.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

Newsletters are containers for individual issues. For example, you could have a newsletter called Seasonal Dining Guide, which would have four issues per year (Summer, Fall, Winter, and Spring). A customer subscribes to the newsletter and each issue is sent to them as it becomes available.

Begin by installing and activating the Simplenews module, as shown below. At this point, we only need to enable the Simplenews module, and the Simplenews action module can be left disabled.

Next, select Content management and then Newsletters, from the Administer menu. Drupal will display an administration area divided into the following sections:

a) Sent issues
b) Drafts
c) Newsletters
d) Subscriptions

Click on the Newsletters tab and Drupal will display a page similar to the following. As you can see, a default newsletter with the name of our site has been automatically created for us. We can either edit this default newsletter or click on the Add newsletter link to create a new newsletter. Let's click the Add newsletter option to create our seasonal newsletter.

Drupal will display a standard form where we can enter the name, description, and relative importance (weight) of the newsletter. Click Save to save the newsletter. It will now appear in the list of available newsletters.
If you want to modify the Sender information for the newsletter to use an alternate name or email address to your site's default ones, you can either expand the Sender information section when adding the newsletter, or click Edit newsletter and modify the Sender information, as shown in the following screenshot:

Allowing users to sign-up for the newsletter

Good Eatin' Goal: Demonstrate how registered and unregistered users can sign-up for a newsletter, and configure the registration process.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

To allow customers to sign-up for the newsletter, we will begin by adding a block to the page. Open the Block Manager by selecting Site building and then Blocks, from the Administer menu. Add the block for the newsletter that you want to allow customers to subscribe to, as shown in the following screenshot:

We will now need to give users permission to subscribe to newsletters by selecting User management and then Permissions, from the Administer menu. We will give all users permission to subscribe to newsletters and to view newsletter links, as shown below:

If the customer does not have permission to subscribe to newsletters, then the block will appear as shown in the following screenshot. However, if the customer has permission to subscribe to newsletters, and is logged in to the site, the block will appear as shown in the following screenshot. If the customer has permission to subscribe, but is not logged in, the block will appear as follows:

To subscribe to the newsletter, the customer will simply click on the Subscribe button. Once they have subscribed, the Subscribe button will change to Unsubscribe so that the user can easily opt out of the newsletter. If the user does not have an active account with the site, they will need to confirm that they want to subscribe to the site.

Magento: Designs and Themes

Packt
19 May 2012
13 min read
The Magento theme structure

The same holds true for themes. You can specify the look and feel of your stores at the Global, Website, or Store levels (themes can be applied for individual store views relating to a store) by assigning a specific theme. In Magento, a group of related themes is referred to as a design package. Design packages contain files that control various functional elements that are common among the themes within the package. By default, Magento Community installs two design packages:

Base package: A special package that contains all the default elements for a Magento installation (we will discuss this in more detail in a moment)

Default package: This contains the layout elements of the default store (look and feel)

Themes within a design package contain the various elements that determine the look and feel of the site: layout files, templates, CSS, images, and JavaScript. Each design package must have at least one default theme, but can contain other theme variants. You can include any number of theme variants within a design package and use them, for example, for seasonal purposes (that is, holidays, back-to-school, and so on). The following image shows the relationship between design packages and themes:

A design package and theme can be specified at the Global, Website, or Store levels. Most Magento users will use the same design package for a website and all descendant stores. Usually, related stores within a website business share very similar functional elements, as well as similar style features. This is not mandatory; you are free to specify a completely different design package and theme for each store view within your website hierarchy.

The Theme structure

Magento divides themes into two groups of files: templating and skin. Templating files contain the HTML, PHTML, and PHP code that determines the functional aspects of the pages in your Magento website.
Skin files are made up of the CSS, image, and JavaScript files that give your site its outward design. Ingeniously, Magento further separates these areas by putting them into different directories of your installation:

Templating files are stored in the app/design directory, where the extra security of this section protects the functional parts of your site design.

Skin files are stored within the skin directory (at the root level of the installation), and can be granted a higher permission level, as these are the files that are delivered to a visitor's browser for rendering the page.

Templating hierarchy

Frontend theme template files (the files used to produce your store's pages) are stored within three subdirectories:

layout: This contains the XML files that define the various areas of a page. These files also contain meta and encoding information.

template: This stores the PHTML files (HTML files that contain PHP code, processed by the PHP server engine) used for constructing the visual structure of the page.

locale: Files added within this directory provide additional language translations for site elements, such as labels and messages.

Magento has a distinct path for storing the templating files used for your website: app/design/frontend/[Design Package]/[Theme]/.

Skin hierarchy

The skin files for a given design package and theme are subdivided into the following:

css: This stores the CSS stylesheets and, in some cases, related image files that are called by CSS files (this is not an acceptable convention, but I have seen some designers do this).

images: This contains the JPG, PNG, and GIF files used in the display of your site.

js: This contains the JavaScript files that are specific to a theme (JavaScript files used for core functionality are kept in the js directory at the root level).

The path for the frontend skin files is: skin/frontend/[Design Package]/[Theme]/.
The concept of theme fallback

A very important and brilliant aspect of Magento is what is called the Magento theme fallback model. Basically, this concept means that when building a page, Magento first looks to the assigned theme for a store. If the theme is missing any necessary templating or skin files, Magento then looks to the required default theme within the assigned design package. If the file is not found there, Magento finally looks into the default theme of the Base design package. For this reason, the Base design package is never to be altered or removed; it is the failsafe for your site. The following flowchart outlines the process by which Magento finds the necessary files for fulfilling a page rendering request.

This model also gives designers some tremendous assistance. When a new theme is created, it only has to contain those elements that are different from what is provided by the Base package. For example, if all parts of a desired site design are similar to the Base theme, except for the graphic appearance of the site, a new theme can be created simply by adding new CSS and image files to the new theme (stored within the skin directory). Any new CSS files will need to be included in the local.xml file for your theme (we will discuss the local.xml file later in this article). If the design requires different layout structures, only the changed layout and template files need to be created; everything that remains the same need not be duplicated.

While previous versions of Magento were built with fallback mechanisms, only in the current versions has this become a true and complete fallback. In earlier versions, the fallback was to the default theme within a package, not to the Base design package. Therefore, each default theme within a package had to contain all the files of the Base package.
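The fallback lookup described above is, at heart, a "first path that exists wins" search over three candidate locations. The sketch below models that search in Python for illustration only; it is not Magento's actual code, and the directory names ("base"/"default" for the Base package's default theme) are assumptions based on the hierarchy described in this article.

```python
import os
import tempfile

def resolve_template(root, package, theme, rel_path):
    """Return the first existing copy of rel_path, walking the fallback
    chain: assigned theme -> package default theme -> base/default.
    Illustrative sketch only, not Magento's real resolver."""
    candidates = [
        (package, theme),        # 1. the theme assigned to the store
        (package, "default"),    # 2. the package's default theme
        ("base", "default"),     # 3. the Base design package (failsafe)
    ]
    for pkg, thm in candidates:
        path = os.path.join(root, "app", "design", "frontend", pkg, thm, rel_path)
        if os.path.exists(path):
            return path
    raise FileNotFoundError(rel_path)

# Demo: only the Base copy of a template exists, so a lookup against a
# custom package/theme falls all the way back to base/default.
root = tempfile.mkdtemp()
base_copy = os.path.join(root, "app", "design", "frontend", "base", "default",
                         "template", "page", "1column.phtml")
os.makedirs(os.path.dirname(base_copy))
open(base_copy, "w").close()

found = resolve_template(root, "acme", "holiday",
                         os.path.join("template", "page", "1column.phtml"))
print(found == base_copy)  # True
```

This is also why a new theme only needs to contain the files it changes: anything it omits is simply resolved one level further down the chain, ending at the Base package.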
If Magento base files were updated in subsequent software versions, these changes had to be redistributed manually to each additional design package within a Magento installation. With Magento CE 1.4 and above, upgrades to the Base package automatically enhance all design packages. If you are careful not to alter the Base design package, then future upgrades to the core functionality of Magento will not break your installation. You will have access to the new improvements based on your custom design package or theme, making your installation virtually upgrade proof. For the same reason, never install a custom theme inside the Base design package.

Default installation design packages and themes

In a new, clean Magento Community installation, you are provided with the following design packages and themes:

Depending on your needs, you could add custom design packages, or custom themes within the default design package:

If you're going to install a group of related themes, you should probably create a new design package, containing a default theme as your fallback theme.

On the other hand, if you're using only one or two themes based on the features of the default design package, you can install the themes within the default design package hierarchy.

I like to make sure that whatever I customize can be undone, if necessary. It's difficult for me to make changes to the core, installed files; I prefer to work on duplicate copies, preserving the originals in case I need to revert back. After re-installing Magento for the umpteenth time because I had altered too many core files, I learned the hard way!

As Magento Community installs a basic variety of good theme variants from which to start, the first thing you should do before adding or altering theme components is to duplicate the default design package files, renaming the duplicate to an appropriate name, such as a description of your installation (for example, Acme or Sports).
Any changes you make within this new design package will not alter the originally installed components, thereby allowing you to revert any or all of your themes to the originals. Your new theme hierarchy might now look like this:

When creating new packages, you also need to create new folders in the /skin directory to match your directory hierarchy in the /app/design directory. Likewise, if you decide to use one of the installed default themes as the basis for designing a new custom theme, duplicate and rename the theme to preserve the original as your fallback.

The new Blank theme

A fairly recent addition to the default installed themes is Blank. If your customization of your Magento stores is primarily one of colors and graphics, this is not a bad theme to use as a starting point. As the name implies, it has a pretty stark layout, as shown in the following screenshot. However, it does give you all the basic structures and components. Using images and CSS styles, you can go a long way toward creating a good-looking, functional website, as shown in the next screenshot for www.aviationlogs.com:

When duplicating any design package or theme, don't forget that each of them is defined by directories under /app/design/frontend/ and /skin/frontend/

Installing third-party themes

Most beginning Magento users will explore the hundreds of available Magento themes created by third-party designers. There are many free ones available, but most are sold by dedicated designers.

Shopping for themes

One of the great good/bad aspects of Magento is the third-party themes. The architecture of the Magento theme model gives knowledgeable theme designers tremendous ability to construct themes that are virtually upgrade proof, while possessing powerful enhancements. Unfortunately, not all designers have either upgraded older themes properly or created new themes fully honoring the fallback model.
If the older fallback model is still used for current Magento versions, upgrades to the Base package could adversely affect your theme. Therefore, as you review third-party themes, take time to investigate how the designer constructs their themes. Most provide some type of site demo. As you learn more about using themes, you'll find it easier to analyze third-party themes.

Apart from a few free themes offered through the Magento website, most themes require that you install the necessary files manually, by FTP or SFTP to your server. Every third-party theme I have ever used has included some instructions on how to install the files to your server. However, allow me to offer the following helpful guidelines:

- When using FTP/SFTP to upload theme files, use the merge function so that only additional files are added to each directory, instead of replacing entire directories. If you're not sure whether your FTP client provides merge capabilities, or not sure how to configure for merge, you will need to open each directory in the theme and upload the individual files to the corresponding directories on your server.
- If you have set your CSS and JavaScript files to merge, under System | Configuration | Developer, you should turn merging off while installing and modifying your theme.
- After uploading themes or any component files (for example, templates, CSS, or images), clear the Magento caches under System | Cache Management in your backend.
- Disable your Magento cache while you install and configure themes. While not critical, it will allow you to see changes immediately instead of having to constantly clear the Magento cache. You can disable the cache under System | Cache Management in the backend.
- If you wish to make any changes to a theme's individual file, make a duplicate of the original file before making your changes. That way, if something goes awry, you can always re-install the duplicated original.
If you have followed the earlier advice to duplicate the Default design package before customizing, instructions to install files within /app/design/frontend/default/ and /skin/frontend/default/ should be interpreted as /app/design/frontend/[your design package name]/ and /skin/frontend/[your design package name]/, respectively. As most new Magento users don't duplicate the Default design package, it's common for theme designers to instruct users to install new themes and files within the Default design package. (We know better, now, don't we?)

Creating variants

Let's assume that we have created a new design package called outdoor_package. Within this design package, we duplicate the Blank theme and call it outdoor_theme. Our new design package file hierarchy, in both /app/design/ and /skin/frontend/, might resemble the following:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_theme/

However, let's also take one more customization step here. Since Magento separates the template structure from the skin structure (the layout from the design, so to speak), we could create variations of a theme that are simply controlled by CSS and images, by creating more than one skin. We might want to have our English language store in a blue color scheme, but our French language store in a green color scheme. We could take the outdoor_theme skin directory and duplicate it, renaming the copies for the new colors:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_blue/
      outdoor_green/

Before we continue, let's go over something which is especially relevant to what we just created. For our outdoor theme, we created two skin variants: blue and green.
However, what if the difference between the two is only one or two files? If we make changes to other files that would affect both color schemes, but which are otherwise the same for both, keeping both color variations in sync would mean extra work, right? Remember, with the Magento fallback method, if your site calls on a file, it first looks into the assigned theme, then the default theme within the same design package, and, finally, within the Base design package. Therefore, in this example, you could use the default skin, under /skin/frontend/outdoor_package/default/, to contain all files common to both blue and green. Only include those files that will forever remain different to each of them within their respective skin directories.

Assigning themes

As mentioned earlier, you can assign design packages and themes at any level of the GWS hierarchy. As with any configuration, the choice depends on the level at which you wish to assign control. Global configurations affect the entire Magento installation. Website level choices set the default for all subordinate store views, which can also have their own theme specifics, if desired.

Let's walk through the process of assigning a custom design package and themes. For the sake of this exercise, let's continue with our Outdoor theme, as described earlier. Refer to the following screenshot:

We're now going to assign our Outdoor theme to an Outdoor website and its store views. Our first task is to assign the design package and theme to the website as the default for all subordinate store views:

1. Go to System | Configuration | General | Design in your Magento backend.
2. In the Current Configuration Scope drop-down menu, choose Outdoor Products.
3. As shown in the following screenshot, enter the name of your design package, template, layout, and skin. You will have to uncheck the boxes labeled Use Default beside each field you wish to use.
4. Click on the Save Config button.
The reason you enter default in the fields, as shown in the previous screenshot, is to provide the fallback protection I described earlier. Magento needs to know where to look for any files that may be missing from your theme files.
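That lookup order can be illustrated with a small shell sketch. This is not Magento code; the package name (acme), theme name (mytheme), and file names are all hypothetical, and serve purely to show the three-step search:

```shell
# Illustrative sketch of Magento's three-step theme fallback.
# Package "acme" and theme "mytheme" are hypothetical names.
resolve() {
  for dir in app/design/frontend/acme/mytheme \
             app/design/frontend/acme/default \
             app/design/frontend/base/default; do
    # First match wins: assigned theme, then package default, then Base.
    [ -f "$dir/$1" ] && { echo "$dir/$1"; return 0; }
  done
  echo "not found" >&2; return 1
}

# Demo tree: the custom theme overrides only header.phtml;
# footer.phtml exists only in the Base package.
mkdir -p app/design/frontend/acme/mytheme/template \
         app/design/frontend/base/default/template
touch app/design/frontend/acme/mytheme/template/header.phtml \
      app/design/frontend/base/default/template/footer.phtml

resolve template/header.phtml   # served from the custom theme
resolve template/footer.phtml   # falls back to base/default
```

The sketch makes the practical point concrete: deleting a file from your custom theme doesn't break the page, it simply re-exposes the stock version from further down the chain.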
Packt
22 Feb 2012
8 min read

Customizing Look and Feel of UAG

Honey, I wouldn't change a thing!

We'll save the flattery for our spouses, and start by examining some key areas of interest: what you might want, and are able, to change in a UAG implementation. Typically, the end user interface is comprised of the following:

- The Endpoint Components Installation page
- The Endpoint Detection page
- The Login page
- The Portal Frame
- The Portal page
- The Credentials Management page
- The Error pages

There is also a Web Monitor, but it is typically only used by the administrator, so we won't delve into that. The UAG management console itself and the SSL-VPN/SSTP client-component user interface are also visual, but they are compiled code, so there's not much that can be done there.

The elements of these pages that you might want to adjust are the graphics, layout, and text strings. Altering a piece of HTML or editing a GIF in Photoshop to make it look different may sound trivial, but there's actually more to it than that, and the supportability of your changes should definitely be questioned on every count. You wouldn't want your changes to disappear upon the next update to UAG, would you? Nor would you like the page to suddenly become all crooked because someone decided that he wants the RDP icon to have an animation from the Smurfs.

The UI pages

Anyone familiar with UAG will know of its folder structure and the many files that make up the code and logic that is applied throughout. For those less acquainted, however, we'll start with the two most important folders you need to know: InternalSite and PortalHomePage. InternalSite contains pages that are displayed to the user as part of the login and logout process, as well as various error pages. PortalHomePage contains the files that are a part of the portal itself, shown to the user after logging in. The portal layout comes in three different flavors, depending on the client that is accessing it.
The most common one is the Regular portal, which happens to be the most polished of the three, shown to all computers. The second is the Premium portal, which is a scaled-down version designed for phones that have advanced graphic capabilities, such as Windows Mobile phones. The third is the Limited portal, which is a text-based version of the portal, shown to phones that have limited or no graphic capabilities, such as the Nokia S60 and N95 handsets.

Regardless of the type, the majority of devices connecting to UAG will present a user-agent string in their request, and it is this string that determines the type of layout that UAG will use to render its pages and content. UAG takes advantage of this by allowing the administrator to choose between the various formats that are made available, on a per application basis. The results are pretty cool, and being able to cater for most known platforms and form factors provides users with the best possible experience. The following screenshot illustrates an application that is enabled for the Premium portal, and how the portal and login pages would look on both a premium device and on a limited device:

Customizing the login and admin pages

The login and admin pages themselves are simple ASP pages, which contain a lot of code, as well as some text and visual elements. The main files in InternalSite that may be of interest to you are the following:

- Login.asp
- LogoffMsg.asp
- InstallAndDetect.asp
- Validate.asp
- PostValidate.asp
- InternalError.asp

In addition, UAG keeps another version of some of the preceding files for ADFS, OTP, and OWA under similarly named folders. This means that if you have enabled the OWA theme on your portal, and you wish to customize it, you should work with the files under the /InternalSite/OWA folder.
Of course, there are many other files that partake in the flow of each process, but in fact there is little need to touch either the above files or the others, as most of the appearance is controlled by a CSS template and text strings stored elsewhere. Certain requirements may even involve making significant changes to the layout of the pages, leaving you with no other option but to edit the core ASP files themselves, but be careful, as this introduces risk and is not technically supported. It's likely that these pages will change with future updates to UAG, and that may cause a conflict with the older code that is in your files. The result of mixing old and new code is unpredictable, to say the least.

The general appearance of the various admin pages is controlled by the file /InternalSite/CSS/template.css. This file contains about 80 different style elements, including some of the 50 or so images displayed in the portal pages, such as the gradient background, the footer, and command buttons, to name a few. The images themselves are stored in /InternalSite/Images. Both these folders have an OWA folder, which contains the CSS and images for the OWA theme.

When editing the CSS, most of the style names will make sense, but if you are not sure, why not copy the relevant ASP file and the CSS to your computer, so you can take a closer look with a visual editor to better understand the structure? If you are doing this, be careful not to make any changes that may alter the code in a damaging way, as this is easily done and can waste a lot of valuable time.

A very useful piece of advice for checking tweaked code is to consider the use of Internet Explorer's integrated developer tools. In case you haven't noticed, it's a simple press of F12 on the keyboard and you'll find everything you need to get debugging.
IE 9 and higher versions even pack a nifty trace module that allows you to perform low-level inspection of client-server interaction, without the need for additional third-party tools.

We don't intend to devote this book to CSS, but one useful CSS declaration to be familiar with is display: none;, which can be used to hide any element it's applied to. For example, if you add this to the .button element, it will hide the Login button completely.

A common task is altering the part of the page where you see the Application and Network Access Portal text displayed. The text string itself can be edited using the master language files, which we will discuss shortly. The background of that part of the page, however, is built with the files headertopl.gif, headertopm.gif, and headertopr.gif. The original page design is classic HTML: it places headertopl on the left, headertopr on the right, and repeats headertopm in between to fill the space. If you need to change it, you could simply design a similar layout and put the replacement image files in /InternalSite/Images/CustomUpdate. Alternatively, you might choose to customize the logo only, by copying the /InternalSite/Samples/logo.inc file into the /InternalSite/Inc/CustomUpdate folder, as this is where the HTML code that pertains to that area is located.

Another thing that's worth noting is that if you create a custom CSS file, it takes effect immediately, and there's no need to do an activation, at least for the purposes of testing. The same applies to image file changes, but as a general rule you should always remember to activate when finished, as any new configurations or files will need to be pushed into the TMG storage. Arrays are no exception to this rule, and you should know that custom files are only propagated to array members during an activation, so in this scenario, you do need to activate after each change.
During development, you may copy the custom files to each member node manually to save time between activations, or better still, simply stop NLB on all array members so that all client traffic is directed to the one you are working on.

An equally important point is that when you test changes to the code, the browser's cache or IIS itself may still retain files from the previous test or config, so if changes you've made do not appear the first time around, start by clearing your browser's cache and even resetting IIS, before assuming you messed up the code.

Customizing the portal

As we said earlier, the pages that make up a portal and its various flavors are under the PortalHomePage folder. These are all ASP.NET files (.ASPX), and the scope for making any alterations here is very limited. However, the appearance is mostly controlled via the file /InternalSite/PortalHomePage/Standard.Master, which contains many visual parameters that you can change. For example, the DIV with ID content has a section pertaining to the side bar application list. You might customize the midTopSideBarCell width setting to make the bar wider or thinner. You can even hide it completely by adding style="display: none;" to the contentLeftSideBarCell table cell. As always, make sure you copy the master file to CustomUpdate, and do not touch the original file; as with the CSS files, any changes you make take effect immediately.

Additional things that you can do with the portal are removing or adding buttons to the portal toolbar. For example, you might add a button to point to a help page that describes your applications, or a procedure to contact your internal technical support in case of a problem with the site.
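Returning to the logo customization mentioned earlier: the whole change boils down to copying the sample include into the CustomUpdate folder and then editing only the copy. The sketch below uses relative stand-in paths and a fabricated sample file purely for illustration; on a real UAG server these folders sit under the UAG installation directory.

```shell
# Sketch of the CustomUpdate workflow for the portal logo.
# Relative stand-in paths; on a real server these folders live
# under the UAG installation directory.
mkdir -p InternalSite/Samples InternalSite/Inc/CustomUpdate
echo '<!-- sample logo markup -->' > InternalSite/Samples/logo.inc  # stand-in for the shipped sample

# Copy the sample into CustomUpdate and edit the copy, never the original.
cp InternalSite/Samples/logo.inc InternalSite/Inc/CustomUpdate/logo.inc
ls InternalSite/Inc/CustomUpdate
```

The same copy-into-CustomUpdate pattern keeps your change out of the shipped files, which is what makes it survive product updates.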
Packt
29 Aug 2013
18 min read

Customization

Now that you've got a working multisite installation, we can start to add some customizations. Customizations can come in a few different forms. You're probably aware of the customizations that can be made via WordPress plugins and custom WordPress themes. Another way we can customize a multisite installation is by creating a landing page that displays information about each blog in the multisite network, as well as information about the author of each individual blog.

I wrote a blog post shortly after WordPress 3.0 came out detailing how to set this landing page up. At the time, I was working for a local newspaper and we were setting up a blog network for some of our reporters to blog about politics (being in Iowa, politics are a pretty big deal here, especially around Caucus time). You can find the post at http://www.longren.org/how-to-wordpress-3-0-multi-site-blog-directory/ if you'd like to read it. There's also a blog-directory.zip file attached to the post that you can download and use as a starting point.

Before we get into creating the landing page, let's get the really simple stuff out of the way and briefly go over how themes and plugins are managed in WordPress multisite installations. We'll start with themes. Themes can be activated network-wide, which is really nice if you have a theme that you want every site in your blog network to use. You can also activate a theme for an individual blog instead of activating it for the entire network. This is helpful if one or two individual blogs need to have a totally unique theme that you don't want to be available to the other blogs.

Theme management

You can install themes on a multisite installation the same way you would with a regular WordPress install. Just upload the theme folder to your wp-content/themes folder to install the theme.
Installing a theme is only part of the process for individual blogs to use the themes; you'll need to activate them for the entire blog network or for specific blogs.

To activate a theme for an entire network, click on Themes and then click on Installed Themes in the Network Admin dashboard. Check the themes that you want to enable, select Network Enable in the Bulk Actions drop-down menu, and then click on the Apply button. That's all there is to activating a theme (or multiple themes) for an entire multisite network. The individual blog owners can apply the theme just as you would in a regular, nonmultisite WordPress installation.

To activate a theme for just one specific blog and not the entire network, locate the target blog using the Sites menu option in the Network Admin dashboard. After you've found it, put your mouse cursor over the blog URL or domain. You should see the action menu appear immediately under the blog URL or domain. The action menu includes options such as Edit, Dashboard, and Deactivate. Click on the Edit action menu item and then navigate to the Themes tab. To activate an individual theme, just click on Enable below the theme that you want to activate. Or, if you want to activate multiple themes for the blog, check all the themes you want through the checkboxes on the left-hand side of each theme in the list, select Enable in the Bulk Actions drop-down menu, and then click on the Apply button. An important thing to keep in mind is that themes that have been activated for the entire network won't be shown here. Now the blog administrator can apply the theme to their blog just as they normally would.

Plugin management

To install a plugin for network use, upload the plugin folder to wp-content/plugins/ as you normally would. Unlike themes, plugins cannot be activated on a per-site basis. As network administrator, you can add a plugin to the Plugins page for all sites, but you can't make a plugin available to one specific site.
It's all or nothing. You'll also want to make sure that you've enabled the Plugins page for the sites that need it. You can enable the Plugins page by visiting the Network Admin dashboard and then navigating to the Network Settings page. At the bottom of that page you should see a Menu Settings section where you can check a box next to Plugins to enable the plugins page. Make sure to click on the Save Changes button at the bottom or nothing will change. You can see the Menu Settings section in the following screenshot. That's where you'll want to enable the Plugins page.

Enabling the Plugins page

After you've ensured that the Plugins page is enabled, specific site administrators will be able to enable or disable plugins as they normally would. To enable a plugin for the entire network go to the Network Admin dashboard, mouse over the Plugins menu item, and then click on Installed Plugins. This will look pretty familiar to you; it looks pretty much like the Installed Plugins page does on a typical WordPress single-site installation. The following screenshot shows the installed Plugins page:

Enable plugins for the entire network

You'll notice below each plugin there's some text that reads Network Activate. I bet you can guess what clicking that will do. Yes, clicking on the Network Activate link will activate that plugin for the entire network. That's all there is to the basic plugin setup in WordPress multisite.

There's another plugin feature that is often overlooked in WordPress multisite, and that's must-use plugins. These are plugins that are required for every blog or site on the network. Must-use plugins can be installed in the wp-content/mu-plugins/ folder but they must be single-file plugins. The files within folders won't be read. You can't deactivate or activate the must-use plugins. If they exist in the mu-plugins folder, they're used.
They're entirely hidden from the Plugin pages, so individual site administrators won't even see them or know they're there. I don't think must-use plugins are a commonly used thing, but it's nice information to have just in case. Some plugins, especially domain mapping plugins, need to be installed in mu-plugins and need to be activated before the normal plugins.

Third-party plugins and plugins for plugin management

We should also discuss some of the plugins that are available for making the management of plugins and themes on WordPress multisite installations a bit easier. One of the most popular plugins is called Multisite Plugin Manager, and is developed by Aaron Edwards of UglyRobot.com. The Multisite Plugin Manager plugin was previously known as WPMU Plugin Manager. The plugin can be obtained from the WordPress Plugin Directory at http://wordpress.org/plugins/multisite-plugin-manager/. Here's a quick rundown of some of the plugin features:

- Select which plugins specific sites have access to
- Set certain plugins to autoactivate themselves for new blogs or sites
- Activate/deactivate a plugin on all network sites
- Assign some special plugin access permissions to specific network sites

Another plugin that you may find useful is called WordPress MU Domain Mapping. It allows you to easily map any blog or site to an external domain. You can find this plugin in the WordPress Plugin Directory at http://wordpress.org/plugins/wordpress-mu-domain-mapping/.

There's one other plugin I want to mention; the only drawback is that it's not a free plugin. It's called WP Multisite Replicator, and you can probably guess what it does. This plugin will allow you to set up a "template" blog or site and then replicate that site when adding new sites or blogs. The idea is that you'd create a blog or site that has all the features that other sites in your network will need. Then, you can easily replicate that site when creating a new site or blog.
It will copy widgets, themes, and plugin settings to the new site or blog, which makes deploying new, identical sites extremely easy. It's not an expensive plugin, costing about $36 at the time of writing, which is well worth it in my opinion if you're going to be creating lots of sites that have the same basic feature set. WP Multisite Replicator can be found at http://wpebooks.com/replicator/.

Creating a blog directory / landing page

Now that we've got the basic theme and plugin stuff taken care of, I think it's time to move on to creating a blog directory or a landing page, whichever you prefer to call it. From this point on I'll be referring to it as a blog directory. You can see a basic version of what we're going to make in the following screenshot. The users on my example multisite installation, at http://multisite.longren.org/, are Kayla and Sydney, my wife and daughter.

Blog directory example

As I mentioned earlier in this article, I wrote a post about creating this blog directory back when WordPress 3.0 was first released in 2010. I'll be using that post as the basis for most of what we'll do to create the blog directory, with some things changed around so this will integrate more nicely into whatever theme you're using on the main network site.

The first thing we need to do is to create a basic WordPress page template that we can apply to a newly created WordPress page. This template will contain the HTML structure for the blog directory and will dictate where the blog names will be shown and where the recent posts and blog description will be displayed. There's no reason that you need to stick with the following blog directory template specifically. You can take the code and add or remove various elements, such as the recent posts if you don't want to show them.

You'll want to implement this blog directory template as a child theme in WordPress. To do that, just make a new folder in wp-content/themes/.
I typically name my child theme folders after their parent themes, so the child theme folder I made was wp-content/themes/twentythirteen-tyler/. Once you've got the child theme folder created, make a new file called style.css and make sure it has the following code at the top:

/*
Theme Name: Twenty Thirteen Child Theme
Theme URI: http://yourdomain.com
Description: Child theme for the Twenty Thirteen theme
Author: Your name here
Author URI: http://example.com/about/
Template: twentythirteen
Version: 0.1.0
*/

/* ================ */
/* = The 1Kb Grid = */ /* 12 columns, 60 pixels each, with 20 pixel gutter */
/* ================ */

.grid_1 { width: 60px; }
.grid_2 { width: 140px; }
.grid_3 { width: 220px; }
.grid_4 { width: 300px; }
.grid_5 { width: 380px; }
.grid_6 { width: 460px; }
.grid_7 { width: 540px; }
.grid_8 { width: 620px; }
.grid_9 { width: 700px; }
.grid_10 { width: 780px; }
.grid_11 { width: 860px; }
.grid_12 { width: 940px; }

.column {
    margin: 0 10px;
    overflow: hidden;
    float: left;
    display: inline;
}
.row {
    width: 960px;
    margin: 0 auto;
    overflow: hidden;
}
.row .row {
    margin: 0 -10px;
    width: auto;
    display: inline-block;
}
.author_bio {
    border: 1px solid #e7e7e7;
    margin-top: 10px;
    padding-top: 10px;
    background: #ffffff url('images/sign.png') no-repeat right bottom;
    z-index: -99999;
}
small { font-size: 12px; }
.post_count {
    text-align: center;
    font-size: 10px;
    font-weight: bold;
    line-height: 15px;
    text-transform: uppercase;
    float: right;
    margin-top: -65px;
    margin-right: 20px;
}
.post_count a {
    color: #000;
}
#content a {
    text-decoration: none;
    -webkit-transition: text-shadow .1s linear;
    outline: none;
}
#content a:hover {
    color: #2DADDA;
    text-shadow: 0 0 6px #278EB3;
}

The preceding code adds the styling to your child theme, and also tells WordPress the name of your child theme. You can set a custom theme name if you want by changing the Theme Name line to whatever you like. The only fields in that big comment block that are required are the Theme Name and Template.
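The child-theme setup described above can also be scaffolded from the shell. The sketch below creates the folder and a stub style.css with only the required header fields (the folder and theme names are the ones used in this example; you would then paste in the full stylesheet):

```shell
# Scaffold the child theme folder and a minimal style.css header.
# Only Theme Name and Template are required; Template must match the
# parent theme's folder name (here, twentythirteen).
mkdir -p wp-content/themes/twentythirteen-tyler
cat > wp-content/themes/twentythirteen-tyler/style.css <<'EOF'
/*
Theme Name: Twenty Thirteen Child Theme
Template: twentythirteen
Version: 0.1.0
*/
EOF
grep 'Template:' wp-content/themes/twentythirteen-tyler/style.css
```

With just this stub in place, WordPress will already list the child theme on the Themes page; everything not defined in the child falls back to the parent.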
Template should be set to whatever the parent theme's folder name is. Now create another file in your child theme folder and name it blog-directory.php. The remaining blocks of code need to go into that blog-directory.php file:

<?php
/**
* Template Name: Blog Directory
*
* A custom page template with a sidebar.
* Selectable from a dropdown menu on the add/edit page screen.
*
* @package WordPress
* @subpackage Twenty Thirteen
*/
?>
<?php get_header(); ?>

<div id="container" class="onecolumn">
<div id="content" role="main">

<?php the_post(); ?>

<div id="post-<?php the_ID(); ?>" <?php post_class(); ?>>
<?php if ( is_front_page() ) { ?>
<h2 class="entry-title"><?php the_title(); ?></h2>
<?php } else { ?>
<h1 class="entry-title"><?php the_title(); ?></h1>
<?php } ?>

<div class="entry-content">
<!-- start blog directory -->
<?php
// Get the authors from the database ordered randomly
global $wpdb;
$query = "SELECT ID, user_nicename from $wpdb->users WHERE ID != '1' ORDER BY 1 LIMIT 50";
$author_ids = $wpdb->get_results($query);

// Loop through each author
foreach($author_ids as $author) {
    // Get user data
    $curauth = get_userdata($author->ID);

    // Get link to author page
    $user_link = get_author_posts_url($curauth->ID);

    // Get blog details for the author's primary blog ID
    $blog_details = get_blog_details($curauth->primary_blog);

    $postText = "posts";
    if ($blog_details->post_count == "1") {
        $postText = "post";
    }

    $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($blog_details->last_updated));

    if ($blog_details->post_count == "") {
        $blog_details->post_count = "0";
    }

    // Grab the five most recent published posts from the author's blog
    $posts = $wpdb->get_col("SELECT ID FROM wp_" . $curauth->primary_blog . "_posts
        WHERE post_status='publish' AND post_type='post'
        AND post_author='$author->ID' ORDER BY ID DESC LIMIT 5");

    $postHTML = "";
    $i = 0;
    foreach($posts as $p) {
        $postdetail = get_blog_post($curauth->primary_blog, $p);
        if ($i == 0) {
            $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($postdetail->post_date));
        }
        $postHTML .= "&#149; <a href=\"$postdetail->guid\">$postdetail->post_title</a><br />";
        $i++;
    }
?>

The preceding code sets up the theme and queries the WordPress database for authors. In WordPress multisite, users who have the Author permission type have a blog on the network. There's also code for grabbing posts from each of the network sites, for displaying the recent posts from them:

<div class="author_bio">
<div class="row">
<div class="column grid_2">
<a href="<?php echo $blog_details->siteurl; ?>"><?php echo get_avatar($curauth->user_email, '96', 'http://www.gravatar.com/avatar/ad516503a11cd5ca435acc9bb6523536'); ?></a>
</div>
<div class="column grid_6">
<a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?> - <?=$blog_details->blogname?>"><?=$curauth->display_name;?></a><br />
<small><strong>Updated <?=$updatedOn?></strong></small><br />
<?php echo $curauth->description; ?>
</div>
<div class="column grid_3">
<h3>Recent Posts</h3>
<?=$postHTML?>
</div>
</div>
<span class="post_count"><a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?>"><?=$blog_details->post_count?><br /><?=$postText?></a></span>
</div>
<?php } ?>
<!-- end blog directory -->

<?php wp_link_pages( array( 'before' => '<div class="page-link">' . __( 'Pages:', 'twentythirteen' ), 'after' => '</div>' ) ); ?>
<?php edit_post_link( __( 'Edit', 'twentythirteen' ), '<span class="edit-link">', '</span>' ); ?>

</div><!-- .entry-content -->
</div><!-- #post-<?php the_ID(); ?> -->

<?php comments_template( '', true ); ?>

</div><!-- #content -->
</div><!-- #container -->

<?php //get_sidebar(); ?>
<?php get_footer(); ?>

Once you've got your blog-directory.php template file created, we can actually get started by setting up the page to serve as our blog directory. You'll need to set the root site's theme to your child theme; do it just as you would on a nonmultisite WordPress installation. Before we go further, let's create a couple of network sites so we have something to see on our blog directory.
Go to the Network Admin dashboard, mouse over the Sites menu option in the left-hand side menu, and then click on Add New. If you're using a directory network type, as I am, the value you enter for the Site Address field will be the path to the directory that the site sits in. So, if you enter tyler as the Site Address value, the site can be reached at http://multisite.longren.org/tyler/. The settings that I used to set up multisite.longren.org/tyler/ can be seen in the following screenshot. You'll probably want to add a couple of sites just so you get a good idea of what your blog directory page will look like.

Example individual site setup

Now we can set up the actual blog directory page. On the main dashboard (that is, /wp-admin/index.php), mouse over the Pages menu item on the left-hand side of the page and then click on Add New to create a new page. I usually name this page Home, as I use the blog directory as the first page that visitors see when visiting the site. From there, visitors can choose which blog they want to visit and are also shown a list of the most recent posts from each blog. There's no need to enter any content on the page, unless you want to. The important part is selecting the Blog Directory template. Before you publish your new Home / blog directory page, make sure that you select Blog Directory as the Template value in the Page Attributes section. An example of a Home / blog directory page can be seen in the following screenshot:

Example Home / blog directory page setup

Once you've got your page looking like the example shown in the previous screenshot, you can go ahead and publish that page. The Update button in the previous screenshot will say Publish if you've not yet published the page. Next you'll want to set the newly created Home / blog directory page as the front page for the site. To do this, mouse over the Settings menu option on the left-hand side of the page and then click on Reading.
For the Front page displays value, check A static page (select below). Previously, Your latest posts was checked. Then, for the Front Page drop-down menu, select the Home page that we just created and click on the Save Changes button at the bottom of the page. I usually don't set anything for the Posts page drop-down menu because I never post to the "parent" site. If you do intend to make posts on the parent site, I'd suggest that you create a new blank page titled Posts and then select that page as your Posts page. The reading settings I use at multisite.longren.org can be seen in the following screenshot:

Reading settings setup

After you've saved your reading settings, open up your parent site in your browser and you should see something similar to what I showed in the blog directory example screenshot. Again, there's no need for you to keep the exact setup that I've used in the example blog-directory.php file. You can give it any style/design that you want and rearrange the various pieces on the page as you prefer. You should probably have a decent working knowledge of HTML and CSS to accomplish this, however.

You should have a basic blog directory at this point. If you have any experience with PHP, HTML, and CSS, you can probably extend this basic code and do a whole lot more with it. The number of plugins available for WordPress is astounding, and they are generally of very good quality; no other CMS can claim anything like the number of plugins that WordPress does, and I think Automattic has done great things for WordPress in general.

Summary

You should be able to effectively manage themes and plugins in a multisite installation now. If you set up the code, you've got a directory showcasing network member content and, more importantly, know how to set up and customize a WordPress child theme.
Packt
05 Sep 2013
10 min read
Chef Infrastructure

First, let's talk about the terminology used in the Chef universe. A cookbook is a collection of recipes – codifying the actual resources that should be installed and configured on your node – and the files and configuration templates needed. Once you've written your cookbooks, you need a way to deploy them to the nodes you want to provision. Chef offers multiple ways for this task. The most widely used way is to use a central Chef Server. You can either run your own or sign up for Opscode's Hosted Chef.

The Chef Server is the central registry where each node needs to get registered. The Chef Server distributes the cookbooks to the nodes based on their configuration settings. Knife is Chef's command-line tool used to interact with the Chef Server. You use it for uploading cookbooks and managing other aspects of Chef. On your nodes, you need to install Chef Client – the part that retrieves the cookbooks from the Chef Server and executes them on the node.

In this article, we'll see the basic infrastructure components of your Chef setup at work and learn how to use the basic tools. Let's get started with having a look at how to use Git as a version control system for your cookbooks.

Using version control

Do you manually back up every file before you change it? And do you invent creative filename extensions like _me and _you when you try to collaborate on a file? If you answered yes to any of the preceding questions, it's time to rethink your process. A version control system (VCS) helps you stay sane when dealing with important files and collaborating on them. Using version control is a fundamental part of any infrastructure automation. There are multiple solutions (some free, some paid) for managing source version control, including Git, SVN, Mercurial, and Perforce. Due to its popularity among the Chef community, we will be using Git. However, you could easily use any other version control system with Chef.
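As a quick taste of what a VCS buys you — assuming Git is already installed on your box — here is a throwaway session showing that every committed version of a file stays recoverable. The file name and contents are invented purely for illustration:

```shell
# A taste of version control: every committed change is recorded and recoverable.
set -e
demo=$(mktemp -d) && cd "$demo"      # throwaway playground directory
git init --quiet
git config user.email you@example.com
git config user.name "You"

echo 'server config v1' > app.conf
git add app.conf
git commit --quiet -m "first version"

echo 'server config v2' > app.conf   # overwrite the file...
git add app.conf
git commit --quiet -m "second version"

git log --oneline                    # two commits: both versions are kept
git show HEAD~1:app.conf             # prints: server config v1
```

No manual backup copies, no `_me`/`_you` suffixes — the history itself holds every earlier state.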
Getting ready

You'll need Git installed on your box. Either use your operating system's package manager (such as Apt on Ubuntu or Homebrew on OS X), or simply download the installer from www.git-scm.org.

Git is a distributed version control system. This means that you don't necessarily need a central host for storing your repositories. But in practice, using GitHub as your central repository has proven to be very helpful. In this article, I'll assume that you're using GitHub. Therefore, you need to go to github.com and create a (free) account to follow the instructions given in this article. Make sure that you upload your SSH key following the instructions at https://help.github.com/articles/generating-ssh-keys, so that you're able to use the SSH protocol to interact with your GitHub account. As soon as you've created your GitHub account, you should create your repository by visiting https://github.com/new and using chef-repo as the repository name.

How to do it...

Before you can write any cookbooks, you need to set up your initial Git repository on your development box. Opscode provides an empty Chef repository to get you started. Let's see how you can set up your own Chef repository with Git using Opscode's skeleton.

Download Opscode's skeleton Chef repository as a tarball:

mma@laptop $ wget http://github.com/opscode/chef-repo/tarball/master
...TRUNCATED OUTPUT...
2013-07-05 20:54:24 (125 MB/s) - 'master' saved [9302/9302]

Extract the downloaded tarball:

mma@laptop $ tar xzvf master

Rename the directory. Replace 2c42c6a with whatever your downloaded tarball contained in its name:

mma@laptop $ mv opscode-chef-repo-2c42c6a/ chef-repo

Change into your newly created Chef repository:

mma@laptop $ cd chef-repo/

Initialize a fresh Git repository:

mma@laptop:~/chef-repo $ git init .
Initialized empty Git repository in /Users/mma/work/chef-repo/.git/

Connect your local repository to your remote repository on github.com.
Make sure to replace mmarschall with your own GitHub username:

mma@laptop:~/chef-repo $ git remote add origin git@github.com:mmarschall/chef-repo.git

Add and commit Opscode's default directory structure:

mma@laptop:~/chef-repo $ git add .
mma@laptop:~/chef-repo $ git commit -m "initial commit"
[master (root-commit) 6148b20] initial commit
10 files changed, 339 insertions(+), 0 deletions(-)
create mode 100644 .gitignore
...TRUNCATED OUTPUT...
create mode 100644 roles/README.md

Push your initialized repository to GitHub. This makes it available to all your co-workers to collaborate on it.

mma@laptop:~/chef-repo $ git push -u origin master
...TRUNCATED OUTPUT...
To git@github.com:mmarschall/chef-repo.git
* [new branch] master -> master

How it works...

You've downloaded a tarball containing Opscode's skeleton repository. Then, you've initialized your chef-repo and connected it to your own repository on GitHub. After that, you've added all the files from the tarball to your repository and committed them. This makes Git track your files and the changes you make later. As a last step, you've pushed your repository to GitHub, so that your co-workers can use your code too.

There's more...

Let's assume you're working on the same chef-repo repository together with your co-workers. They cloned your repository, added a new cookbook called other_cookbook, committed their changes locally, and pushed their changes back to GitHub. Now it's time for you to get the new cookbook down to your own laptop.

Pull your co-workers' changes from GitHub. This will merge their changes into your local copy of the repository:

mma@laptop:~/chef-repo $ git pull
From github.com:mmarschall/chef-repo
 * branch master -> FETCH_HEAD
...TRUNCATED OUTPUT...
create mode 100644 cookbooks/other_cookbook/recipes/default.rb

In the case of any conflicting changes, Git will help you merge and resolve them.
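The whole push/pull round trip above can be rehearsed entirely offline by letting a local bare repository stand in for GitHub. The sketch below does exactly that — all paths, the cookbook name, and the user identity are throwaway examples, not part of Opscode's skeleton:

```shell
# Rehearse the push/pull round trip offline: a local bare repository
# stands in for GitHub. All paths here are throwaway examples.
set -e
WORK=$(mktemp -d)

git init --bare --quiet "$WORK/chef-repo.git"          # the "GitHub" remote

# Your clone: add a cookbook skeleton, commit, and push it.
git clone --quiet "$WORK/chef-repo.git" "$WORK/mine" 2>/dev/null
cd "$WORK/mine"
git config user.email you@example.com
git config user.name "You"
mkdir -p cookbooks/other_cookbook/recipes
touch cookbooks/other_cookbook/recipes/default.rb
git add .
git commit --quiet -m "add other_cookbook"
git push --quiet origin HEAD

# A co-worker's fresh clone receives the cookbook, as git pull would.
git clone --quiet "$WORK/chef-repo.git" "$WORK/coworker" 2>/dev/null
ls "$WORK/coworker/cookbooks"                          # -> other_cookbook
```

The second clone plays the role of your co-worker's laptop: the cookbook committed and pushed from the first clone arrives there through the shared remote, which is all `git pull` does on an already-cloned repository.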
Installing Chef on your workstation

If you want to use Chef, you'll need to install it on your local workstation first. You'll have to develop your configurations locally and use Chef to distribute them to your Chef Server. Opscode provides a fully packaged version, which does not have any external prerequisites. This fully packaged Chef is called the Omnibus Installer. We'll see how to use it in this section.

Getting ready

Make sure you have curl installed on your box by following the instructions available at http://curl.haxx.se/download.html.

How to do it...

Let's see how to install Chef on your local workstation using Opscode's Omnibus Chef installer. In your local shell, run the following command:

mma@laptop:~/chef-repo $ curl -L https://www.opscode.com/chef/install.sh | sudo bash
Downloading Chef...
...TRUNCATED OUTPUT...
Thank you for installing Chef!

Add the newly installed Ruby to your path:

mma@laptop:~ $ echo 'export PATH="/opt/chef/embedded/bin:$PATH"' >> ~/.bash_profile && source ~/.bash_profile

How it works...

The Omnibus Installer will download Ruby and all the required Ruby gems into /opt/chef/embedded. By adding the /opt/chef/embedded/bin directory to your .bash_profile, the Chef command-line tools become available in your shell.

There's more...

If you already have Ruby installed on your box, you can simply install the Chef Ruby gem by running:

mma@laptop:~ $ gem install chef

Using the Hosted Chef platform

If you want to get started with Chef right away (without the need to install your own Chef Server) or want a third party to give you a Service Level Agreement (SLA) for your Chef Server, you can sign up for Hosted Chef by Opscode. Opscode operates Chef as a cloud service. It's quick to set up and gives you full control, using users and groups to control the access permissions to your Chef setup. We'll configure Knife, Chef's command-line tool, to interact with Hosted Chef, so that you can start managing your nodes.
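Before moving on to Hosted Chef, it's worth seeing why that PATH line from the Omnibus install works: the shell searches PATH left to right, so whatever directory comes first wins. The demonstration below uses a temporary directory and a fake `ruby` script as stand-ins for /opt/chef/embedded/bin and the embedded Ruby — none of these names are real Omnibus artifacts:

```shell
# Why prepending to PATH works: the shell searches PATH left to right,
# so a directory listed first wins. DEMO stands in for /opt/chef/embedded/bin.
set -e
DEMO=$(mktemp -d)

# A stand-in "ruby" that identifies itself as the embedded copy.
printf '#!/bin/sh\necho embedded-ruby\n' > "$DEMO/ruby"
chmod +x "$DEMO/ruby"

PATH="$DEMO:$PATH"           # same effect as the .bash_profile line above

command -v ruby              # -> $DEMO/ruby, ahead of any system ruby
ruby                         # prints: embedded-ruby
```

This is exactly why the Omnibus-embedded Chef tools (and its Ruby) shadow any copies your distribution already ships, without touching the system installation.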
Getting ready

Before being able to use Hosted Chef, you need to sign up for the service. There is a free account for up to five nodes. Visit http://www.opscode.com/hosted-chef and register for a free trial or the free account. I registered as the user webops with an organization short-name of awo. After registering your account, it is time to prepare your organization to be used with your chef-repo repository.

How to do it...

Carry out the following steps to interact with Hosted Chef:

Navigate to http://manage.opscode.com/organizations. After logging in, you can start downloading your validation keys and configuration file.

Select your organization to be able to see its contents using the web UI.

Regenerate the validation key for your organization and save it as <your-organization-short-name>.pem in the .chef directory inside your chef-repo repository.

Generate the Knife config and put the downloaded knife.rb into the .chef directory inside your chef-repo directory as well. Make sure you replace webops with the username you chose for Hosted Chef and awo with the short-name you chose for your organization:

current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "webops"
client_key "#{current_dir}/webops.pem"
validation_client_name "awo-validator"
validation_key "#{current_dir}/awo-validator.pem"
chef_server_url "https://api.opscode.com/organizations/awo"
cache_type 'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path ["#{current_dir}/../cookbooks"]

Use Knife to verify that you can connect to your hosted Chef organization. It should only have your validator client so far. Instead of awo, you'll see your organization's short-name:

mma@laptop:~/chef-repo $ knife client list
awo-validator

How it works...

Hosted Chef uses two private keys (called validators): one for the organization and the other for every user. You need to tell Knife where it can find these two keys in your knife.rb file.
The following two lines of code in your knife.rb file tell Knife which organization to use and where to find its private key:

validation_client_name "awo-validator"
validation_key "#{current_dir}/awo-validator.pem"

The following line of code in your knife.rb file tells Knife where to find your user's private key:

client_key "#{current_dir}/webops.pem"

And the following line of code in your knife.rb file tells Knife that you're using Hosted Chef. You will find your organization name as the last part of the URL:

chef_server_url "https://api.opscode.com/organizations/awo"

Using the knife.rb file and your two validators, Knife can now connect to your organization hosted by Opscode. You do not need your own, self-hosted Chef Server, nor do you need to use Chef Solo in this setup.

There's more...

This setup is good for you if you do not want to worry about running, scaling, and updating your own Chef Server and if you're happy with saving all your configuration data in the cloud (under Opscode's control). If you need to have all your configuration data within your own network boundaries, you might sign up for Private Chef, which is a fully supported and enterprise-ready version of Chef Server. If you don't need any advanced enterprise features like role-based access control or multi-tenancy, then the open source version of Chef Server might be just right for you.

Summary

In this article, we learned about key concepts such as cookbooks, roles, and environments and how to use some basic tools such as Git, Knife, Chef Shell, Vagrant, and Berkshelf.
Packt
19 Feb 2010
4 min read
Working with Data Application Components in SQL Server 2008 R2

A Data Application Component (DAC) is an entity that integrates all data-tier related objects used in authoring, deploying, and managing into a single unit, instead of requiring you to work with them separately. Programmatically, DACs are represented by classes found in the Microsoft.SqlServer.Management.Dac namespace. DACs are stored in a DacStore and managed centrally. DACs can be authored and built using SQL Server Data-Tier Application templates in VS2010 (now in Beta 2) or using SQL Server Management Studio. This article describes creating a DAC using SQL Server 2008 R2 Nov-CTP (called the R2 server in this article), a new feature in this version.

Overview of the article

To work through this example you need to download SQL Server 2008 R2 Nov-CTP. The ease with which this installs depends on the Windows OS on your machine. I encountered problems installing it on Windows XP SP3, where only some of the files were installed. On Windows 7 Ultimate it installed very easily. This article uses the R2 Server installed on Windows 7 Ultimate. You can download the R2 Server from this link after registering at the site. Download the x32 version, a 1.2 GB file.

In order to work with Data-Tier Applications in Visual Studio you need to install Visual Studio 2010 (now in Beta 2). If you have installed Beta 1, take it out (use Add/Remove programs) before you install Beta 2. You would create a Database Project as shown in the next figure.

In the following sections we will look at how to extract a DAC from an existing database using tools in SSMS and the R2 Server. This will be followed by deploying the DAC to a SQL Server 2008 (pre-R2 version). In a future article we will see how to create and work with DACs in Visual Studio.

Extracting a DAC

We will use the Extract a Data-Tier Application wizard to create a DAC file. Connect to the SQL Server 2008 Server in the Management Studio as shown.
We will create a DAC package that will create a DAC file for us on completing this task. Right-click the Pubs database and click on Tasks | Extract Data-Tier Application... You may also use any other database for working with this exercise. This brings up the wizard as shown in the next figure. Read the notes and review the database icons on this window. Click Next.

The Set Properties page of the wizard gets displayed. The Application name will be the database name with which you started. You can change it if you like. The package file name will reflect the application name. The version is set at 1.0.0.0, but you may specify any version you like. You can create different DACs with different version numbers. Click Next.

After validation completes, the Validation & Summary page gets displayed as shown. The file path of the package, the name of the package, and the DAC objects that got into the package are all shown here. All database objects (tables, views, stored procedures, etc.) are included in the package. Click the Save Report button to save the packaging info to a file. This saves the HTML file ExtractDACSummary_HODENTEK3_pubs_20100125 to the SQL Server Management Studio folder. This report shows what objects were validated during this process as shown. Click Next.

The Build the Package page opens, and after the build process completes you will be able to save the package as shown in the next picture. At the package location shown earlier you will see a package object as shown. This file can be unpacked to a destination as well as opened with the Microsoft SQL Server DAC Package File Unpack wizard. These actions can be accessed by right-clicking the package file.