
How-To Tutorials - Web Development

Packt
22 Aug 2013
13 min read

Packing Everything Together

Creating a package

When you are distributing your extensions, the problem you are helping your customer solve often cannot be achieved with a single extension; it requires multiple components, modules, and plugins that work together. Rather than making the user install all of these extensions manually one by one, you can package them together to create a single install package. Our click-to-call plugin and folio component go together nicely, so let's package them together.

Create a folder named pkg_folio_v1.0.0 on your desktop, and within it, create a folder named packages. Copy into the packages folder the latest version of com_folio and plg_content_clicktocall, for example, com_folio_v2.7.0.zip and plg_content_clicktocall_v1.2.0.zip. Now create a file named pkg_folio.xml in the root of the pkg_folio_v1.0.0 folder, and add the following code to it:

<?xml version="1.0" encoding="UTF-8" ?>
<extension type="package" version="3.0">
    <name>Folio Package</name>
    <author>Tim Plummer</author>
    <creationDate>May 2013</creationDate>
    <packagename>folio</packagename>
    <license>GNU GPL</license>
    <version>1.0.0</version>
    <url>www.packtpub.com</url>
    <packager>Tim Plummer</packager>
    <packagerurl>www.packtpub.com</packagerurl>
    <description>Single Install Package combining Click To Call plugin with Folio component</description>
    <files folder="packages">
        <file type="component" id="folio">com_folio_v2.7.0.zip</file>
        <file type="plugin" id="clicktocall" group="content">plg_content_clicktocall_v1.2.0.zip</file>
    </files>
</extension>

This looks pretty similar to the installation XML file that we created for each component; however, there are a few differences. Firstly, the extension type is package:

<extension type="package" version="3.0">

We have some new tags that help us describe what this package is and who made it.
The person creating the package may be different from the original author of the extensions:

<packagename>folio</packagename>
<packager>Tim Plummer</packager>
<packagerurl>www.packtpub.com</packagerurl>

You will notice that we are looking for our extensions in the packages folder; however, this folder could have any name you like:

<files folder="packages">

For each extension, we need to say what type of extension it is, what its name is, and the file containing it:

<file type="component" id="folio">com_folio_v2.7.0.zip</file>

You can package together as many components, modules, and plugins as you like, but be aware that some servers have a quite low maximum size for uploaded files, so if you try to package too much together, you may run into problems. You might also get timeout issues if the file is too big. You'll avoid most of these problems if you keep the package file under a couple of megabytes.

You can install packages via Extension Manager in the same way you install any other Joomla! extension. However, you will notice that the package is listed in addition to all of the individual extensions within it.

Setting up an update server

Joomla! has built-in update software that allows you to easily update your core Joomla! version, often referred to as one-click updates (even though it usually takes a few clicks to launch them). This update mechanism is also available to third-party Joomla! extensions; however, it involves setting up an update server. You can try this out on your local development environment. To do so, you will need two Joomla! sites: http://localhost/joomla3, which will be our update server, and http://localhost/joomlatest, which will be the site on which we are going to update the extensions. Note that the update server does not need to be a Joomla! site; it could be any folder on a web server. Install our click-to-call plugin on the http://localhost/joomlatest site, and make sure it's enabled and working.
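Before wiring up the XML files, it helps to see what the update check ultimately does: the site fetches the update XML from the server and compares the advertised version with the version currently installed. The sketch below is our own illustration of that comparison in JavaScript; Joomla! itself performs this check in PHP, so treat the function name and logic as illustrative only.

```javascript
// Hedged sketch: decide whether an update should be offered by comparing
// dotted version strings part by part. Not Joomla!'s actual code.
function isNewerVersion(available, installed) {
    var a = available.split('.').map(Number);
    var b = installed.split('.').map(Number);
    for (var i = 0; i < Math.max(a.length, b.length); i++) {
        var x = a[i] || 0, y = b[i] || 0;
        if (x !== y) { return x > y; }
    }
    return false; // equal versions: nothing to offer
}

// The update server advertises 1.2.1; the site has 1.2.0 installed,
// so an update should be shown.
console.log(isNewerVersion('1.2.1', '1.2.0')); // true
console.log(isNewerVersion('1.2.0', '1.2.0')); // false
```

This is why, later on, the <version> value in the update XML must be higher than the installed version before anything shows up in the Extension Manager.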
To enable the update manager to check for updates, we need to add some code to the clicktocall.xml installation XML file under /plugins/content/clicktocall/:

<?xml version="1.0" encoding="UTF-8"?>
<extension version="3.0" type="plugin" group="content" method="upgrade">
    <name>Content - Click To Call</name>
    <author>Tim Plummer</author>
    <creationDate>April 2013</creationDate>
    <copyright>Copyright (C) 2013 Packt Publishing. All rights reserved.</copyright>
    <license>http://www.gnu.org/licenses/gpl-3.0.html</license>
    <authorEmail>example@packtpub.com</authorEmail>
    <authorUrl>http://packtpub.com</authorUrl>
    <version>1.2.0</version>
    <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!</description>
    <files>
        <filename plugin="clicktocall">clicktocall.php</filename>
        <filename plugin="clicktocall">index.html</filename>
    </files>
    <languages>
        <language tag="en-GB">language/en-GB/en-GB.plg_content_clicktocall.ini</language>
    </languages>
    <config>
        <fields name="params">
            <fieldset name="basic">
                <field name="phoneDigits1" type="text" default="4"
                    label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL"
                    description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC" />
                <field name="phoneDigits2" type="text" default="4"
                    label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL"
                    description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC" />
            </fieldset>
        </fields>
    </config>
    <updateservers>
        <server type="extension" priority="1" name="Click To Call Plugin Updates">http://localhost/joomla3/updates/clicktocall.xml</server>
    </updateservers>
</extension>

The type can either be extension or collection; in most cases you'll be using extension, which allows you to update a single extension, as opposed to collection, which allows you to update multiple extensions via a single file:

type="extension"

When you have multiple update servers, you can set a different priority for each, so you can control
the order in which the update servers are checked. If the first one is available, it won't bother checking the rest:

priority="1"

The name attribute describes the update server; you can put whatever value you like in here:

name="Click To Call Plugin Updates"

We have told the extension where it is going to check for updates, in this case http://localhost/joomla3/updates/clicktocall.xml. Generally, this should be a publicly accessible site so that users of your extension can check for updates. Note that you can specify multiple update servers for redundancy.

Now, on your http://localhost/joomla3 site, create a folder named updates and put the usual index.html file in it. Copy in the latest version of your plugin, for example, plg_content_clicktocall_v1.2.1.zip. You may wish to make a minor visual change so you can see whether the update actually worked. For example, you could edit the en-GB.plg_content_clicktocall.ini language file under /language/en-GB/, and then zip it all back up again:

PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL="Digits first part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC="How many digits in the first part of the phone number?"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL="Digits last part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC="How many digits in the second part of the phone number?"

Now create the clicktocall.xml file with the following code in your updates folder:

<?xml version="1.0" encoding="utf-8"?>
<updates>
    <update>
        <name>Content - Click To Call</name>
        <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!
        </description>
        <element>clicktocall</element>
        <type>plugin</type>
        <folder>content</folder>
        <client>0</client>
        <version>1.2.1</version>
        <infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>
        <downloads>
            <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>
        </downloads>
        <targetplatform name="joomla" version="3.1" />
    </update>
</updates>

This file could be called anything you like; it does not need to be extensionname.xml, as long as it matches the name you set in your installation XML for the extension. The updates tag surrounds all the update elements. Each time you release a new version, you will need to create another update section. Also, if your extension supports both Joomla! 2.5 and Joomla! 3, you will need separate <update> definitions for each version. And if you want to support updates for both Joomla! 3.0 and Joomla! 3.1, you will need separate tags for each of them.

The value of the name tag is shown in the Extension Manager Update view, so using the same name as your extension should avoid confusion:

<name>Content - Click To Call</name>

The value of the description tag is shown when you hover over the name in the update view.

The value of the element tag is the installed name of the extension. This should match the value in the element column in the jos_extensions table in your database:

<element>clicktocall</element>

The value of the type tag describes whether this is a component, module, or plugin:

<type>plugin</type>

The value of the folder tag is only required for plugins, and describes the type of plugin this is, in our case a content plugin. Depending on your plugin type, this may be system, search, editor, user, and so on:

<folder>content</folder>

The value of the client tag describes the client_id in the jos_extensions table, which tells Joomla! whether this is a site (0) or an administrator (1) extension type.
Plugins will always be 0 and components will always be 1; however, modules can vary depending on whether it's a frontend or a backend module:

<client>0</client>

Plugins must have <folder> and <client> elements, otherwise the update check won't work.

The value of the version tag is the version number for this release. This version number needs to be higher than the currently installed version of the extension for available updates to be shown:

<version>1.2.1</version>

The infourl tag is optional, and allows you to show a link to information about the update, such as release notes:

<infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>

The downloads tag shows all of the available download locations for the update. The value of the downloadurl tag is the URL to download the extension from. This file could be located anywhere you like; it does not need to be in the updates folder on the same site. The type attribute describes whether this is a full package or an update, and the format attribute defines the package type, such as zip or tar:

<downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>

The targetplatform tag describes the Joomla! version this update is meant for. The value of the name attribute should always be set to joomla. If you want to target your update to a specific Joomla! version, you can use min_dev_level and max_dev_level here, but in most cases you'd want your update to be available for all Joomla! versions in that Joomla! release. Note that min_dev_level and max_dev_level are only available in Joomla! 3.1 or higher:

<targetplatform name="joomla" version="3.1" />

So, now you should have the following files in your http://localhost/joomla3/updates folder:
clicktocall.xml
index.html
plg_content_clicktocall_v1.2.1.zip

You can make sure the XML file works by typing the full URL http://localhost/joomla3/updates/clicktocall.xml. As the update server was not defined in our extension when we installed it, we need to manually add an entry to the jos_update_sites table in our database before the updates will work.

Now go to your http://localhost/joomlatest site and log in to the backend. From the menu, navigate to Extensions | Extension Manager, and then click on the Update menu on the left-hand side. Click on the Find Updates button, and you should see the update, which you can install. Select the Content - Click To Call update and press the Update button, and you should see the successful update message. If all went well, you should now see the visual changes that you made to your plugin.

These built-in updates are pretty good, so why doesn't every extension developer use them? They work great for free extensions, but there is a flaw that prevents many extension developers from using this mechanism: there is no way to authenticate the user when they are updating. Essentially, this means that anyone who gets hold of your extension or knows the details of your update server can get ongoing free updates forever, regardless of whether they have purchased your extension or are an active subscriber. Many commercial developers have either implemented their own update solutions or don't bother using the update manager, as their customers can install new versions via the Extension Manager over the top of previous versions. Although this approach is slightly inconvenient for the end user, it is easier for the developer to control the distribution. One developer who has come up with his own solution to this is Nicholas K. Dionysopoulos from Akeeba, and he has kindly shared his solution, the Akeeba Release System, which you can get for free from his website and easily integrate into your own extensions.
As usual, Nicholas has excellent documentation that you can read if you are interested, but it's beyond the scope of this book to go into detail about this alternative solution (https://www.akeebabackup.com/products/akeeba-release-system.html).

Summary

Now you know how to package up your extensions and get them ready for distribution. You also learnt how to set up an update server, so you can easily provide your users with the latest version of your extensions.

Packt
20 Aug 2013
6 min read

Creating sheet objects and starting a new list using QlikView 11

How it works...

To add the list box for a company, right-click in the blank area of the sheet, and choose New Sheet Object | List Box as shown in the following screenshot:

As you can see in the drop-down menu, there are multiple types of sheet objects to choose from, such as List Box, Statistics Box, Chart, Input Box, Current Selections Box, Multi Box, Table Box, Button, Text Object, Line/Arrow Object, Slider/Calendar Object, and Bookmark Object. We will only cover a few of them in the course of this article. The Help menu and the extended examples available on the QlikView website will allow you to explore ideas beyond the scope of this article. The Help documentation for any item can be obtained by using the Help menu present on the top menu bar.

Choose the List Box sheet object to add the company dimension to our analysis. The New List Box wizard has eight tabs: General, Expressions, Sort, Presentation, Number, Font, Layout, and Caption, as shown in the following screenshot:

Give the new List Box the title Company. The Object ID will be system generated. We choose the Company field from the fields available in the data file that we loaded. We can check the Show Frequency box to show frequency in percent, which will only tell us how many account lines in October were loaded for each company.

In the Expressions tab, we can add formulas for analyzing the data. Here, click on Add and choose Average. Since we only have numerical data in the Amount field, we will use the Average aggregation for the Amount field. Don't forget to click on the Paste button to move your expression into the expression checker. The expression checker will tell you if the expression format is valid or if there is a syntax problem. If you forget to move your expression into the expression checker with the Paste button, the expression will not be saved and will not appear in your application.
The Sort tab allows you to change the sort criteria from text to numeric or dates; we will not change the sort criteria here. The Presentation tab allows you to adjust things such as column or row header wrap, cell borders, and background pictures. The Number tab allows us to override the default format to tell the sheet to format the data as money, percentage, or date, for example. We will use this tab on our table box, currently labeled Sum(Amount), to format the amount as money after we have finished creating our new company list box. The Font tab lets us choose the font that we want to use, its display size, and whether to make it bold. The Layout tab allows us to establish and apply themes, and to format the appearance of the sheet object, in this case the list box. The Caption tab further formats the sheet object and, in the case of the list box, allows you to choose the icons that will appear in its top menu, so that we can use those icons to select and clear selections in our list box. In this example, we have selected search, select all, and clear.

We can see that the percentage contribution to the amount and the average amount are displayed in our list box. Now, we need to edit our straight table sheet object. Right-click on the straight table sheet object and choose Properties from the pop-up menu. In the General tab, give the table a suitable name; in this case, use Sum of Accounts. Then move over to the Number tab and choose Money for the number format. Click on Apply to immediately apply the number format, and click on OK to close the wizard. Now our straight table sheet object has easier-to-read dollar amounts.
One of the things we notice immediately in our analysis is that we are out of balance by one dollar and fifty-nine cents, as shown in the following screenshot:

We can analyze our data just using the list boxes, by selecting a company from the Company list and seeing which account groups and cost centers are included (white) and which are excluded (gray). Our selected company is highlighted in green. By selecting Cheyenne Holding, we can see that it is indeed a holding company and has no manufacturing groups, sales accounting groups, or cost centers. Also, the company is in balance. But what about a more graphic visual analysis?

To create a chart to further visualize and analyze our data, we are going to create a new sheet object. This time we will create a bar chart so that we can see the various company contributions to administrative costs or sales by the Acct.5 field, the account number. Just as when we created the company list box, we right-click on the sheet and choose New Sheet Object | Chart. This opens the Chart Properties wizard:

We follow the steps through the chart wizard by giving the chart a name and selecting the chart type and the dimensions we want to use. Again, our expression is going to be SUM(Amount), but we will use the Label option and name it Total Amount in the Expression tab. We select the Company and Acct.5 dimensions in the Dimension tab, and take the defaults for the rest of the wizard tabs. When we close the wizard, the new bar chart appears on our sheet, and we can continue our analysis. In the following screenshot, we have chosen Cheyenne Manufacturing as our Company and Sales/COS Trade to Mexico Branch as the Account Group. These two selections then show us, in our straight table, the cost centers that are associated with sales/COS trade to the Mexico branch.
In our bar chart, we see the individual accounts associated with sales/COS trade to the Mexico branch and Cheyenne Manufacturing, along with the related amounts posted for these accounts.

Summary

We created more sheet objects, starting with a new list box, to begin analyzing our loaded data. We also added dimensions for analysis.

Packt
20 Aug 2013
5 min read

Highcharts

Creating a line chart with a time axis and two Y axes

We will now create the code for this chart. You start by implementing the constructor of your Highcharts chart:

var chart = $('#myFirstChartContainer').highcharts({});

We will now set the different sections inside the constructor. We start with the chart section. Since we'll be creating a line chart, we define the type element with the value line. Then, we implement the zoom feature by setting the zoomType element. You can set the value to x, y, or xy depending on which axes you want to be able to zoom. For our chart, we will implement the possibility to zoom on the x axis:

chart: {
    type: 'line',
    zoomType: 'x'
},

We define the title of our chart:

title: {
    text: 'Energy consumption linked to the temperature'
},

Now, we create the x axis. We set the type to datetime because we are using time data, and we remove the title by setting the text to null. You need to set a null value in order to disable the title of the xAxis:

xAxis: {
    type: 'datetime',
    title: {
        text: null
    }
},

We then configure the Y axes. As defined, we add two Y axes with the titles Temperature and Energy consumed (in KWh), and we give both a minimum value of 0. We set the opposite parameter to true for the second axis in order to place it on the right side:

yAxis: [{
    title: {
        text: 'Temperature'
    },
    min: 0
}, {
    title: {
        text: 'Energy consumed (in KWh)'
    },
    opposite: true,
    min: 0
}],

We will now customize the tooltip section. We use the crosshairs option in order to have a line for our tooltip that we will use to follow the values of both series. Then, we set the shared value to true in order to have the values of both series on the same tooltip:

tooltip: {
    crosshairs: true,
    shared: true
},

Further, we set the series section. For datetime axes, you can set your series section in two different ways.
You can use the first way when your data follows a regular time interval and the second way when your data doesn't necessarily follow a regular time interval. We will use both ways by setting the two series with two different options.

The first series follows a regular interval. For this series, we set the pointInterval parameter, which defines the data interval in milliseconds; for our chart, we set an interval of one day. We set the pointStart parameter with the date of the first value, and then set the data section with our values. The tooltip section is set with the valueSuffix element, where we define the suffix to be added after the value inside our tooltip. We set the yAxis element with the axis we want to associate with our series; because we want to attach this series to the first axis, we set the value to 0 (zero).

For the second series, we will use the second way because our data does not necessarily follow regular intervals (you can also use this way even if your data follows a regular interval). We set our data in pairs, where the first element represents the date and the second element represents the value. We also override the tooltip section of the second series. We then set the yAxis element with the value 1 because we want to associate this series with the second axis. For your chart, you can also set your date values with a timestamp value instead of using the JavaScript function Date.UTC.
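The two series formats describe the same shape of data; the pointStart/pointInterval form is shorthand that gets expanded into one timestamp per value. The following sketch of that expansion is our own illustration, not Highcharts internals:

```javascript
// Hedged sketch: expanding the pointStart/pointInterval shorthand into the
// explicit [timestamp, value] pair format used by the second series.
var DAY = 24 * 3600 * 1000;            // one day in milliseconds
var pointStart = Date.UTC(2013, 0, 1); // 01 Jan 2013 (months are 0-based)
var temperatures = [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2];

var pairs = temperatures.map(function (value, i) {
    return [pointStart + i * DAY, value];
});

// pairs[1] is [Date.UTC(2013, 0, 2), 16.2]: the second reading, one day later.
```

Either form plots identically on a datetime axis; the shorthand just saves you from repeating the dates when the data is evenly spaced.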
series: [{
    name: 'Temperature',
    pointInterval: 24 * 3600 * 1000,
    pointStart: Date.UTC(2013, 0, 1),
    data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
    tooltip: {
        valueSuffix: ' °C'
    },
    yAxis: 0
}, {
    name: 'Electricity consumption',
    data: [
        [Date.UTC(2013, 0, 1), 8.1],
        [Date.UTC(2013, 0, 2), 6.2],
        [Date.UTC(2013, 0, 3), 7.3],
        [Date.UTC(2013, 0, 5), 7.1],
        [Date.UTC(2013, 0, 6), 12.3],
        [Date.UTC(2013, 0, 7), 10.2]
    ],
    tooltip: {
        valueSuffix: ' KWh'
    },
    yAxis: 1
}]

You should have this as the final code:

$(function () {
    var chart = $('#myFirstChartContainer').highcharts({
        chart: {
            type: 'line',
            zoomType: 'x'
        },
        title: {
            text: 'Energy consumption linked to the temperature'
        },
        xAxis: {
            type: 'datetime',
            title: {
                text: null
            }
        },
        yAxis: [{
            title: {
                text: 'Temperature'
            },
            min: 0
        }, {
            title: {
                text: 'Electricity consumed'
            },
            opposite: true,
            min: 0
        }],
        tooltip: {
            crosshairs: true,
            shared: true
        },
        series: [{
            name: 'Temperature',
            pointInterval: 24 * 3600 * 1000,
            pointStart: Date.UTC(2013, 0, 1),
            data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
            tooltip: {
                valueSuffix: ' °C'
            },
            yAxis: 0
        }, {
            name: 'Electricity consumption',
            data: [
                [Date.UTC(2013, 0, 1), 8.1],
                [Date.UTC(2013, 0, 2), 6.2],
                [Date.UTC(2013, 0, 3), 7.3],
                [Date.UTC(2013, 0, 5), 7.1],
                [Date.UTC(2013, 0, 6), 12.3],
                [Date.UTC(2013, 0, 7), 10.2]
            ],
            tooltip: {
                valueSuffix: ' KWh'
            },
            yAxis: 1
        }]
    });
});

You should have the expected result as shown in the following screenshot:

Summary

In this article, we worked with some of the most important features of Highcharts. We created a line chart with a time axis and two Y axes, and saw that there is a wide variety of things you can do with it.

Packt
20 Aug 2013
4 min read

Working with remote data

Getting ready

Create a new document in your editor.

How to do it...

Copy the following code into your new document:

<!DOCTYPE html>
<html>
<head>
    <title>Kendo UI Grid How-to</title>
    <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.common.min.css">
    <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.default.min.css">
    <script src="kendo/js/jquery.min.js"></script>
    <script src="kendo/js/kendo.web.min.js"></script>
</head>
<body>
    <h3 style="color:#4f90ea;">Exercise 12 - Working with Remote Data</h3>
    <p><a href="index.html">Home</a></p>
    <script type="text/javascript">
        $(document).ready(function () {
            var serviceURL = "http://gonautilus.com/kendogen/KENDO.cfc?method=";
            var myDataSource = new kendo.data.DataSource({
                transport: {
                    read: {
                        url: serviceURL + "getArt",
                        dataType: "JSONP"
                    }
                },
                pageSize: 20,
                schema: {
                    model: {
                        id: "ARTISTID",
                        fields: {
                            ARTID: { type: "number" },
                            ARTISTID: { type: "number" },
                            ARTNAME: { type: "string" },
                            DESCRIPTION: { type: "CLOB" },
                            PRICE: { type: "decimal" },
                            LARGEIMAGE: { type: "string" },
                            MEDIAID: { type: "number" },
                            ISSOLD: { type: "boolean" }
                        }
                    }
                }
            });
            $("#myGrid").kendoGrid({
                dataSource: myDataSource,
                pageable: true,
                sortable: true,
                columns: [
                    { field: "ARTID", title: "Art ID" },
                    { field: "ARTISTID", title: "Artist ID" },
                    { field: "ARTNAME", title: "Art Name" },
                    { field: "DESCRIPTION", title: "Description" },
                    { field: "PRICE", title: "Price", template: '#= kendo.toString(PRICE,"c") #' },
                    { field: "LARGEIMAGE", title: "Large Image" },
                    { field: "MEDIAID", title: "Media ID" },
                    { field: "ISSOLD", title: "Sold" }
                ]
            });
        });
    </script>
    <div id="myGrid"></div>
</body>
</html>

How it works...

This example shows you how to access a JSONP remote datasource. JSONP allows you to work with cross-domain remote datasources. The JSONP format is like JSON except that it adds padding, which is what the "P" in JSONP stands for.
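To make the padding concrete, here is a small sketch of the JSONP round trip, independent of Kendo UI: the client registers a global callback function, and the server replies with a script that calls that function with the JSON as its argument. The callback name (handleArt) and the data are hypothetical:

```javascript
// Hedged sketch of the JSONP mechanism.
// 1. The client registers a callback before inserting the <script> tag;
//    the callback name is normally appended to the URL (e.g. ...&callback=handleArt).
var received = null;
function handleArt(data) {
    received = data;
}

// 2. The server's response body is executable JavaScript, not bare JSON:
//    the JSON is "padded" with a call to the requested callback.
var responseBody = 'handleArt([{"ARTID": 1, "ARTNAME": "Sunrise"}]);';

// 3. The browser executes the response as a script, invoking the callback.
//    (eval here stands in for <script> tag execution.)
eval(responseBody);

console.log(received[0].ARTNAME); // "Sunrise"
```

Because the response is loaded as a script rather than fetched with XMLHttpRequest, it is not subject to the same-origin policy, which is what makes cross-domain datasources possible.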
The padding can be seen if you look at the result of the AJAX call made by the Kendo Grid: the server simply responds with the callback argument that was passed and wraps the JSON in parentheses. You'll notice that we created a serviceURL variable that points to the service we are calling to return our data, and that in the read configuration we are calling the getArt method and specifying the value of dataType as JSONP. Everything else should look familiar.

There's more...

Generally, the most common format used for remote data is JavaScript Object Notation (JSON). You'll find several examples of using OData on the Kendo UI demo website, along with examples of performing create, update, and delete operations.

Outputting JSON with ASP MVC

In an ASP MVC or ASP.NET application, you'll want to set up your datasource like the following example. ASP has certain security requirements that force you to use POST instead of the default GET request when making AJAX calls. ASP also requires that you explicitly define the value of contentType as application/json when requesting JSON. By default, when you create an ASP MVC service that returns a JsonResult action, ASP will nest the JSON data in an element named d:

var dataSource = new kendo.data.DataSource({
    transport: {
        read: {
            type: "POST",
            url: serviceURL,
            dataType: "JSON",
            contentType: "application/json",
            data: serverData
        },
        parameterMap: function (data, operation) {
            return kendo.stringify(data);
        }
    },
    schema: {
        data: "d"
    }
});

Summary

This article discussed how to bind a Kendo UI Grid to remote data, using a cross-domain JSONP datasource.

Packt
19 Aug 2013
22 min read

Installing Magento

Installing Magento locally

Whether you're working on a Windows computer, a Mac, or a Linux machine, you will soon notice that it comes in handy to have a local Magento test environment available. Magento is a complex system, and besides doing regular tasks such as adding products and other content, you should never apply changes directly to your live store. When you're working on your own, a local test system is easy to set up and gives you the possibility to test changes without any risk. When you're working in a team, it makes sense to have a test environment running on your own server or hosting provider. Here, we'll start by explaining how to set up your local test system.

Requirements

Before we jump into action, it's good to have a closer look at Magento's requirements. What do you need to run it? All up-to-date requirements for Magento can be found at http://www.magentocommerce.com/system-requirements, but that may be a bit overwhelming if you are just a beginner. So let's break it down to the most essential parts:

Operating system: Linux. Magento runs best on Linux, as offered by most hosting companies. Don't worry about your local test environment, as that will run on Windows or Mac as well. But for your live store you should go for a Linux solution, because running a live store on anything other than Linux is not supported.

Web server: Apache. Magento runs on Versions 1.3.x, 2.0.x, and 2.2.x of this very popular web server. As of Version 1.7 of Magento Community and Version 1.12 of Magento Enterprise, the Nginx web server is compatible as well.

Programming language: PHP. Magento has been developed using PHP, a very popular programming language. Many major open source solutions, such as WordPress and Joomla!, have been built using PHP. Use Versions 5.2.13 - 5.3.15.
Do not use PHP4 anymore, nor use PHP 5.4 yet! PHP extensions Magento requires a number of extensions, which should be available on top of PHP itself. You will need: PDO_MySQL, mcrypt, hash, simplexml, GD, DOM, Iconv, and Curl. Besides that you also need to have the possibility to switch off ''safe mode''. You do not have a clue about all of this? Don't worry. A host offering Magento services already takes care of this. And for your local environment there are only a few additional steps to take. We'll get there in a minute. Database: MySQL MySQL is the database, where Magento will store all data for your store. Use Version 4.1.20 or (and preferably) newer. As you can see, even in a simplified format, there are quite some things that need to be taken care of. Magento hosting is not as simple as hosting for a small WordPress or Joomla! website, currently the most popular open source solutions to create a regular site. The requirements are higher and you just cannot expect to host your store for only a couple of dollars per month. If you do, your online store may still work, but it is likely that you'll run into some performance issues. Be careful with the cheapest hosting solutions. Although Magento may work, you'll be consuming too that need server resources soon. Go for a dedicated server or a managed VPS (Virtual Private Server), but definitely for a host that is advertising support of Magento. Time for action – installing Magento on a Windows machine We'll speak more deeply about Magento hosting later on. Let's first download and install the package on a local Windows machine. Are you a Mac user? Don't worry, we'll give instructions for Mac users as well later on. Note that the following instructions are written for Windows users, but will contain valuable information for Mac users as well. Perform the following steps to install Magento on your Windows computer: Download the Magento installation package. 
Head over to http://www.magentocommerce.com/download and download the package you need. For a Windows user, the full ZIP package is almost always the most convenient one. In our situation Version 1.7.0.2 is the latest one, but please be aware that this will certainly change over time as newer versions are released. You will need to create a (free) account to download the software. This account will also be helpful later on: it gives you access to the Magento support forums, so make sure to store your login details somewhere. The download screen should look something like this:

If you're a beginner, it is handy to have some sample data in your store. Magento offers a download package containing sample data on the same page, so download that as well. Note that for a production environment you would never install the sample data, but for a test system like the local installation we're doing here, it is a good idea to use it. The sample data will create a few items and customers in your store, which will make the learning process easier.

Did you notice the links to Magento Go at every download link? Magento Go is Magento's online platform, which you can use out of the box, without doing any installation at all. However, in the remaining part of this article, we assume that you are going to set up your own environment and want to have full control over your store.

Next, you need a web server, so that you can run your website locally, on your own machine. On Windows machines, XAMPP is an easy-to-use all-in-one solution. Download the installer version via http://www.apachefriends.org/en/xampp-windows.html. XAMPP is also available for Mac and Linux. The download screen is as follows:

Once downloaded, run the executable to start the installation process.
You might receive some security warnings that you have to accept, especially when you're using Windows Vista, 7, or 8, as in the following example: Because of this, it's best to install XAMPP directly in the root of your hard drive, C:\xampp in most cases. Once you click on OK, you will see the following screen, which shows the progress of the installation:

Once the installation has finished, the software asks if you'd like to start the Control Panel. If you do so, you'll see a number of services that have not been started yet. As a minimum, start Apache (the web server) and MySQL (the database server) by clicking their Start buttons. Now you're running your own web server on your local computer. Be aware that generally this web server will not be accessible to the outside world; it's running on your local machine, just for testing purposes.

Before doing the next step, please verify that your web server is actually running. You can do so by pointing your browser at http://localhost or http://127.0.0.1. If all went well, you should see something similar to the following:

No result? If you're on a Windows computer, please first reboot your machine. Next, check in the XAMPP control panel whether the Apache service is running. If it isn't, try to start it and pay attention to the error messages that appear. Need more help? Start with the help available on XAMPP's website at http://www.apachefriends.org/en/faq-xampp-windows.html.

Can't start the Apache service? Check whether any other applications are using ports 80 and 443; the XAMPP control panel will give you more information. One application that you should stop before starting XAMPP is Skype. Alternatively, you can change this setting in Skype by navigating to Tools | Options | Advanced | Connections, changing the port number to something else (for instance, port 8080), and then closing and restarting Skype. This prevents the two from interfering with each other in the future.
So, the next thing that needs to be done is installing Magento on top of it. But before we do so, we first have to change a few settings.

Change the following Windows file: C:\Windows\System32\drivers\etc\hosts. Make sure to open your editor with administrator rights, otherwise you will not be able to save your changes. Add the following line to the hosts file: 127.0.0.1 www.localhost.com. This is needed because Magento will not work correctly on a plain localhost without it. You may use a different name, but the general rule is that at least one dot must be used in the local domain name. The following screenshot gives an example of a possible hosts file. Please note that every hosts file will look a bit different. Also, your security software or Windows security settings may prevent you from making changes to this file, so please make sure you have the appropriate rights to change and save its contents:

Do you need a text editor? There are really lots of possibilities when it comes to editing text for the web, as long as you use a "plain text" editor. Something like Microsoft Word isn't suitable, because it will add a lot of unwanted code to your files! For very simple changes like the one above, even Notepad would work, but you'll soon notice that it is much more convenient to use an editor that helps you structure and format your files. Personally, I can recommend the free Notepad++ for Windows users, which is even available in lots of different languages: http://notepad-plus-plus.org. Mac users can have a look at Coda (http://panic.com/coda/) or TextWrangler (http://www.barebones.com/products/textwrangler/).

Unzip the downloaded Magento package and put all files in a subfolder of your XAMPP installation, for instance C:\xampp\htdocs\magento. Now, go to www.localhost.com/magento to check whether the installation screen of Magento is visible, as shown in the following screenshot. But do not start the installation process yet!
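Incidentally, the hosts-file addition described above boils down to appending a single line, which is easy to script. The sketch below runs against a temporary scratch file so it is safe to try anywhere; to apply it for real, point HOSTS_FILE at the actual hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux or Mac) and run it with administrator or root rights:

```shell
# Scratch copy for demonstration; replace with the real hosts file path
# (and elevated rights) to apply the change for real.
HOSTS_FILE="$(mktemp)"
ENTRY="127.0.0.1 www.localhost.com"

# Append the entry only if it is not already present, so reruns are safe.
grep -qF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"

grep -c "www.localhost.com" "$HOSTS_FILE"   # prints 1
```

Because the append is guarded by the grep check, running the script twice will not duplicate the entry.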
Before you start the installation, first create a MySQL database. To do this, use a second browser tab and navigate to localhost | phpMyAdmin. By default the user is root with no password, so you should be able to continue without logging in. Click on Databases and create a database with a name of your choice. Write it down, as you will need it during the Magento installation. After creating the database you may close the browser tab.

It's finally time to start the installation process. Go back to the installation screen of Magento, accept the license agreement, and click on Continue. Next, set your country, Time Zone, and Default Currency. Working with multiple currencies will be addressed later on:

The next screen is actually the most important one of the installation process, and this is where most beginners go wrong, because they do not know what values to use. Using XAMPP this is an easy task: fill in your Database Name and User Name (root), and do not forget to check the Skip Base URL Validation Before the Next Step box, otherwise your installation might fail:

In this same form there are some fields that you can use to immediately improve the security level of your Magento setup. On a local test environment that isn't necessary, so we'll pay attention to those settings later on, when we discuss installing Magento at a hosting provider. Please note that the Use Secure URLs option should remain unchecked for a local installation like the one we're doing here.

In the last step (yes, really!), just fill out your personal data and choose a username and password. Here too, since you're working locally, you do not have to create a complicated, unique password now. But you know what we mean, right? Doing a live installation at a hosting provider requires a good, strong password! You do not have to fill in the Encryption Key field; Magento will do that for you:

In the final screen, please just make a note of the Encryption Key value that was generated.
You might need it in the future whenever you upgrade your Magento store to a newer software version:

What just happened?

Congratulations! You just installed Magento for the very first time! Summarizing it, you just:

Downloaded and installed XAMPP
Changed your Windows hosts file
Created a MySQL database using phpMyAdmin
Installed Magento

I'm on Mac; what should I do?

Basically, the steps using XAMPP are a bit different if you're using Mac. We shall use Mac OS X 10.8 as our example version of Mac OS. In our experience, MAMP is a bit easier to work with than XAMPP if you are on a Mac. You can find the MAMP software at http://www.mamp.info/en/downloads/index.html, and the documentation for MAMP is available at http://documentation.mamp.info/en/mamp/installation.

The good thing about MAMP is that it is easy to install, with very few configuration changes. It will not conflict with an Apache installation already running on your Mac, in case you have one. And it's easy to delete as well: just removing the Mamp folder from your Applications folder is sufficient to delete MAMP and all local websites running on it.

Once you've downloaded the package, it will be in the Downloads folder of your Mac. If you are running Mac OS X 10.8, you first need to set the correct security settings to install MAMP. You can find out which version of Mac OS X you have using the menu option in the top-left corner of your screen:

You can find the security settings menu by again going to the Apple menu and then selecting System Preferences:

In System Preferences, select the Security & Privacy icon, which can be found in the first row, as seen in the following screenshot:

In here, press the padlock and enter your admin password. Next, select the Anywhere radio button in the Allow applications downloaded from: section.
This is necessary because it will not be possible to run the MAMP installer you downloaded without it:

Open the image you've downloaded and simply move the Mamp folder to your Applications folder. That's all. Now that you have MAMP installed on your system, you may launch MAMP.app (located at Applications | Mamp | Mamp.app). While you're editing your MAMP settings, MAMP might prompt you for an administrator password. This is required because it needs to run two processes: httpd (Apache) and mysqld (MySQL). Depending on the settings you choose for those processes, you may or may not need to enter your password.

Once you open MAMP, click on the Preferences button, and then click on Ports. The default MAMP ports are 8888 for Apache and 8889 for MySQL. If you use this configuration, you will not be asked for your password, but you will need to include the port number in the URL when using it (http://localhost:8888). You may change this by setting the Apache port to 80, for which you'll probably have to enter your administrator password. If you have placed your Magento installation in the Shop folder, it is advisable to call your Magento installation through the URL http://127.0.0.1:8888/shop/ instead of http://localhost:8888/shop/. The reason for this is that Magento may require dots in the URL.

The last thing you need to do is visit the Apache tab, where you'll need to set a document root. This is where all of the files for your local web server are going to be stored. An example of a document root is Users | Username | Sites.

To start the Apache and MySQL servers, simply click on Start Servers in the main MAMP screen. After the MAMP servers start, the MAMP start page should open in your web browser. If it doesn't, click on Open start page in the MAMP window. From there, please select phpMyAdmin. In phpMyAdmin, you can create a database and start the Magento installation procedure, just as we did when installing Magento on a Windows machine.
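As an alternative to phpMyAdmin, on Windows or Mac, the database can also be created with the mysql command-line client that ships with XAMPP and MAMP. The sketch below only builds and prints the statement; the database name magento and the utf8 character set are example choices, not requirements. Piping the printed statement into `mysql -u root` on the test machine (root with an empty password is the default in both stacks) would execute it:

```shell
DB_NAME="magento"   # example name: pick your own and write it down

# Accept only plain MySQL identifiers (letters, digits, underscore).
case "$DB_NAME" in
  *[!A-Za-z0-9_]*|"") echo "invalid database name" >&2; exit 1 ;;
esac

SQL="CREATE DATABASE $DB_NAME CHARACTER SET utf8;"
echo "$SQL"   # prints: CREATE DATABASE magento CHARACTER SET utf8;

# To actually run it against the local test server:
#   echo "$SQL" | mysql -u root
```

Validating the name up front avoids quoting surprises later, since the same name has to be typed into the Magento installation wizard.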
See the Time for action – installing Magento on a Windows machine section, point 8, to continue the installation of Magento. Of course, you now need to put the Magento files in your Mamp folder, instead of the Windows path mentioned in that procedure. In some cases, it is necessary to change the Read & Write permissions of your Magento folder before you can use Magento on a Mac. To do that, right-click on the Magento folder and select the Get Info option. At the bottom of the resulting screen, you will see the folder permissions. Set all of these to Read & Write if you have trouble running Magento.

Installing Magento at a hosting service

There are thousands of hosting providers with as many different hosting setups. The difficulty of explaining the installation of Magento at a commonly used hosting service is that the procedure differs from provider to provider, depending on the tools they use for their services. Some providers use Plesk, for instance, others DirectAdmin or cPanel. Although these user environments differ from each other, the basic steps always remain the same:

Check the requirements of Magento (there's more information on this topic at the beginning of this article).
Upload the Magento installation files using an FTP tool, for instance the free Filezilla (http://filezilla-project.org).
Create a database. This step differs slightly per hosting provider, but often a tool such as phpMyAdmin is used. Ask your hosting provider if you're in doubt about this step. You will need: the database name, database user, password, and the name of the database server.
Browse to your domain and run the Magento installation process, which is the same as we saw earlier in this article.

How to choose a Magento hosting provider

One important thing we didn't discuss yet in this article is selecting a hosting provider that is capable of running your online store.
We already mentioned that you should not expect performance for a couple of dollars per month. Magento will often still run at a cheap hosting service, but the performance is regularly very poor. So, you should pay attention to your choices here and make sure you make the right decision. Of course, everything depends on the expectations for your online store. You do not need to aim for top performance if all you expect to do during your first few years is 10,000 dollars of revenue per year. Admittedly, that is difficult to judge sometimes; it's not always possible to create a detailed estimate of the revenue you may expect. So, let's see what you should pay attention to:

Does the hosting provider mention Magento on its website, or maybe even offer special Magento hosting packages? If yes, you can be sure that technically Magento will run. There are even hosting providers for which Magento hosting is their speciality.

Are you serious about your future Magento store? Then ask for references! Clients already running Magento at this hosting provider can tell you more about the performance and customer support levels. Sometimes a hosting provider also offers an optimized demo store, which you can check out to see how it performs.

Ask whether the hosting provider has Magento experts working for them and, if yes, how many. Especially in the case of large, high-traffic stores, it is important to be able to hire the knowledge you need.

Do not forget to check online forums and do some research about the provider. That said, you will find negative customer experiences about almost every hosting provider.

Are you just searching for a hosting provider to play around with Magento? In that case any cheap hosting provider would do, although your Magento store could be very slow. Take, for instance, Hostgator (http://hostgator.com), which offers small hosting plans for a couple of U.S. dollars per month.
Anyway, a lot of hosts offer a free trial period, which you may use to test the performance.

Installatron

Can't this all be done a bit more easily? Yes, it can. If your host offers a service named Installatron, and Magento is included in it, your installation process becomes a lot easier; we could almost call it a "one-click" installation procedure. Check whether your hosting provider offers the latest Magento version; this may not always be the case! Of course, you may ask your (future) hosting provider whether they offer Installatron with their hosting packages. The example shown is from Simple Helix (http://simplehelix.com), a well-known provider specializing in Magento hosting solutions.

Time for action – installing Magento using Installatron

The following short procedure shows the steps you need to take to install Magento using Installatron:

First, locate the Installatron Applications Installer icon in the administration panel of your hosting provider. Normally this is very easy to find, right after logging in:

Next, within Installatron Applications Installer, click on the Applications Browser option:

Inside Applications Browser, you'll see a list of CMS solutions and webshop software that you can install. Generally, Magento can be found in the e-Commerce and Business group:

Click on Magento, and after that click on the Install this application button. The next screen is the setup wizard for installing Magento. It lists a bunch of default settings, such as the admin username, database settings, and the like. We recommend changing as little as possible for your first installation. You should pick the right location to install to, though! In our example, we will choose the test directory on www.boostingecommerce.com:

Note that for this installation, we've chosen to install the Magento sample data, which will help us in explaining the Magento software.
It's fine if you're installing for learning purposes, but for a store that is meant to be your live shop, it's better to start off completely empty. In the second part of the installation form, there are a few fields that you have to pay attention to:

Switch off automatic updates
Set database management to automatic
Choose a secure administrator password

Click on the Install button when you are done reviewing the form. Installatron will now begin installing Magento. You will receive an e-mail when Installatron is ready. It contains the URL of the site you just installed and the login credentials for your brand-new Magento shop. That's all! Our freshly installed test environment is available at http://www.boostingecommerce.com/test. If all is well, yours should look similar to the following screenshot:

How to test the minimum requirements

If your host isn't offering Installatron and you would like to install Magento there, how will you know if it's possible? In other words, will Magento run? Of course you can simply try to install and run Magento, but it's better to check the minimum requirements before going that route. You can use the following method to test whether your hosting provider meets all the requirements needed to run Magento.

First, create a text file using your favorite editor and name it phpinfo.php. The contents of the file should be:

<?php phpinfo(); ?>

Save and upload this file to the root folder of your hosting environment, using an FTP tool such as Filezilla. Next, open your browser at http://yourdomain.com/phpinfo.php (using your own domain name, of course). You will see a screen similar to the following:

Note that in the preceding screenshot, our XAMPP installation is using PHP 5.4.7, and as we mentioned earlier, Magento isn't compatible with this PHP version yet. So what about that? Well, XAMPP simply ships with a recent stable release of PHP.
Although it is officially not supported, in most cases your Magento test environment will run fine. Something similar to the previous screenshot will be shown, depending on your PHP (and XAMPP) version. Using this result, we can check for any PHP module that is missing: just go through the list at the beginning of this article and verify that everything that is needed is available and enabled.

What is SSL and do I need it?

SSL (Secure Sockets Layer) is the standard for secure transactions on the web. You'll recognize it by websites running on https:// instead of http://. To use it, you need to buy an SSL certificate and add it to your hosting environment. Some hosting providers offer this as a service, whereas others just point to third parties offering SSL certificates, such as RapidSSL (https://www.rapidssl.com) or VeriSign (http://www.verisign.com), currently owned by Symantec. A complete set of instructions on using SSL is beyond the scope of this article. However, it is good to know when you'll need to pay attention to SSL. There can be two reasons to use an SSL certificate:

You are accepting payments directly on your website and may even be storing credit card information. In such a case, make sure that you secure your store using SSL. On the other hand, if you are only using third parties to accept payments, for example Google Checkout or PayPal, you do not have to worry about this part: the transaction is done at the (secure part of the) website of your payment service provider, and in that case you do not need to offer SSL.

However, there's another reason that makes using SSL interesting for all shop owners: trust. Regular shoppers know that https:// connections are secure and might feel just a bit more comfortable closing the sale with you. It might seem a little thing, but getting a new customer to trust you is an essential step of the online purchase process.
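Before wrapping up: going through the phpinfo() output by hand against the requirements list works, but the same verification is easy to script. The sketch below encodes the PHP version window (5.2.13 to 5.3.15) and the required extensions from the start of this article. The module list is hard-coded here so the script is self-contained; on a real server you would capture it with `php -m` instead, as shown in the comment:

```shell
# Turn "major.minor.patch" into a single comparable integer.
ver_num() {
  echo "$1" | awk -F. '{ printf "%d", $1*1000000 + $2*1000 + $3 }'
}

# Magento's supported PHP window: 5.2.13 up to and including 5.3.15.
php_supported() {
  v="$(ver_num "$1")"
  [ "$v" -ge "$(ver_num 5.2.13)" ] && [ "$v" -le "$(ver_num 5.3.15)" ]
}

# Required PHP extensions from the requirements list earlier.
REQUIRED="pdo_mysql mcrypt hash simplexml gd dom iconv curl"

# On a real server you would use:  MODULES="$(php -m)"
# A sample module list is used here so the check runs anywhere.
MODULES="Core
curl
dom
gd
hash
iconv
mcrypt
pdo_mysql
simplexml"

MISSING=""
for ext in $REQUIRED; do
  echo "$MODULES" | grep -qix "$ext" || MISSING="$MISSING $ext"
done

php_supported "5.3.10" && echo "PHP 5.3.10: supported"
[ -z "$MISSING" ] && echo "all required extensions present"
```

With the sample data above, both checks pass; on a host missing, say, mcrypt, the MISSING variable would name it, telling you exactly what to ask your provider to enable.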
Summary

In this article we've gone through several different ways to install Magento. We looked at doing it locally on your own machine using XAMPP or MAMP, or by using a hosting provider to bring your store online. When working with a hosting provider, using the Installatron tool makes the Magento installation very easy.

Resources for Article:

Further resources on this subject: Magento: Exploring Themes [Article] Getting Started with Magento Development [Article] Integrating Facebook with Magento [Article]
Creating a new forum

Packt
19 Aug 2013
6 min read
(For more resources related to this topic, see here.)

In the WordPress Administration, click on New Forum, which is a subpage of the Forums menu item on the sidebar. You will be taken to a screen that is quite similar to a WordPress post creation page, but slightly different, with a few extra areas. If you are not familiar with the WordPress post creation page, the following is a list of the page's features:

The Enter Title Here box

The long box at the top of the page is your forum title. On the forum page, this is what will be clicked on, and it also provides the basis for the forum's URL Slug, with some changes, as URL Slugs generally have to consist of letters, numbers, and dashes. For example, if your forum title is My Product's Support Section, your Slug will probably be my-products-support-section. When you insert the forum title, the URL Slug will be generated below it. However, if you wish to change it, click on the yellow highlighted section, change the Slug, and then click on OK.

The Post box

Beneath the title box is the post box. This should contain your forum description, which will be shown beneath your forum's name on the forum index page. You can add rich text to this, such as bold or italicized text, but my advice is to keep it short. One or two lines of text would suffice; otherwise it could make your forum look peculiar.

Forum attributes

Towards the right-hand side of the screen, you should see a Forum Attributes section. bbPress allows you to set a number of different attributes for your created forum. The attributes are explained in detail as follows:

Forum type: Your forum can be one of two types: "Forum" or "Category". A category is a section of the site where you cannot post, but into which forums are grouped. For example, if you have forums for "Football", "Cricket", and "Athletics", you may group them into a "Sport" category. Unless you have a large forum with a number of different areas, you shouldn't need many categories.
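As an aside, the title-to-Slug conversion described above for the Enter Title Here box (letters, numbers, and dashes only) can be approximated with standard text tools. Treat this as an illustration only; WordPress's own slug sanitization handles many more cases, such as accented characters:

```shell
slugify() {
  # Lowercase, drop apostrophes, collapse every other run of
  # non-alphanumeric characters to a single dash, trim edge dashes.
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed "s/'//g" \
    | sed 's/[^a-z0-9]\{1,\}/-/g' \
    | sed 's/^-//; s/-$//'
}

slugify "My Product's Support Section"   # -> my-products-support-section
```

Note that the apostrophe is removed rather than replaced by a dash, which is why the example title yields my-products-support-section and not my-product-s-support-section.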
Normally you would begin with a few forums, and then introduce categories as your forums grow. If you create a category, any forum you create must be a subforum of the category. We will talk about creating subforums later in this article.

Status: Your forum's status indicates whether other users can post in the forum. If the status is "Open", any user can post in the forum. If the forum is "Closed", nobody can contribute other than Keymasters. Unless one of your forums is a "Forum Rules" forum, you would probably keep all forums Open.

Visibility: bbPress allows three types of forum visibility. These, as the names suggest, decide who gets to see the forums. The three options are as follows:

Public: This type allows anybody visiting the site to see the forum and its contents.
Private: This type allows users who are logged in to view and contribute to the forum, but the forum is hidden from users who are not logged in or who are blocked. Private forums are prefixed with the word "Private".
Hidden: This type allows only Moderators and Keymasters to view the forum.

Most sites will probably have the majority of their forums set to Public, with a selection that is Private or Hidden. Having a Hidden forum to discuss forum matters with Administrators or Moderators is usually a good thing, and a Private forum can help encourage people to register on the site.

Parent: You can have subforums of forums. By giving a parent to the forum, you make it a subforum. For example, if you had a "Travel" forum, you could have subforums dedicated to "Europe", "Australia", and "Asia". Again, you will probably start with just a few forums, but over time you may grow your forum to include subforums.

Order: The Order field helps define the order in which your forums are listed. By default, or if unspecified, the order is always alphabetical.
However, if you give a number, then the order of the forums will be determined by the Order number, from smallest to largest. It is good to put important forums at the top, and less important forums towards the bottom of the page. It's a good idea to number your orders in multiples of 10, rather than 1, 2, 3, and so on. That way, if you want to add a forum between two existing forums, you can give it a number between the two multiples of 10, saving time.

Now that you have set up a forum, click on Publish, and congratulations, you should have a forum!

Editing and deleting forums

Forums are a community, and like all good communities, they evolve over time depending on their users' needs. As such, over time, you may need to restructure or delete forums. Luckily, this is easily done. First, click on Forums in the sidebar of the WordPress Administration. You should see a list of all the current forums you have on your site:

If you hover over a forum, two options will appear. The first is Edit, which allows you to edit the forum: a screen similar to the New Forum page will appear, in which you can make changes to your forum. The second option is Trash, which moves your forum into the Trash; after a while, it will be deleted from your site. When you click on Trash, you will trash everything associated with your forum (any topics, replies, or tags will be deleted). Be careful!

Summary

Right now, you should have a bustling forum, ably overseen by yourself and maybe even a couple of Moderators. Remember that all I have described so far has been how to use bbPress to manage your forum, and not how to manage your forum in general. Each forum will have its own rules and guidelines, and you will gradually learn how to manage your bbPress forum as more and more members join in. A general rule of thumb, though: set out your rules at the start of your forum, welcome change, act quickly on violations, and most importantly, treat your users with respect, as without users you will have a very quiet forum. Finally, bbPress is a WordPress plugin and is itself extensible: it can take advantage of plugins and themes, both those specifically designed for bbPress and those that work with WordPress.

Resources for Article:

Further resources on this subject: Getting Started with WordPress 3 [Article] How to Create an Image Gallery in WordPress 3 [Article] Integrating phpList 2 with WordPress [Article]
Mailbox Database Management

Packt
19 Aug 2013
10 min read
(For more resources related to this topic, see here.)

Determining the average mailbox size per database

PowerShell is very flexible and gives you the ability to generate very detailed reports. When generating mailbox database statistics, we can utilize data returned from multiple cmdlets provided by the Exchange Management Shell. This section shows an example of this: you will learn how to calculate the average mailbox size per database using PowerShell.

How to do it...

To determine the average mailbox size for a given database, use the following one-liner:

Get-MailboxStatistics -Database DB1 |
  ForEach-Object {$_.TotalItemSize.value.ToMB()} |
  Measure-Object -Average |
  Select-Object -ExpandProperty Average

How it works...

Calculating an average is as simple as performing some basic math, but PowerShell gives us the ability to do this quickly with the Measure-Object cmdlet. The example uses the Get-MailboxStatistics cmdlet to retrieve all the mailboxes in the DB1 database. We then loop through each one, retrieving only the TotalItemSize property, and inside the ForEach-Object script block we convert the total item size to megabytes. The result from each mailbox can then be averaged using the Measure-Object cmdlet. At the end of the command, you can see that the Select-Object cmdlet is used to retrieve only the value of the Average property. The number returned here gives us the average size across regular mailboxes, archive mailboxes, and any other type of mailbox, including those that have been disconnected.
If you want to be more specific, you can filter out these mailboxes after running the Get-MailboxStatistics cmdlet:

Get-MailboxStatistics -Database DB1 |
  Where-Object {!$_.DisconnectDate -and !$_.IsArchive} |
  ForEach-Object {$_.TotalItemSize.value.ToMB()} |
  Measure-Object -Average |
  Select-Object -ExpandProperty Average

Notice that, in the preceding example, we have added the Where-Object cmdlet to filter out any mailboxes that have a DisconnectDate defined or where the IsArchive property is $true.

Another thing that you may want to do is round the average. Let's say the DB1 database contained 42 mailboxes and the total size of the database was around 392 megabytes. The value returned from the preceding command would look roughly like 2.39393939393939. Rarely are all those extra decimal places of any use. Here are a couple of ways to make the output a little cleaner:

$MBAvg = Get-MailboxStatistics -Database DB1 |
  ForEach-Object {$_.TotalItemSize.value.ToMB()} |
  Measure-Object -Average |
  Select-Object -ExpandProperty Average
[Math]::Round($MBAvg,2)

You can see that this time, we stored the result of the one-liner in the $MBAvg variable. We then use the Round method of the Math class in the .NET Framework to round the value, specifying that the result should only contain two decimal places. Based on the previous information, the result of the preceding command would be 2.39. We can also use string formatting to specify the number of decimal places to be used:

[PS] "{0:n2}" -f $MBAvg
2.39

Keep in mind that this command will return a string, so if you need to be able to sort on this value, cast it to double:

[PS] [double]("{0:n2}" -f $MBAvg)
2.39

The -f format operator is documented in PowerShell's help system in about_operators.

There's more...

The previous examples have only shown how to determine the average mailbox size for a single database.
To determine this information for all mailbox databases, we can use the following code (save it to a file called size.ps1):

foreach($DB in Get-MailboxDatabase) {
  Get-MailboxStatistics -Database $DB |
    ForEach-Object {$_.TotalItemSize.value.ToMB()} |
    Measure-Object -Average |
    Select-Object @{n="Name";e={$DB.Name}},
      @{n="AvgMailboxSize";e={[Math]::Round($_.Average,2)}} |
    Sort-Object AvgMailboxSize -Desc
}

The result of this command would look something like this: This example is very similar to the one we looked at previously. The difference is that, this time, we are running our one-liner inside a foreach loop for every mailbox database in the organization. When each mailbox database has been processed, we sort the output based on the AvgMailboxSize property.

Restoring data from a recovery database

When it comes to recovering data from a failed database, you have several options, depending on what kind of backup product you are using or how you have deployed Exchange 2013. The ideal method for providing redundancy is to use a DAG, which will replicate your mailbox databases to one or more servers and provide automatic failover in the event of a disaster. However, you may still need to pull old data out of a database restored from a backup. In this section, we will take a look at how you can create a recovery database and restore data from it using the Exchange Management Shell.

How to do it...

First, restore the failed database using the steps required by your current backup solution. For this example, let's say that we have restored the DB1 database file to E:\Recovery\DB1 and the database has been brought to a clean shutdown state.
We can use the following steps to create a recovery database and restore mailbox data:

Create a recovery database using the New-MailboxDatabase cmdlet:

New-MailboxDatabase -Name RecoveryDB `
  -EdbFilePath E:\Recovery\DB1\DB1.edb `
  -LogFolderPath E:\Recovery\DB1 `
  -Recovery `
  -Server MBX1

When you run the preceding command, you will see a warning that the recovery database was created using the existing database file. The next step is to check the state of the database, and then mount it:

Eseutil /mh .\DB1.edb
Eseutil /R E02 /D
Mount-Database -Identity RecoveryDB

Next, query the recovery database for all mailboxes that reside in the database RecoveryDB:

Get-MailboxStatistics -Database RecoveryDB | fl DisplayName,MailboxGUID,LegacyDN

Lastly, we will use the New-MailboxRestoreRequest cmdlet to restore the data from the recovery database for a single mailbox:

New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
  -SourceStoreMailbox "Joe Smith" `
  -TargetMailbox joe.smith

When running the eseutil commands, make sure you are in the folder where the restored mailbox database and logs are placed.

How it works...

When you restore the database file from your backup application, you may need to ensure that the database is in a clean shutdown state. For example, if you are using Windows Server Backup for your backup solution, you will need to use the Eseutil.exe database utility to play any uncommitted logs into the database to get it into a clean shutdown state. Once the data is restored, we can create a recovery database using the New-MailboxDatabase cmdlet, as shown in the first example. Notice that when we ran the command we used several parameters. First, we specified the path to the EDB file and the logfiles, both of which are in the same location where we restored the files. We have also used the -Recovery switch parameter to specify that this is a special type of database that will only be used for restoring data and should not be used for production mailboxes.
Finally, we specified which mailbox server the database should be hosted on using the -Server parameter. Make sure to run the New-MailboxDatabase cmdlet from the mailbox server that you specify in the -Server parameter, and then mount the database using the Mount-Database cmdlet. The last step is to restore data from one or more mailboxes. As we saw in the previous example, New-MailboxRestoreRequest is the tool to use for this task. This cmdlet was introduced in Exchange 2010 SP1, so if you have used this process in the past, the procedure is the same in Exchange 2013.

There's more...

When you run the New-MailboxRestoreRequest cmdlet, you need to specify the identity of the mailbox you wish to restore using the -SourceStoreMailbox parameter. There are three possible values you can use to provide this information: DisplayName, MailboxGuid, and LegacyDN. To retrieve these values, you can use the Get-MailboxStatistics cmdlet once the recovery database is online and mounted:

Get-MailboxStatistics -Database RecoveryDB | fl DisplayName,MailboxGUID,LegacyDN

Here we have specified that we want to retrieve all three of these values for each mailbox in the RecoveryDB database.

Understanding target mailbox identity

When restoring data with the New-MailboxRestoreRequest cmdlet, you also need to provide a value for the -TargetMailbox parameter. The mailbox needs to already exist before running this command. If you are restoring data from a backup for an existing mailbox that has not changed since the backup was made, you can simply provide the typical identity values for a mailbox for this parameter. If you want to restore data to a mailbox that was not the original source of the data, you need to use the -AllowLegacyDNMismatch switch parameter. This is useful if you are restoring data to another user's mailbox, or if you've recreated the mailbox since the backup was taken.
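As a quick sketch of that scenario (the target mailbox name here is purely illustrative), restoring Joe Smith's backed-up data into a different, pre-existing mailbox would combine the same cmdlet with the mismatch switch:

```powershell
# Restore Joe Smith's data from the recovery database into another
# mailbox; -AllowLegacyDNMismatch is required because the target is
# not the original source mailbox. "jane.doe" is an example target.
New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
  -SourceStoreMailbox "Joe Smith" `
  -TargetMailbox jane.doe `
  -AllowLegacyDNMismatch
```

Without the switch, the request would fail the legacy distinguished name comparison between source and target.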
Learning about other useful parameters

The New-MailboxRestoreRequest cmdlet can be used to granularly control how data is restored out of a mailbox. The following parameters may be useful to customize the behavior of your restores:

ConflictResolutionOption: This parameter specifies the action to take if multiple matching messages exist in the target mailbox. The possible values are KeepSourceItem, KeepLatestItem, or KeepAll. If no value is specified, KeepSourceItem is used by default.
ExcludeDumpster: Use this switch parameter to indicate that the dumpster should not be included in the restore.
SourceRootFolder: Use this parameter to restore data only from a root folder of a mailbox.
TargetIsArchive: You can use this switch parameter to perform a mailbox restore to a mailbox archive.
TargetRootFolder: This parameter can be used to restore data to a specific folder in the root of the target mailbox. If no value is provided, the data is restored and merged into the existing folders; if they do not exist, they will be created in the target mailbox.

These are just a few of the useful parameters that can be used with this cmdlet, but there are more. For a complete list of all the available parameters and full details on each one, run Get-Help New-MailboxRestoreRequest -Detailed.

Understanding mailbox restore request cmdlets

There is an entire cmdlet set for mailbox restore requests in addition to the New-MailboxRestoreRequest cmdlet.
The remaining available cmdlets are outlined as follows:

Get-MailboxRestoreRequest: Provides a detailed status of mailbox restore requests
Remove-MailboxRestoreRequest: Removes fully or partially completed restore requests
Resume-MailboxRestoreRequest: Resumes a restore request that was suspended or failed
Set-MailboxRestoreRequest: Can be used to change the restore request options after the request has been created
Suspend-MailboxRestoreRequest: Suspends a restore request any time after the request was created but before it reaches the status of Completed

For complete details and examples for each of these cmdlets, use the Get-Help cmdlet with the appropriate cmdlet using the -Full switch parameter.

Taking it a step further

Let's say that you have restored your database from backup, you have created a recovery database, and now you need to restore each mailbox in the backup to the corresponding target mailboxes that are currently online. We can use the following script to accomplish this:

$mailboxes = Get-MailboxStatistics -Database RecoveryDB
foreach($mailbox in $mailboxes) {
  New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
    -SourceStoreMailbox $mailbox.DisplayName `
    -TargetMailbox $mailbox.DisplayName
}

Here you can see that first we use the Get-MailboxStatistics cmdlet to retrieve all the mailboxes in the recovery database and store the results in the $mailboxes variable. We then loop through each mailbox and restore the data to the original mailbox. You can track the status of these restores using the Get-MailboxRestoreRequest cmdlet and the Get-MailboxRestoreRequestStatistics cmdlet.

Summary

In this article, we covered a small but essential part of mailbox database management: determining the average mailbox size per database and restoring data from a recovery database.
Resources for Article:

Further resources on this subject:
Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article]
Microsoft SQL Azure Tools [Article]
SQL Server 2008 R2: Multiserver Management Using Utility Explorer [Article]
Packt
16 Aug 2013
17 min read
Selecting Elements

(For more resources related to this topic, see here.)

Understanding the DOM

One of the most powerful aspects of jQuery is its ability to make selecting elements in the DOM easy. The DOM serves as the interface between JavaScript and a web page; it provides a representation of the source HTML as a network of objects rather than as plain text. This network takes the form of a family tree of elements on the page. When we refer to the relationships that elements have with one another, we use the same terminology that we use when referring to family relationships: parents, children, and so on. A simple example can help us understand how the family tree metaphor applies to a document:

<html>
  <head>
    <title>the title</title>
  </head>
  <body>
    <div>
      <p>This is a paragraph.</p>
      <p>This is another paragraph.</p>
      <p>This is yet another paragraph.</p>
    </div>
  </body>
</html>

Here, <html> is the ancestor of all the other elements; in other words, all the other elements are descendants of <html>. The <head> and <body> elements are not only descendants, but children of <html> as well. Likewise, in addition to being the ancestor of <head> and <body>, <html> is also their parent. The <p> elements are children (and descendants) of <div>, descendants of <body> and <html>, and siblings of each other. To help visualize the family tree structure of the DOM, we can use a number of software tools, such as the Firebug plugin for Firefox or the Web Inspector in Safari or Chrome. With this tree of elements at our disposal, we'll be able to use jQuery to efficiently locate any set of elements on the page. Our tools to achieve this are jQuery selectors and traversal methods.

Using the $() function

The resulting set of elements from jQuery's selectors and methods is always represented by a jQuery object. Such a jQuery object is very easy to work with when we want to actually do something with the things that we find on a page.
We can easily bind events to these objects and add slick effects to them, as well as chain multiple modifications or effects together. Note that jQuery objects are different from regular DOM elements or node lists, and as such do not necessarily provide the same methods and properties for some tasks. In order to create a new jQuery object, we use the $() function. This function typically accepts a CSS selector as its sole parameter and serves as a factory returning a new jQuery object pointing to the corresponding elements on the page. Just about anything that can be used in a stylesheet can also be passed as a string to this function, allowing us to apply jQuery methods to the matched set of elements.

Making jQuery play well with other JavaScript libraries

In jQuery, the dollar sign ($) is simply an alias for jQuery. Because a $() function is very common in JavaScript libraries, conflicts could arise if more than one of these libraries were being used in a given page. We can avoid such conflicts by replacing every instance of $ with jQuery in our custom jQuery code.

The three primary building blocks of selectors are tag name, ID, and class. They can be used either on their own or in combination with others. The following simple examples illustrate how these three selectors appear in code:

Tag name: CSS p { }, jQuery $('p') — selects all paragraphs in the document.
ID: CSS #some-id { }, jQuery $('#some-id') — selects the single element in the document that has an ID of some-id.
Class: CSS .some-class { }, jQuery $('.some-class') — selects all elements in the document that have a class of some-class.

When we call methods of a jQuery object, the elements referred to by the selector we passed to $() are looped through automatically and implicitly. Therefore, we can usually avoid explicit iteration, such as a for loop, that is so often required in DOM scripting.
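The implicit iteration and chaining described above can be sketched with a tiny stand-in object (MiniQuery and its element records are invented purely for illustration; real jQuery objects wrap actual DOM nodes):

```javascript
// A toy illustration of implicit iteration and chaining.
// Each "element" is just an object carrying a set of class names.
class MiniQuery {
  constructor(elements) {
    this.elements = elements;
  }
  // Like jQuery's addClass: applied to every matched element,
  // and returns `this` so further calls can be chained.
  addClass(name) {
    this.elements.forEach(el => el.classes.add(name));
    return this;
  }
}

const items = [
  { tag: 'li', classes: new Set() },
  { tag: 'li', classes: new Set() },
];

// One call updates every element — no explicit for loop needed —
// and the chained second call runs on the same set.
new MiniQuery(items).addClass('horizontal').addClass('sub-level');

console.log(items.every(el => el.classes.has('horizontal'))); // true
```

The point is only the shape of the API: each jQuery method applies itself across the whole matched set and hands the set back for the next method in the chain.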
Now that we have covered the basics, we're ready to start exploring some more powerful uses of selectors.

CSS selectors

The jQuery library supports nearly all the selectors included in CSS specifications 1 through 3, as outlined on the World Wide Web Consortium's site: http://www.w3.org/Style/CSS/specs. This support allows developers to enhance their websites without worrying about which browsers might not understand more advanced selectors, as long as the browsers have JavaScript enabled.

Progressive Enhancement

Responsible jQuery developers should always apply the concepts of progressive enhancement and graceful degradation to their code, ensuring that a page will render as accurately, even if not as beautifully, with JavaScript disabled as it does with JavaScript turned on. We will continue to explore these concepts throughout the article. More information on progressive enhancement can be found at http://en.wikipedia.org/wiki/Progressive_enhancement.

To begin learning how jQuery works with CSS selectors, we'll use a structure that appears on many websites, often for navigation – the nested unordered list:

<ul id="selected-plays">
  <li>Comedies
    <ul>
      <li><a href="/asyoulikeit/">As You Like It</a></li>
      <li>All's Well That Ends Well</li>
      <li>A Midsummer Night's Dream</li>
      <li>Twelfth Night</li>
    </ul>
  </li>
  <li>Tragedies
    <ul>
      <li><a href="hamlet.pdf">Hamlet</a></li>
      <li>Macbeth</li>
      <li>Romeo and Juliet</li>
    </ul>
  </li>
  <li>Histories
    <ul>
      <li>Henry IV (<a href="mailto:henryiv@king.co.uk">email</a>)
        <ul>
          <li>Part I</li>
          <li>Part II</li>
        </ul>
      </li>
      <li><a href="http://www.shakespeare.co.uk/henryv.htm">Henry V</a></li>
      <li>Richard II</li>
    </ul>
  </li>
</ul>

Notice that the first <ul> has an ID of selected-plays, but none of the <li> tags have a class associated with them. Without any styles applied, the nested list appears as we would expect it to—a set of bulleted items arranged vertically and indented according to their level.
Styling list-item levels

Let's suppose that we want the top-level items, and only the top-level items—Comedies, Tragedies, and Histories—to be arranged horizontally. We can start by defining a horizontal class in the stylesheet:

.horizontal {
  float: left;
  list-style: none;
  margin: 10px;
}

The horizontal class floats the element to the left-hand side of the one following it, removes the bullet from it if it's a list item, and adds a 10-pixel margin on all sides of it. Rather than attaching the horizontal class directly in our HTML, we'll add it dynamically to the top-level list items only, to demonstrate jQuery's use of selectors:

$(document).ready(function() {
  $('#selected-plays > li').addClass('horizontal');
});

Listing 2.1

We begin the jQuery code by calling $(document).ready(), which runs the function passed to it once the DOM has been loaded, but not before. The second line uses the child combinator (>) to add the horizontal class to the top-level items only. In effect, the selector inside the $() function is saying, "Find each list item (li) that is a child (>) of the element with an ID of selected-plays (#selected-plays)". With the class now applied, the rules defined for that class in the stylesheet take effect, which in this case means that the list items are arranged horizontally rather than vertically. Styling all the other items—those that are not in the top level—can be done in a number of ways. Since we have already applied the horizontal class to the top-level items, one way to select all sub-level items is to use a negation pseudo-class to identify all list items that do not have a class of horizontal.
Note the addition of the third line of code:

$(document).ready(function() {
  $('#selected-plays > li').addClass('horizontal');
  $('#selected-plays li:not(.horizontal)').addClass('sub-level');
});

Listing 2.2

This time we are selecting every list item (<li>) that:

Is a descendant of the element with an ID of selected-plays (#selected-plays)
Does not have a class of horizontal (:not(.horizontal))

When we add the sub-level class to these items, they receive the shaded background defined in the stylesheet:

.sub-level {
  background: #ccc;
}

Attribute selectors

Attribute selectors are a particularly helpful subset of CSS selectors. They allow us to specify an element by one of its HTML attributes, such as a link's title attribute or an image's alt attribute. For example, to select all images that have an alt attribute, we write the following:

$('img[alt]')

Styling links

Attribute selectors accept a wildcard syntax inspired by regular expressions for identifying the value at the beginning (^) or end ($) of a string. They can also take an asterisk (*) to indicate the value at an arbitrary position within a string or an exclamation mark (!) to indicate a negated value. Let's say we want to have different styles for different types of links. We first define the styles in our stylesheet:

a {
  color: #00c;
}
a.mailto {
  background: url(images/email.png) no-repeat right top;
  padding-right: 18px;
}
a.pdflink {
  background: url(images/pdf.png) no-repeat right top;
  padding-right: 18px;
}
a.henrylink {
  background-color: #fff;
  padding: 2px;
  border: 1px solid #000;
}

Then, we add the three classes—mailto, pdflink, and henrylink—to the appropriate links using jQuery.
To add a class for all e-mail links, we construct a selector that looks for all anchor elements (a) with an href attribute ([href]) that begins with mailto: (^="mailto:"), as follows:

$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
});

Listing 2.3

Because of the rules defined in the page's stylesheet, an envelope image appears after each mailto: link on the page. To add a class for all the links to PDF files, we use the dollar sign rather than the caret symbol. This is because we're selecting links with an href attribute that ends with .pdf:

$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
  $('a[href$=".pdf"]').addClass('pdflink');
});

Listing 2.4

The stylesheet rule for the newly added pdflink class causes an Adobe Acrobat icon to appear after each link to a PDF document. Attribute selectors can be combined as well. We can, for example, add the class henrylink to all links with an href value that both starts with http and contains henry anywhere:

$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
  $('a[href$=".pdf"]').addClass('pdflink');
  $('a[href^="http"][href*="henry"]').addClass('henrylink');
});

Listing 2.5

With the three classes applied to the three types of links, we should see the PDF icon to the right-hand side of the Hamlet link, the envelope icon next to the email link, and the white background and black border around the Henry V link.

Custom selectors

To the wide variety of CSS selectors, jQuery adds its own custom selectors. These custom selectors enhance the already impressive capabilities of CSS selectors to locate page elements in new ways.

Performance note

When possible, jQuery uses the native DOM selector engine of the browser to find elements. This extremely fast approach is not possible when custom jQuery selectors are used.
For this reason, it is recommended to avoid frequent use of custom selectors when a native option is available and performance is very important. Most of the custom selectors allow us to choose one or more elements from a collection of elements that we have already found. The custom selector syntax is the same as the CSS pseudo-class syntax, where the selector starts with a colon (:). For example, to select the second item from a set of <div> elements with a class of horizontal, we write this:

$('div.horizontal:eq(1)')

Note that :eq(1) selects the second item in the set because JavaScript array numbering is zero-based, meaning that it starts with zero. In contrast, CSS is one-based, so a CSS selector such as $('div:nth-child(1)') would select all div elements that are the first child of their parent. Because it can be difficult to remember which selectors are zero-based and which are one-based, we should consult the jQuery API documentation at http://api.jquery.com/category/selectors/ when in doubt.

Styling alternate rows

Two very useful custom selectors in the jQuery library are :odd and :even.
Let's take a look at how we can use one of them for basic table striping, given the following tables:

<h2>Shakespeare's Plays</h2>
<table>
  <tr>
    <td>As You Like It</td>
    <td>Comedy</td>
    <td></td>
  </tr>
  <tr>
    <td>All's Well that Ends Well</td>
    <td>Comedy</td>
    <td>1601</td>
  </tr>
  <tr>
    <td>Hamlet</td>
    <td>Tragedy</td>
    <td>1604</td>
  </tr>
  <tr>
    <td>Macbeth</td>
    <td>Tragedy</td>
    <td>1606</td>
  </tr>
  <tr>
    <td>Romeo and Juliet</td>
    <td>Tragedy</td>
    <td>1595</td>
  </tr>
  <tr>
    <td>Henry IV, Part I</td>
    <td>History</td>
    <td>1596</td>
  </tr>
  <tr>
    <td>Henry V</td>
    <td>History</td>
    <td>1599</td>
  </tr>
</table>

<h2>Shakespeare's Sonnets</h2>
<table>
  <tr>
    <td>The Fair Youth</td>
    <td>1–126</td>
  </tr>
  <tr>
    <td>The Dark Lady</td>
    <td>127–152</td>
  </tr>
  <tr>
    <td>The Rival Poet</td>
    <td>78–86</td>
  </tr>
</table>

With minimal styles applied from our stylesheet, these headings and tables appear quite plain. The table has a solid white background, with no styling separating one row from the next. Now we can add a style to the stylesheet for all the table rows and use an alt class for the odd rows:

tr {
  background-color: #fff;
}
.alt {
  background-color: #ccc;
}

Finally, we write our jQuery code, attaching the class to the odd-numbered table rows (<tr> tags):

$(document).ready(function() {
  $('tr:even').addClass('alt');
});

Listing 2.6

But wait! Why use the :even selector for odd-numbered rows? Well, just as with the :eq() selector, the :even and :odd selectors use JavaScript's native zero-based numbering. Therefore, the first row counts as zero (even) and the second row counts as one (odd), and so on. With this in mind, we can expect our simple bit of code to produce striped tables. Note that for the second table, this result may not be what we intend. Since the last row in the Plays table has the alternate gray background, the first row in the Sonnets table has the plain white background.
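The zero-based counting that drives :even can be sketched in plain JavaScript (the row labels here are illustrative stand-ins for table rows):

```javascript
const rows = ['row 1', 'row 2', 'row 3', 'row 4'];

// jQuery's :even keeps elements at zero-based indices 0, 2, 4, ...
// which are the 1st, 3rd, 5th, ... rows in human counting.
const evenSelected = rows.filter((_, index) => index % 2 === 0);

console.log(evenSelected); // [ 'row 1', 'row 3' ]
```

Because each table restarts its human-visible numbering but :even counts across the whole matched set, two stacked tables can end up striped out of phase with each other, which is exactly the problem described above.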
One way to avoid this type of problem is to use the :nth-child() selector instead, which counts an element's position relative to its parent element rather than relative to all the elements selected so far. This selector can take a number, odd, or even as its argument:

$(document).ready(function() {
  $('tr:nth-child(odd)').addClass('alt');
});

Listing 2.7

As before, note that :nth-child() is the only jQuery selector that is one-based. To achieve the same row striping as we did earlier—except with consistent behavior for the second table—we need to use odd rather than even as the argument. With this selector in place, both tables are now striped consistently.

Finding elements based on textual content

For one final custom-selector touch, let's suppose for some reason we want to highlight any table cell that refers to one of the Henry plays. All we have to do—after adding a class to the stylesheet to make the text bold and italicized (.highlight { font-weight: bold; font-style: italic; })—is add a line to our jQuery code using the :contains() selector:

$(document).ready(function() {
  $('tr:nth-child(odd)').addClass('alt');
  $('td:contains(Henry)').addClass('highlight');
});

Listing 2.8

So, now we can see our striped table with the Henry plays prominently featured. It's important to note that the :contains() selector is case sensitive. Using $('td:contains(henry)') instead, without the uppercase "H", would select no cells. Admittedly, there are ways to achieve the row striping and text highlighting without jQuery—or any client-side programming, for that matter. Nevertheless, jQuery, along with CSS, is a great alternative for this type of styling in cases where the content is generated dynamically and we don't have access to either the HTML or the server-side code.

Form selectors

The capabilities of custom selectors are not limited to locating elements based on their position.
For example, when working with forms, jQuery's custom selectors and complementary CSS3 selectors can make short work of selecting just the elements we need. The following list describes a handful of these form selectors:

:input: Input, text area, select, and button elements
:button: Button elements and input elements with a type attribute equal to button
:enabled: Form elements that are enabled
:disabled: Form elements that are disabled
:checked: Radio buttons or checkboxes that are checked
:selected: Option elements that are selected

As with the other selectors, form selectors can be combined for greater specificity. We can, for example, select all checked radio buttons (but not checkboxes) with $('input[type="radio"]:checked') or select all password inputs and disabled text inputs with $('input[type="password"], input[type="text"]:disabled'). Even with custom selectors, we can use the same basic principles of CSS to build the list of matched elements.

Summary

With the techniques that we have covered in this article, we should now be able to locate sets of elements on the page in a variety of ways. In particular, we learned how to style top-level and sub-level items in a nested list by using basic CSS selectors, how to apply different styles to different types of links by using attribute selectors, how to add rudimentary striping to a table by using either the custom jQuery selectors :odd and :even or the advanced CSS selector :nth-child(), and how to highlight text within certain table cells by chaining jQuery methods.

Resources for Article:

Further resources on this subject:
Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
jQuery Animation: Tips and Tricks [Article]
New Effects Added by jQuery UI [Article]
Packt
14 Aug 2013
15 min read
Quick start - your first Sinatra application

(For more resources related to this topic, see here.)

Step 1 – creating the application

The first thing to do is set up Sinatra itself, which means creating a Gemfile. Open up a Terminal window and navigate to the directory where you're going to keep your Sinatra applications. Create a directory called address-book using the following command:

mkdir address-book

Move into the new directory:

cd address-book

Create a file called Gemfile:

source 'https://rubygems.org'
gem 'sinatra'

Install the gems via bundler:

bundle install

You will notice that Bundler will not just install the sinatra gem but also its dependencies. The most important dependency is Rack (http://rack.github.com/), which is a common handler layer for web servers. Rack will be receiving requests for web pages, digesting them, and then handing them off to your Sinatra application. If you set up your Bundler configuration as indicated in the previous section, you will now have the following files:

.bundle: This is a directory containing the local configuration for Bundler
Gemfile: As created previously
Gemfile.lock: This is a list of the actual versions of gems that are installed
vendor/bundle: This directory contains the gems

You'll need to understand the Gemfile.lock file. It helps you know exactly which versions of your application's dependencies (gems) will get installed. When you run bundle install, if Bundler finds a file called Gemfile.lock, it will install exactly those gems and versions that are listed there. This means that when you deploy your application on the Internet, you can be sure of which versions are being used and that they are the same as the ones on your development machine. This fact makes debugging a lot more reliable. Without Gemfile.lock, you might spend hours trying to reproduce behavior that you're seeing on your deployed app, only to discover that it was caused by a glitch in a gem version that you haven't got on your machine.
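If you also want to constrain versions in the Gemfile itself, you can add a version requirement to the gem line (the version number below is just an example for illustration, not a recommendation):

```ruby
source 'https://rubygems.org'

# "~> 1.4" is a pessimistic constraint: it allows any 1.x release
# from 1.4 upwards (>= 1.4, < 2.0). Choose whatever suits your app.
gem 'sinatra', '~> 1.4'
```

Gemfile.lock still records the exact version that Bundler resolved; the constraint in the Gemfile only limits which versions Bundler is allowed to consider.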
So now we can actually create the files that make up the first version of our application. Create address-book.rb:

require 'sinatra/base'

class AddressBook < Sinatra::Base
  get '/' do
    'Hello World!'
  end
end

This is the skeleton of the first part of our application. Line 1 loads Sinatra, line 3 creates our application, and line 4 says we handle requests to '/'—the root path. So if our application is running on myapp.example.com, this method will handle requests to http://myapp.example.com/. Line 5 returns the string Hello World!. Remember that a Ruby block or a method without explicit use of the return keyword will return the result of its last line of code.

Create config.ru:

$: << File.dirname(__FILE__)
require 'address-book'

run AddressBook.new

This file gets loaded by rackup, which is part of the Rack gem. Rackup is a tool that runs rack-based applications. It reads the configuration from config.ru and runs our application. Line 1 adds the current directory to the list of paths where Ruby looks for files to load, line 2 loads the file we just created previously, and line 4 runs the application.

Let's see if it works. In a Terminal, run the following command:

bundle exec rackup -p 3000

Here rackup reads config.ru, loads our application, and runs it. We use the bundle exec command to ensure that only our application's gems (the ones in vendor/bundle) get used. Bundler prepares the environment so that the application only loads the gems that were installed via our Gemfile. The -p 3000 option means we want to run a web server on port 3000 while we're developing. Open up a browser and go to http://0.0.0.0:3000; you should see something that looks like the following screenshot:

Illustration 1: The Hello World! output from the application

Logging

Have a look at the output in the Terminal window where you started the application.
I got the following (line numbers are added for reference):

1 [2013-03-03 12:30:02] INFO  WEBrick 1.3.1
2 [2013-03-03 12:30:02] INFO  ruby 1.9.3 (2013-01-15) [x86_64-linux]
3 [2013-03-03 12:30:02] INFO  WEBrick::HTTPServer#start: pid=28551 port=3000
4 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET / HTTP/1.1" 200 12 0.0142
5 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET /favicon.ico HTTP/1.1" 404 445 0.0018

Like it or not, you'll be seeing a lot of logs such as this while doing web development, so it's a good idea to get used to noticing the information they contain.

Line 1 says that we are running the WEBrick web server. This is a minimal server included with Ruby—it's slow and not very powerful, so it shouldn't be used for production applications, but it will do for now for application development.

Line 2 indicates that we are running the application on Version 1.9.3 of Ruby. Make sure you don't develop with older versions, especially the 1.8 series, as they're being phased out and are missing features that we will be using in this book.

Line 3 tells us that the server started and that it is awaiting requests on port 3000, as we instructed.

Line 4 is the request itself: GET /. The number 200 means the request succeeded—it is an HTTP status code that means Success.

Line 5 is a second request created by our web browser. It's asking if the site has a favicon, an icon representing the site. We don't have one, so Sinatra responded with 404 (not found).

When you want to stop the web server, hit Ctrl + C in the Terminal window where you launched it.

Step 2 – putting the application under version control with Git

When developing software, it is very important to manage the source code with a version control system such as Git or Mercurial. Version control systems allow you to look at the development of your project; they allow you to work on the project in parallel with others and also to try out code development ideas (branches) without messing up the stable application.
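As a side note, access-log lines like the ones shown earlier are regular enough to pick apart programmatically. The following sketch shows how the request verb, path, and status code could be extracted from one such line; the regular expression is an assumption about the log format and the snippet is purely illustrative, not part of the application.

```ruby
# One access-log line in the shape WEBrick prints (roughly Common Log Format).
line = '127.0.0.1 - - [03/Mar/2013 12:30:06] "GET / HTTP/1.1" 200 12 0.0142'

# Capture the quoted request line (verb and path) and the status code after it.
match  = line.match(/"(?<verb>\S+) (?<path>\S+) [^"]+" (?<status>\d{3})/)
verb   = match[:verb]
path   = match[:path]
status = match[:status].to_i
```

A quick scan like this is handy when you want to count 404s or spot slow endpoints without reaching for a full log-analysis tool.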
Create a Git repository in this directory:

git init

Now add the files to the repository:

git add Gemfile Gemfile.lock address-book.rb config.ru

Then commit them:

git commit -m "Hello World"

I assume you created a GitHub account earlier. Let's push the code up to www.github.com for safe keeping. Go to https://github.com/new. Create a repo called sinatra-address-book. Set up your local repo to send code to your GitHub account:

git remote add origin git@github.com:YOUR_ACCOUNT/sinatra-address-book.git

Push the code:

git push

You may need to sort out authentication if this is your first time pushing code. So if you get an error such as the following, you'll need to set up authentication on GitHub:

Permission denied (publickey)

Go to https://github.com/settings/ssh and add the public key that you generated in the previous section.

Now you can refresh your browser, and GitHub will show you your code as follows:

Note that the code in my GitHub repository is marked with tags. If you want to follow the changes by looking at the repository, clone my repo from //github.com/joeyates/sinatra-address-book.git into a different directory and then "check out" the correct tag (indicated by a footnote) at each stage. To see the code at this stage, type in the following command:

git checkout 01_hello_world

If you type in the following command, Git will tell you that you have "untracked files", for example, .bundle:

git status

To get rid of the warning, create a file called .gitignore inside the project and add the following content:

/.bundle/
/vendor/bundle/

Git will no longer complain about those directories. Remember to add .gitignore to the Git repository and commit it.

Let's add a README file as the page is requesting, using the following steps:

Create the README.md file and insert the following text:

sinatra-address-book
====================

An example program of various Sinatra functionality.
Add the new file to the repo:

git add README.md

Commit the changes:

git commit -m "Add a README explaining the application"

Send the update to GitHub:

git push

Now that we have a README file, GitHub will stop complaining. What's more is other people may see our application and decide to build on it. The README file will give them some information about what the application does.

Step 3 – deploying the application

We've used GitHub to host our project, but now we're going to publish it online as a working site. In the introduction, I asked you to create a Heroku account. We're now going to use that to deploy our code. Heroku uses Git to receive code, so we'll be setting up our repository to push code to Heroku as well.

Now let's create a Heroku app:

heroku create
Creating limitless-basin-9090... done, stack is cedar
http://limitless-basin-9090.herokuapp.com/ | git@heroku.com:limitless-basin-9090.git
Git remote heroku added

My Heroku app is called limitless-basin-9090. This name was randomly generated by Heroku when I created the app. When you generate an app, you will get a different, randomly generated name. My app will be available on the Web at the http://limitless-basin-9090.herokuapp.com/ address. If you deploy your app, it will be available on an address based on the name that Heroku has generated for it.

Note that, on the last line, Git has been configured too. To see what has happened, use the following command:

git remote show heroku
* remote heroku
  Fetch URL: git@heroku.com:limitless-basin-9090.git
  Push URL:  git@heroku.com:limitless-basin-9090.git
  HEAD branch: (unknown)

Now let's deploy the application to the Internet:

git push heroku master

Now the application is online for all to see:

The initial version of the application, running on Heroku

Step 4 – page layout with Slim

The page looks a bit sad. Let's set up a standard page structure and use a templating language to lay out our pages.
A templating language allows us to create the HTML for our web pages in a clearer and more concise way. There are many HTML templating systems available to the Sinatra developer: erb, haml, and slim are three popular choices. We'll be using Slim (http://slim-lang.com/).

Let's add the gem. Update our Gemfile:

gem 'slim'

Install the gem:

bundle

We will be keeping our page templates as .slim files. Sinatra looks for these in the views directory. Let's create the directory, our new home page, and the standard layout for all the pages in the application.

Create the views directory:

mkdir views

Create views/home.slim:

p address book – a Sinatra application

When run via Sinatra, this will create the following HTML markup:

<p>address book – a Sinatra application</p>

Create views/layout.slim:

doctype html
html
  head
    title Sinatra Address Book
  body
    == yield

Note how Slim uses indenting to indicate the structure of the web page. The most important line here is as follows:

== yield

This is the point in the layout where our home page's HTML markup will get inserted. The yield instruction is where our Sinatra handler gets called. The result it returns (that is, the web page) is inserted here by Slim.

Finally, we need to alter address-book.rb. Add the following line at the top of the file:

require 'slim'

Replace the get '/' handler with the following:

get '/' do
  slim :home
end

Start the local web server as we did before:

bundle exec rackup -p 3000

The following is the new home page:

Using the Slim Templating Engine

Have a look at the source for the page. Note how the results of home.slim are inserted into layout.slim.

Let's get that deployed. Add the new code to Git, starting with the two new files:

git add views/*.slim

Also add the changes made to the other files:

git add address-book.rb Gemfile Gemfile.lock

Commit the changes with a comment:

git commit -m "Generate HTML using Slim"

Deploy to Heroku:

git push heroku master

Check online that everything's as expected.
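For comparison with Slim, erb (one of the alternatives mentioned at the start of this step, and part of Ruby's standard library) does the same templating job using inline tags instead of indentation. This standalone sketch is for illustration only and is not part of the app:

```ruby
require 'erb'

# The same paragraph as home.slim, expressed as an ERB template:
# <%= ... %> interpolates a Ruby expression into the output.
template = ERB.new('<p><%= message %></p>')

message = 'address book – a Sinatra application'
# `binding` hands the template access to the local variables defined here.
html = template.result(binding)
```

Sinatra supports erb out of the box (render with erb :home instead of slim :home), so switching templating engines later is mostly a matter of rewriting the view files.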
Step 5 – styling

To give a slightly nicer look to our pages, we can use Bootstrap (http://twitter.github.io/bootstrap/); it's a CSS framework made by Twitter.

Let's modify views/layout.slim. After the line that says title Sinatra Address Book, add the following code:

link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet"

There are a few things to note about this line. Firstly, we will be using a file hosted on a Content Distribution Network (CDN). Clearly, we need to check that the file we're including is actually what we think it is. The advantage of a CDN is that we don't need to keep a copy of the file ourselves, but if our users visit other sites using the same CDN, they'll only need to download the file once.

Note also the use of // at the beginning of the link address; this is called a "protocol agnostic URL". This way of referencing the document will allow us later on to switch our application to run securely under HTTPS, without having to readjust all our links to the content.

Now let's change views/home.slim to the following:

div class="container"
  h1 address book
  h2 a Sinatra application

We're not using Bootstrap to anywhere near its full potential here. Later on we can improve the look of the app using Bootstrap as a starting point. Remember to commit your changes and to deploy to Heroku.

Step 6 – development setup

As things stand, during local development we have to manually restart our local web server every time we want to see a change. Now we are going to set things up with the following steps so the application reloads after each change.

Add the following block to the Gemfile:

group :development do
  gem 'unicorn'
  gem 'guard'
  gem 'listen'
  gem 'rb-inotify', :require => false
  gem 'rb-fsevent', :require => false
  gem 'guard-unicorn'
end

The group around these gems means they will only be installed and used in development mode and not when we deploy our application to the Web.
Unicorn is a web server—it's better than WEBrick—that is used in real production environments. WEBrick's slowness can even become noticeable during development, while Unicorn is very fast. rb-inotify and rb-fsevent are the Linux and Mac OS X components that keep a check on your hard disk. If any of your application's files change, guard restarts the whole application, updating the changes.

Finally, update your gems:

bundle

Now add Guardfile:

guard :unicorn, :daemonize => true do
  `git ls-files`.each_line { |s| s.chomp!; watch s }
end

Add a configuration file for unicorn:

mkdir config

In config/unicorn.rb, add the following:

listen 3000

Run the web server:

guard

Now if you make any changes, the web server will restart and you will get a notification via a desktop message. To see this, type in the following command:

touch address-book.rb

You should get a desktop notification saying that guard has restarted the application. Note that to shut guard down, you need to press Ctrl + D. Also, remember to add the new files to Git.

Step 7 – testing the application

We want our application to be robust. Whenever we make changes and deploy, we want to be sure that it's going to keep working. What's more, if something does not work properly, we want to be able to fix bugs so we know that they won't come back. This is where testing comes in. Tests check that our application works properly and also act as detailed documentation for it; they tell us what the application is intended for. Our tests will actually be called "specs", a term that is supposed to indicate that you write tests as specifications for what your code should do. We will be using a library called RSpec. Let's get it installed.
Add the gem to the Gemfile:

group :test do
  gem 'rack-test'
  gem 'rspec'
end

Update the gems so RSpec gets installed:

bundle

Create a directory for our specs:

mkdir spec

Create the spec/spec_helper.rb file:

$: << File.expand_path('../..', __FILE__)

require 'address-book'
require 'rack/test'

def app
  AddressBook.new
end

RSpec.configure do |config|
  config.include Rack::Test::Methods
end

Create a directory for the integration specs:

mkdir spec/integration

Create a spec/integration/home_spec.rb file for testing the home page:

require 'spec_helper'

describe "Sinatra App" do
  it "should respond to GET" do
    get '/'
    expect(last_response).to be_ok
    expect(last_response.body).to match(/address book/)
  end
end

What we do here is call the application, asking for its home page. We check that the application answers with an HTTP status code of 200 (be_ok). Then we check for some expected content in the resulting page, that is, the address book page.

Run the spec:

bundle exec rspec

Finished in 0.0295 seconds
1 example, 0 failures

Ok, so our spec is executed without any errors. There you have it. We've created a micro application, written tests for it, and deployed it to the Internet.

Summary

This article discussed how to perform the core tasks of Sinatra: handling a GET request and rendering a web page.

Resources for Article:

Further resources on this subject:
URL Shorteners – Designing the TinyURL Clone with Ruby [Article]
Building tiny Web-applications in Ruby using Sinatra [Article]
Setting up environment for Cucumber BDD Rails [Article]
Packt
14 Aug 2013
3 min read
nopCommerce – The Public-facing Storefront

(For more resources related to this topic, see here.)

General site layout and overview

When customers navigate to your store, they will be presented with the homepage. The homepage is where we'll begin to review the site layout and structure.

Logo: This is your store logo. As with just about every e-commerce site, this serves as a link back to your homepage.
Header links: The toolbar holds some of the most frequently used links, such as Shopping cart, Wishlist, and Account. These links are very customer focused, as this area will also show the customer's logged in status once they are registered with your site.
Header menu: The menu holds various links to other important pages, such as New products, Search, and Contact us. It also contains the link to the built-in blog site.
Left-side menu: The left-side menu serves as a primary navigation area. It contains the Categories and Manufacturers links as well as Tags and Polls.
Center: This area is the main content of the site. It will hold category and product information, as well as the main content of the homepage.
Right-side menu: The right-side menu holds links to other ancillary pages in your site, such as Contact us, About us, and News. It also holds the Newsletter signup widget.
Footer: The footer holds the copyright information and the Powered by nopCommerce license tag.

The naming conventions used for these areas are driven by the Cascading Style Sheet (CSS) definitions. For instance, if you look at the CSS for the Header links area, you will see a definition of header-links.

nopCommerce uses layouts to define the overall site structure. A layout is a type of page used in ASP.NET MVC to define a common site template, which is then inherited across all the other pages on your site. In nopCommerce, there are several different layout pages used throughout the site. There are two main layout pages that define the core structure:

Root head: This is the base layout page.
It contains the header of the HTML that is generated and is responsible for loading all the CSS and JavaScript files needed for the site.

Root: This layout is responsible for loading the header and footer, and contains the Master Wrapper, which contains all the other content of the page.

These two layouts are common for all pages within nopCommerce, which means every page in the site will display the logo, header links, header menu, and footer. They form the foundation of the site structure. The site pages themselves will utilize one of three other layouts that determine the structure inside the Master Wrapper:

Three-column: The three-column layout is what the nopCommerce homepage utilizes. It includes the right side, left side, and center areas. This layout is used primarily on the homepage.
Two-column: This is the most common layout that customers will encounter. It includes the left side and center areas. This layout is used on all category and product pages as well as all the ancillary pages.
One-column: This layout is used in the shopping cart and checkout pages. It includes the center area only.

Changing the layout page used by certain pages requires changing the code. For instance, if we open the product page in Visual Studio, we can see the layout page being used. As you can see, the layout defined for this page is _ColumnsTwo.cshtml, the two-column layout. You can change the layout used by updating this property, for instance, to _ColumnsThree.cshtml, to use the three-column layout.
Packt
14 Aug 2013
10 min read
Creating Courses in Blackboard Learn

(For more resources related to this topic, see here.)

Courses in Blackboard Learn

The basic structure of any learning management system relies on the basic course, or course shell. A course shell holds all the information and communication that goes on within our course and is the central location for all activities between students and instructors.

Let's think about our course shell as a virtual house or apartment. A house or apartment is made up of different rooms where we put things that we use in our everyday life. These rooms, such as the living room, kitchen, or bedrooms, can be compared to content areas within our course shell. Within each of these content areas, there are items such as telephones, dishwashers, computers, or televisions that we use to interact, communicate, or complete tasks. These items would be called course tools within the course shell. These content areas and tools are available within our course shells and we can use them in the same ways. While as administrators we won't take a deep dive into all these tools, we should know that they are available and instructors use them within their courses.

Blackboard Learn offers many different ways to create courses but, to help simplify our discussion, we will classify those ways in two categories: basic and advanced. This article will discuss the course creation options that we classify as basic.

Course names and course IDs

When we get ready to create a course in Blackboard Learn, the system requires a few items: a course name and a course ID. The first one should be self-explanatory. If you are teaching a course on "Underwater Basket Weaving" (a hobby I highly recommend), you would simply place this information into the course name. Now the course ID is a bit trickier. Think of it like a barcode that you can find on your favorite cereal. That barcode is unique and tells the checkout scanner the item you have purchased. The course ID has a similar function in Blackboard Learn.
It must be unique; so if you plan to have multiple courses on "Underwater Basket Weaving", you will need to figure out a way to express the differences in each course ID. Since each course ID in Blackboard has to be unique, and most Blackboard Learn instances we deal with as administrators have numerous course shells, keeping courses distinguishable for users might become difficult. We should therefore consider creating a course ID naming convention if one isn't already in place. Our conversation will not tell you which naming convention will be best for your organization, but here are some helpful tips to start with:

Use a symbol to separate words, acronyms, and numbers from one another. Some admins may use an underscore, period, or dash. However, whitespace, percent, ampersand, less than, greater than, equals, or plus characters are not accepted within course IDs.
If you plan to collect reporting data from your instance, make sure to include the term or session and department in the course ID.
Collect input from people and teams within your organization who will enroll and support users. Their feedback about a course's naming convention will help it be successful.
Many organizations use a student information system (SIS), which manages the enrollment process.

Default course properties

The first item in our Course Settings area allows us to set up several of the default access options within our courses. The Default Course Properties page covers when and who has access to a course by default.

Available by Default: This option gives us the ability to have a course available to enrolled students when it is created. Most administrators will have this set to No, since the instructor may not want to immediately give access to the course.

Allow Guests by Default and Allow Observers by Default: The next options allow us to set guest and observer access to created courses by default.
Most administrators normally set these to No because the guest access and observer role aren't used by their organizations.

Default Enrollment Options: We can set default enrollment options to either allow the instructor or system administrator to enroll students or allow the student to self enroll. If we choose the former, we can give the student the ability to e-mail the instructor to request access. If we set Self Enrollment, we can set dates when this option is available and even set a default access code for students to use when they can self enroll. Now that we have these two options for default enrollment, most administrators would suggest setting the default course enrollment option to instructors or system administrators, which will allow instructors to set self enrollment within their own course.

Default Duration: The Continuous option allows the course to run continuously with no start or end date set. Select Dates sets specific start and end dates for all courses. The last option, called Days from the Date of Enrollment, sets courses to run for a specific number of days after the student was enrolled within our Blackboard Learn environment. This is helpful if a student self enrolls in a self-paced course with a set number of days to complete it.

Pitfalls of setting start and end dates

When using the Start and End dates to control course duration, we may find that all users enrolled within the course will lose access.

Course themes and icons

If we are using the Blackboard 2012 theme, we have the ability to enable course themes within our Blackboard instance. These themes are created by Blackboard and can be applied to an instructor's course by clicking on the theme icon, seen in the following screenshot, in the upper-right corner of the content area while in a course. They have a wide variety of options, but currently administrators cannot create custom course themes.
We can also select which icon sets courses will use by default in our Blackboard instance. These icon themes are created by Blackboard and will appear beside different content items and tools within the course. In the following screenshot, we can see some of the icons that make up one of the sets. Unlike the course themes, these icons will be enforced across the entire instance.

Course Tools

The Course Tools area offers us the ability to set which tools and content items are available within courses by default. We can also control these settings, along with organizations and system tools, by clicking on the Tools link under the Tools and Utilities module. Let's review what tools are available and how to enable and disable them within our courses. The options we use to set course tools are exactly the same as those used in the Tools area we just mentioned. Use the information provided here to set tool availability within the page.

Let's take a more detailed look into the default availability setting within this page. We have four options for each tool, and every tool has the same options:

Default On: A course automatically has this tool available to users, but an instructor or leader can disable the tool within it
Default Off: Users in a course will not have access to this tool by default, but the instructor or leader can enable it
Always On: Instructors or leaders are unable to turn this tool off in their course or organization
Always Off: Users do not see this tool in a course or organization, nor can the instructor or leader turn it on within the course

Once we make the changes, we must click on the Submit button.

Quick Setup Guide

The Quick Setup Guide page was introduced in Blackboard 9.1 Service Pack 8. As seen in the following screenshot, it offers instructors a basic introduction to the course if they have never used Blackboard before. Most of the links are to content from the On Demand area of the Blackboard website.
We as administrators can disable this from appearing when an instructor enters the course. If we leave the guide enabled, we can add custom text to the guide, which can help educate instructors about changes, help, and support available from our organization.

Custom images

We can continue to customize the default look and feel of our course shells with images in the course entry point and at the top of the menu. We might use these images to spotlight that our organization has been honored with an award. Here we find an example of how these images would look. Two images can be located at the bottom of the course entry page, which is the page we see after entering a course. Another image can be located at the top of the course menu. This area also allows us to make these images linkable to a website. Here's an example.

Default course size limits

We can also create a default course size limit for the course and the course export and archive packages within this area. Course size limits allow administrators to control storage space, which may be limited in some instances. When a course size limit is within 10 percent of being reached, the administrator and instructor get an e-mail notification. This notification is triggered by the disk usage task that runs once a day. After getting the notification, the instructor can remove content from the course, or the administrator can increase the course quota for that specific course.

Maximum Course disk size: This option sets the amount of disk space a course shell can use for storage. This includes all course and student files within the course shell.
Maximum Course Package Size: This sets the maximum amount of content from the Course Files area included in a course copy, export, or archive.

Grade Center settings

This area allows us to set default controls over the Grade History portion of the Grade Center. Grade history is exactly what it says: it keeps a history of the changes within the Grade Center.
Most administrators recommend having grade history enabled by default because of its historical benefits. There may be a discussion within your organization about whether to permit instructors to disable this feature within their course or clear the history altogether.

Course menu and structures

The course menu offers the main navigation for any course user. Our organization can create a default course menu layout for all new course shells, based on input from instructional designers and pedagogical experts. As seen in the following screenshot, we simply edit the default menu that appears on this page. As administrators, we should pay close attention when creating a default course menu. Any additions or removals to the default menu are applied automatically, without clicking on the Submit or Cancel buttons, and affect any courses created from that point forward.

Blackboard recently introduced course structures. If enabled, these pre-built course menus are available to the instructor within their course's control panel. The course structures fall into a number of different course instruction scenarios. An example of the course structure selection interface is shown in the following screenshot:
Packt
14 Aug 2013
9 min read
Themes, the look and feel, and logos

(For more resources related to this topic, see here.)

Themes

Themes are sets of predefined styles and can be used to personalize the look and feel of Confluence. Themes can be applied to the entire site and to individual spaces. Some themes add extra functionality to Confluence or change the layout significantly. Confluence 5 comes with two themes installed, and an administrator can install new themes as add-ons via the Administration Console.

Atlassian is planning on merging the Documentation Theme with the Default Theme. As this is not yet the case in Confluence 5, we will discuss them both as they have some different features. Keep in mind that, at some point, the Documentation Theme will be removed from Confluence.

To change the global Confluence theme, perform the following steps:

Browse to the Administration Console (Administration | Confluence Admin).
Select Themes from the left-hand side menu. All the installed themes will be present.
Select the appropriate radio button to select a theme.
Choose Confirm to change the theme.

Space administrators can also decide on a different theme for their own spaces. Spaces with their own theme selections—and therefore not using the global look and feel—won't be affected if a Confluence Administrator changes the global default theme.

To change a space theme, perform the following steps:

Go to any page in the space.
Select Space Tools in the sidebar. (If you are not using the default theme, select Browse | Space Admin.)
Select Look and Feel, followed by Theme.
Select the theme you want to apply to the space.
Click on Confirm.

The Default Theme

As the name implies, this is the default theme shipped with Confluence. The Default Theme got a complete overhaul in Confluence 5 and looks as shown in the following screenshot:

The Default Theme provides every space with a sidebar, containing useful links and navigation help throughout the current space.
With the sidebar, you can quickly change from browsing pages to blog posts or vice versa. The sidebar also allows important space content to be added as a link for quicker access, and displays the children of the current page for easy navigation. You can collapse or expand the sidebar: click-and-drag the border, or use the keyboard shortcut [. If the sidebar is collapsed, you can still access the sidebar options.

Configuring the theme

The Default Theme doesn't have any global configuration available, but a space administrator can make some space-specific changes to the theme's sidebar.

Perform the following steps to change the space details:

Go to any page in the relevant space.
Select Configure sidebar in the space sidebar.
Click on the edit icon next to the space title. A pop-up will show where you can change the space title and logo (as shown in the preceding screenshot, indicated as 1).
Click on Save to save the changes.
Click on the Done button to exit the configuration mode.

The main navigation items on the sidebar (pages and blog posts) can be hidden. This can come in handy, for example, when you don't allow users to add blog posts to the space.

To show or hide the main navigation items, perform the following steps:

Go to any page in the relevant space.
Select Configure sidebar in the space sidebar.
Select the - or + icon beside the link to either hide or show the link.
Click on the Done button to exit the configuration mode.

Space shortcuts are manually added links to the sidebar, linking to important content within the space. A space administrator can manage these links.

To add a space shortcut, perform the following steps:

Go to any page in the relevant space.
Select Configure sidebar in the space sidebar.
Click on the Add Link button, indicated as 3 in the preceding screenshot. The Insert Link dialog will appear.
Search and select the page you want to link.
Click on Insert to add the link to the sidebar.
Click on the Done button to exit the configuration mode.

The Documentation Theme

The Documentation Theme is another bundled theme. It supplies a built-in table of contents for your space, a configurable header and footer, and a space-restricted search. The Documentation Theme's default look and feel is displayed in the following screenshot:

The sidebar of the Documentation Theme will show a tree with all the pages in your space. Clicking on the icon in front of a page title will expand the branch and show its children. The sidebar can be opened and closed using the [ shortcut, or the icon on the left of the search box in the Confluence header.

Configuring the theme

The Documentation Theme allows configuration of the sidebar contents, the page header and footer, and the possibility to restrict the search to only the current space. A Confluence Administrator can configure the theme globally, but a Space Administrator can overwrite this configuration for his or her own space. To configure the Documentation Theme for a space, the Space Administrator should explicitly select the Documentation Theme as the space theme.

The theme configuration of the Documentation Theme allows you to change the properties displayed in the following screenshot. How to get to this screen and what the properties represent will be explained next.

To configure the Documentation Theme, perform the following steps.

As a Confluence Administrator:

Browse to the Administration Console (Administration | Confluence Admin).
Select Themes from the left-hand side menu.
Choose the Documentation Theme as the current theme.
Click on the Configure theme link.

As a Space Administrator:

Go to any page in the space.
Select Browse | Space Admin.
Choose Themes from the left-hand side menu.
Make sure that the Documentation Theme is the current theme.
Click on the Configure theme link.

Select or deselect the Page Tree checkbox.
This will determine if your space will display the default search box and page tree in the sidebar. Select or deselect the Limit search results to the current space checkbox. If you select the checkbox: The Confluence search in the top-left corner will only search in the current space. The sidebar will not contain a search box. If you deselect the checkbox: The Confluence search in the top-left corner will search across the entire Confluence site. The sidebar will contain a search box, which is limited to searching in the current space. In the three textboxes, you can enter any text or wiki markup you would like; for example, you could add some information to the sidebar or a notification to every page. The following screenshot will display these areas: Navigation: This will be displayed in the space sidebar. Header: This will be displayed above the title on all the pages in the space. Footer: This will be displayed after the comments on all the pages in the space. Look and feel The look and feel of Confluence can be customized on both global level and space level. Any changes made on a global level will be applied as the default settings for all spaces. A Space Administrator can choose to use a different theme than the global look and feel. When a Space Administrator selects a different theme, the default settings and theme are no longer applied to that space. This also means that settings in a space are not updated if the global settings are updated. In this section, we will cover some basic look and feel changes, such as changing the logo and color-scheme of your Confluence instance. It is also possible to change some of the Confluence layouts; this is covered in the Advanced customizing section. Confluence logo The Confluence logo is the logo that is displayed on the navigation bar in Confluence. This can easily be changed to your company logo. 
To change the global logo, perform the following steps:

1. Browse to the Administration Console (Administration | Confluence Admin).
2. Select Site Logo from the left-hand side menu.
3. Click on Choose File to select the file from your computer.
4. Decide whether to show only your company logo, or also the title of your Confluence installation. If you choose to also show the title, you can change this in the text field next to Site Title.
5. Click on Save.

As you might notice, Confluence also changed the color scheme of your installation. Confluence will suggest a color scheme based upon your logo. To revert this change, click on Undo, which is directly available after updating your logo.

Space logo

Every space can choose its own logo, making it easy to identify certain topics or spaces. A Confluence administrator can also set the default space logo, for newly created spaces or spaces without their own specified logo. The logo of a personal space cannot be changed; it will always use the user's avatar as its logo. To set the default space logo, perform the following steps:

1. Browse to the Administration Console (Administration | Confluence Admin).
2. Select Default Space Logo from the left-hand side menu.
3. Click on Choose File to select the file from your computer. For the best result, make sure the image is about 48 x 48 pixels.
4. Click on Upload Logo to upload the default space logo.

As a Space Administrator, you can replace the default logo for your space. How this is done depends on the theme you are using. To change the space logo with the default theme, perform the following steps:

1. Go to any page in the relevant space.
2. Click on Configure Sidebar from the sidebar.
3. Select the edit icon next to the page title.
4. Click on Choose File next to the logo and select a file from your computer. Confluence will display an image editor to indicate how your logo should be displayed, as shown in the next screenshot.
5. Drag and resize the circle in the editor to the right position.
6. Click on Save to save the changes.
7. Click on the Done button to exit the configuration mode.

To change the space logo with the Documentation Theme, perform the following steps:

1. Go to any page in the relevant space.
2. Select Browse | Space Admin.
3. Select Change Space Logo from the left-hand side menu.
4. Select Choose File and select the logo from your computer.
5. Click on Upload Logo to save the new logo; Confluence will automatically resize and crop the logo for you.

Summary

We went through the different features for changing the look and feel of Confluence, so that we can add some company branding to our instance or just to a space.

Resources for Article:

Further resources on this subject:
- Advanced JIRA 5.2 Features [Article]
- Getting Started with JIRA 4 [Article]
- JIRA: Programming Workflows [Article]
Quick start - creating your first application

Packt
13 Aug 2013
14 min read
(For more resources related to this topic, see here.) By now you should have Meteor installed and ready to create your first app, but jumping in blindly would be more confusing than not. So let's take a moment to discuss the anatomy of a Meteor application. We have already talked about how Meteor moves all the workload from the server to the browser, and we have seen firsthand the folder of plugins, which we can incorporate into our apps, so what have we missed? Well, MVVM of course. MVVM stands for Model, View, and View-Model. These are the three components that make up a Meteor application. If you've ever studied programming academically, then you'll know there's a concept called separation of concerns. What this means is that you separate code with different intentions into different components. This allows you to keep things neat, but more importantly, if done right, it allows for better testing and customization down the line. A proper separation is one that allows you to remove a piece of code and replace it with another without disrupting the rest of your app. An example of this could be a simple function. If you print out debug messages to a file throughout your app, it would be a terrible practice to manually write this code out each time. A much better solution would be to "separate" this code out into its own function, and only reference it throughout your app. This way, down the line, if you decide you want debug messages to be e-mailed instead of written to a file, you only need to change the one function and your app will continue to work without even knowing about the change. So we know separation is important, but I haven't clarified what MVVM is yet. To get a better idea, let's take a look at what kind of code should go in each component:

Model: The Model is the section of your code that has to do with the backend. This usually refers to your database, but it's not exclusive to just that. In Meteor, you can generally consider the database to be your application's model.

View: The View is exactly what it sounds like: it's your application's view. It's the HTML that you send to the browser. You want to keep these files as logic-less as possible; this will allow for better separation. It will assure that all your logic code is in one place, and it will help with testing and code re-use.

View-Model: Now, the View-Model is where all the magic happens. The View-Model has two jobs: one is to interface the model to the view, and the second is to handle all the events. Basically, all your logic code will be going here.

This is just a brief explanation of the MVVM pattern, but like most things, I think an example is in order to better illustrate. Let's pretend we have a site where people can share pictures, such as a typical social network would. On the Model side, you will have a database which contains all the users' pictures. Now this is very nice, but it's private info and no user should be able to access it. That's where the View-Model comes in. The View-Model accesses the main Model, and creates a custom version for the View. So, for instance, it creates a new dataset that only contains pictures from the user's friends. That is the View-Model's first job: to create datasets for the View with info from the Model. Next, the View accesses the View-Model and gets the information it needs to display the page; in our example this could be an array of pictures. Now the page is built, and both the Model and View are done with their jobs. The last step is to handle page events, for example, the user clicks a button. If you remember, the views are logic-less, so when someone clicks a button, the event is sent back to the View-Model to be processed. If you're still a bit fuzzy on the concept, it should become clearer when we create our first application. Now that we have gone through the concepts, we are ready to build our first application.
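Before we do, the picture-sharing example can be sketched in a few lines of plain JavaScript. Everything here (the names, the data, the friend list) is hypothetical and exists purely to illustrate how the three MVVM roles divide the work:

```javascript
// Model: in a real app this would be a database; here it is a plain array.
// All names and data below are hypothetical, purely for illustration.
const photos = [
  { owner: 'alice', url: 'a1.jpg' },
  { owner: 'bob',   url: 'b1.jpg' },
  { owner: 'carol', url: 'c1.jpg' }
];

// View-Model: builds a custom dataset for the View (only friends' photos)
// and handles the events the View sends back.
const viewModel = {
  friendsOf: { alice: ['bob'] },
  photosFor(user) {
    const friends = this.friendsOf[user] || [];
    return photos.filter(p => friends.includes(p.owner));
  },
  onLike(photo) {
    // Event handling lives here, never in the View.
    photo.likes = (photo.likes || 0) + 1;
  }
};

// View: logic-less; it only renders what the View-Model hands it.
function render(user) {
  return viewModel.photosFor(user).map(p => `<img src="${p.url}">`).join('');
}

console.log(render('alice')); // → <img src="b1.jpg">
```

Notice that swapping the array for a real database would only touch the Model and the View-Model's query; the View would never know the difference, which is exactly the separation the pattern promises.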
To get started, open a terminal window and create a new folder for your Meteor applications:

mkdir ~/meteorApps

This creates a new directory in our home folder (which is represented by the tilde (~) symbol) called meteorApps. Next, let's enter this folder by typing:

cd ~/meteorApps

The cd (change directory) command will move the terminal to the location specified, which in our case is the meteorApps folder. The last step is to actually create a Meteor application, and this is done by typing:

meteor create firstApp

You should be greeted with a message telling you how to run your app, but we are going to hold off on that; for now, just enter the directory by typing:

cd firstApp
ls

You should already be familiar with what the cd command does, and the ls command just lists the files in the current directory. If you didn't play around with the skel folder from the last section, then you should have three files in your app's folder: an HTML file, a JavaScript file, and a CSS file. The HTML and CSS files are the View in the MVVM pattern, while the JavaScript file is the View-Model. It's a little difficult to begin explaining everything, because we have a sort of chicken-and-egg paradox where we can't explain one without the other. But let's begin with the View, as it's the simpler of the two, and then we will move backwards to the View-Model.

The View

If you open the HTML file, you should see a couple of lines, mostly standard HTML, but there are a few commands from Meteor's default templating language, Handlebars. This is not Meteor specific, as Handlebars is a templating language based on the popular mustache library, so you may already be familiar with it, even without knowing Meteor. But just in case, I'll quickly run through the file:

<head>
  <title>firstApp</title>
</head>

This first part is completely standard HTML; it's just a pair of head tags, with the page's title being set inside.
Next, we have the body tag:

<body>
  {{> hello}}
</body>

The outer body tags are standard HTML, but inside there is a Handlebars function. Handlebars allows you to define template partials, which are basically pieces of HTML that are given a name. That way, you are able to add the piece wherever you want, even multiple times on the same page. In this example, Meteor has made a call to Handlebars to insert the template called hello inside the body tags. It's a fairly easy syntax to learn: you just open two curly braces, then you put a greater-than sign followed by the name of the template, finally closing it off with a pair of closing braces. The rest of the file is the definition of the hello template partial:

<template name="hello">
  <h1>Hello World!</h1>
  {{greeting}}
  <input type="button" value="Click" />
</template>

Again, it's mostly standard HTML, just an H1 title and a button. The only special part is the greeting line in the middle, which is another Handlebars function, this time to insert data. This is how the MVVM pattern works: I said earlier that you want to keep the view as simple as possible, so if you have to calculate anything, you do it in the View-Model and then load the results into the View. You do this by leaving a reference; in our code the reference is to greeting, which means you place whatever greeting equals here. It's a placeholder for a variable, and if you guessed that the variable greeting will be in the View-Model, then you are 100 percent correct. Another thing to notice is the fact that we do have a button on the page, but you won't find any event handlers here. That's because, as I mentioned earlier, the events are handled in the View-Model as well. So it seems like we are done here, and the next logical step is to take a peek at the View-Model. If you remember, the View-Model is the .js file, so close this out and open the firstApp.js file.
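The placeholder substitution that a template engine performs can be sketched in a few lines of plain JavaScript. This is only an illustration of the idea, not how Handlebars is actually implemented:

```javascript
// A minimal {{placeholder}} substitution, in the spirit of {{greeting}}.
// Illustrative sketch only; real Handlebars compiles templates to functions.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in data ? data[name] : match); // unknown placeholders are left alone
}

const hello = '<h1>Hello World!</h1> {{greeting}}';
console.log(renderTemplate(hello, { greeting: 'Welcome to firstApp' }));
// → <h1>Hello World!</h1> Welcome to firstApp
```

The key point is that the template itself stays logic-less; all it carries is a name, and the value behind that name is supplied from outside, which is exactly what the View-Model will do next.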
The JS file

There is slightly more code here, but if you're comfortable with JavaScript, then everything should feel right at home. At first glance, you can see that the page is split up into two if statements: Meteor.isClient and Meteor.isServer. This is because the JS file is parsed on both the server and the user's browser. These statements are used to write code for one and not the other. For now, we aren't going to be dealing with the server, so you don't have to worry about the bottom section. The top section, on the other hand, has our HTML file's data. While we were in the View, we saw a call to a template partial named hello, and inside it we referenced a placeholder called greeting. The way to set these placeholders is by referencing the global Template variable, and to set the value by following this pattern:

Template.template_name.placeholder_name

So in our example it would be:

Template.hello.greeting

And if you take a look at the first thing inside the isClient if statement, you will find exactly this. Here, it is set to a function, which returns a simple string. You can set it directly to a string, but then it's not dynamic. Usually, the only reason you are defining a View-Model variable is because it's something that has to be computed via a function, so that's why they did it like that. But there are cases where you may just want to reference a simple string, and that's fine. To recap: so far in the View we have a reference to a piece of data named greeting inside a template partial called hello, which we are setting in the View-Model to the string Welcome to firstApp. The last part of the JS file is the part that handles events on the page; it does this by passing an event-map to a template's events function.
This follows the same notation as before, so you type:

Template.template_name.events( events_map );

I'll paste the example's code here for further illustration:

Template.hello.events({
  'click input': function () {
    // template data, if any, is available in 'this'
    if (typeof console !== 'undefined')
      console.log("You pressed the button");
  }
});

Inside each events object, you place the action and target as the key, and you set a function as the value. The actions are standard JavaScript actions, so you have things such as click, dblclick, keydown, and so on. Targets use standard CSS notation, which is periods for classes, hash symbols for IDs, and just the tag name for HTML tags. Whenever the event happens (for example, the input is clicked), the attached function will be called. To view the full list of event types, you can take a look here: http://docs.meteor.com/#template_events

The function would be a lot shorter if there wasn't a comment or an if statement to make sure the console is defined; basically, it will just output the words You pressed the button to the console every time you press the button. Pretty intuitive! So we went through the files; all that's left to do is actually test them. To do this, go back to the terminal and make sure you're in the firstApp folder. This can be achieved by using ls again to make sure the three files are there, and by using cd ~/meteorApps/firstApp if you are not looking in the right folder. Next, just type meteor and hit Enter, which will cause Meteor to compile everything together and run the built-in web server. If this is done right, you should see a message saying something like:

Running on: http://localhost:3000/

Navigate your browser to the location specified (http://localhost:3000), and you should see the app that we just created. If your browser has a console, you can open it up and click the button.
Doing so will display the message You pressed the button, similar to the one we saw in the JS file. I hope it all makes sense now, but to drive the point home, we will make a few adjustments of our own. In the terminal window, press Ctrl + C to close the Meteor server, then open up the HTML file.

A quick revision

After the call to the hello template inside the body tags, add a call to another template named quickStart. Here is the new body section along with the completed quickStart template:

<body>
  {{> hello}}
  {{> quickStart}}
</body>

<template name="quickStart">
  <h3>Click Counter</h3>
  The Button has been pressed {{numClick}} time(s)
  <input type="button" id="counter" value="CLICK ME!!!" />
</template>

I wanted to keep it as similar to the other template as possible, so as not to throw too much at you all at once. It simply contains a title enclosed in the header tags, followed by a string of text with a placeholder named numClick, and a button with an id value of counter. There's nothing radically different from the other template, so you should be fairly comfortable with it. Now save this and open the JS file. What we are adding to the page is a counter that will display the number of times the button was pressed. We do this by telling Meteor that the placeholder relies on a specific piece of data; Meteor will then track this data, and every time it gets changed, the page will be automatically updated. The easiest way to set this up is by using Meteor's Session object. Session is a key-value store object, which allows you to store and retrieve data inside Meteor. You set data using the set method, passing in a name (key) and a value; you can then retrieve that stored info by calling the get method, passing in the same key.
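The get/set-plus-notification behavior that makes this automatic updating possible can be sketched in plain JavaScript. This toy Session is an illustration of the concept only, not Meteor's actual implementation:

```javascript
// A toy Session: get/set plus change notification, so that anything
// depending on a key re-runs when that key is set. Illustrative only;
// Meteor's real reactivity tracks dependencies automatically.
const Session = {
  store: {},
  watchers: {},
  get(key) { return this.store[key]; },
  set(key, value) {
    this.store[key] = value;
    (this.watchers[key] || []).forEach(fn => fn(value)); // notify dependents
  },
  watch(key, fn) {
    (this.watchers[key] = this.watchers[key] || []).push(fn);
  }
};

let rendered = '';
Session.watch('pressed_count', count => {
  rendered = `The Button has been pressed ${count} time(s)`;
});

Session.set('pressed_count', 1);
console.log(rendered); // → The Button has been pressed 1 time(s)
```

In real Meteor you never call watch yourself; using Session.get inside a placeholder function is enough to register the dependency.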
So just add the following part right after the hello template's events call, and make sure it's inside the isClient if statement:

Template.quickStart.numClick = function () {
  var pcount = Session.get("pressed_count");
  return (pcount) ? pcount : 0;
};

This function gets the current number of clicks (stored with a key of pressed_count) and returns it, defaulting to zero if the value was never set. Since we are using the pressed_count property inside the placeholder's function, Meteor will automatically update this part of the HTML whenever pressed_count changes. Last but not least, we have to add the event-map; put the following code snippet right after the previous code:

Template.quickStart.events({
  'click #counter': function () {
    var pcount = Session.get("pressed_count");
    pcount = (pcount) ? pcount + 1 : 1;
    Session.set("pressed_count", pcount);
  }
});

Here we have a click event for our button with the counter ID, and the attached function just gets the current count and increments it by one. To try it out, just save this file, and in the terminal window, while still in the project's directory, type meteor to restart the web server. Try clicking the button a few times, and if all went well, the text should be updated with an incrementing value.
Setting up Node

Packt
07 Aug 2013
10 min read
(For more resources related to this topic, see here.)

System requirements

Node runs on POSIX-like operating systems, the various UNIX derivatives (Solaris, and so on), or workalikes (Linux, Mac OS X, and so on), as well as on Microsoft Windows, thanks to the extensive assistance from Microsoft. Indeed, many of the Node built-in functions are direct corollaries to POSIX system calls. It can run on machines both large and small, including the tiny ARM devices such as the Raspberry Pi microscale embeddable computer for DIY software/hardware projects. Node is now available via package management systems, limiting the need to compile and install from source. Installing from source requires having a C compiler (such as GCC), and Python 2.7 (or later). If you plan to use encryption in your networking code, you will also need the OpenSSL cryptographic library. The modern UNIX derivatives almost certainly come with these, and Node's configure script (see later when we download and configure the source) will detect their presence. If you should have to install them, Python is available at http://python.org and OpenSSL is available at http://openssl.org.

Installing Node using package managers

The preferred method for installing Node, now, is to use the versions available in package managers such as apt-get, or MacPorts. Package managers simplify your life by helping to maintain the current version of the software on your computer and ensuring to update dependent packages as necessary, all by typing a simple command such as apt-get update. Let's go over this first.

Installing on Mac OS X with MacPorts

The MacPorts project (http://www.macports.org/) has for years been packaging a long list of open source software packages for Mac OS X, and they have packaged Node.
After you have installed MacPorts using the installer on their website, installing Node is pretty much this simple:

$ sudo port search nodejs
nodejs @0.10.6 (devel, net)
    Evented I/O for V8 JavaScript
nodejs-devel @0.11.2 (devel, net)
    Evented I/O for V8 JavaScript
Found 2 ports.
--
npm @1.2.21 (devel)
    node package manager

$ sudo port install nodejs npm
.. long log of downloading and installing prerequisites and Node

Installing on Mac OS X with Homebrew

Homebrew is another open source software package manager for Mac OS X, which some say is the perfect replacement for MacPorts. It is available through their home page at http://mxcl.github.com/homebrew/. After installing Homebrew using the instructions on their website, using it to install Node is as simple as this:

$ brew search node
leafnode    node

$ brew install node
==> Downloading http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
######################################################################## 100.0%
==> ./configure --prefix=/usr/local/Cellar/node/0.10.7
==> make install
==> Caveats
Homebrew installed npm.
We recommend prepending the following path to your PATH environment
variable to have npm-installed binaries picked up:
  /usr/local/share/npm/bin
==> Summary
/usr/local/Cellar/node/0.10.7: 870 files, 16M, built in 21.9 minutes

Installing on Linux from package management systems

While it's still premature for Linux distributions or other operating systems to prepackage Node with their OS, that doesn't mean you cannot install it using the package managers. Instructions on the Node wiki currently list packaged versions of Node for Debian, Ubuntu, OpenSUSE, and Arch Linux. See: https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

For example, on Debian sid (unstable):

# apt-get update
# apt-get install nodejs # Documentation is great.
And on Ubuntu:

# sudo apt-get install python-software-properties
# sudo add-apt-repository ppa:chris-lea/node.js
# sudo apt-get update
# sudo apt-get install nodejs npm

We can expect in due course that the Linux distros and other operating systems will routinely bundle Node into the OS, like they do with other languages today.

Installing the Node distribution from nodejs.org

The nodejs.org website offers prebuilt binaries for Windows, Mac OS X, Linux, and Solaris. You simply go to the website, click on the Install button, and run the installer. For systems with package managers, such as the ones we've just discussed, it's preferable to use that installation method. That's because you'll find it easier to stay up-to-date with the latest version. However, on Windows this method may be preferred. For Mac OS X, the installer is a PKG file giving the typical installation process. For Windows, the installer simply takes you through the typical install wizard process. Once finished with the installer, you have a command-line tool with which to run Node programs. The pre-packaged installers are the simplest ways to install Node, for those systems for which they're available.

Installing Node on Windows using Chocolatey Gallery

Chocolatey Gallery is a package management system, built on top of NuGet. Using it requires a Windows machine modern enough to support Powershell and the .NET Framework 4.0. Once you have Chocolatey Gallery (http://chocolatey.org/), installing Node is as simple as this:

C:\> cinst nodejs

Installing the StrongLoop Node distribution

StrongLoop (http://strongloop.com) has put together a supported version of Node that is prepackaged with several useful tools. This is a Node distribution in the same sense in which Fedora or Ubuntu are Linux distributions. StrongLoop brings together several useful packages, some of which were written by StrongLoop. StrongLoop tests the packages together, and distributes installable bundles through their website.
The packages in the distribution include Express, Passport, Mongoose, Socket.IO, Engine.IO, Async, and Request. We will use all of those modules in this book. To install, navigate to the company home page and click on the Products link. They offer downloads of precompiled packages for both RPM and Debian Linux systems, as well as Mac OS X and Windows. Simply download the appropriate bundle for your system. For the RPM bundle, type the following:

$ sudo rpm -i bundle-file-name

For the Debian bundle, type the following:

$ sudo dpkg -i bundle-file-name

The Windows or Mac bundles are the usual sort of installable packages for each system. Simply double-click on the installer bundle, and follow the instructions in the install wizard. Once StrongLoop Node is installed, it provides not only the node and npm commands (we'll go over these in a few pages), but also the slnode command. That command offers a superset of the npm commands, such as boilerplate code for modules, web applications, or command-line applications.

Installing from source on POSIX-like systems

Installing the pre-packaged Node distributions is currently the preferred installation method. However, installing Node from source is desirable in a few situations:

- It could let you optimize the compiler settings as desired
- It could let you cross-compile, say for an embedded ARM system
- You might need to keep multiple Node builds for testing
- You might be working on Node itself

Now that you have the high-level view, let's get our hands dirty mucking around in some build scripts. The general process follows the usual configure, make, and make install routine that you may already have performed with other open source software packages. If not, don't worry, we'll guide you through the process. The official installation instructions are in the Node wiki at https://github.com/joyent/node/wiki/Installation.
Installing prerequisites

As noted a minute ago, there are three prerequisites: a C compiler, Python, and the OpenSSL libraries. The Node installation process checks for their presence and will fail if the C compiler or Python is not present. The specific method of installing these is dependent on your operating system. These commands will check for their presence:

$ cc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ python
Python 2.6.6 (r266:84292, Feb 15 2011, 01:35:25)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Installing developer tools on Mac OS X

The developer tools (such as GCC) are an optional installation on Mac OS X. There are two ways to get those tools, both of which are free. On the OS X installation DVD is a directory labeled Optional Installs, in which there is a package installer for, among other things, the developer tools, including Xcode. The other method is to download the latest copy of Xcode (for free) from http://developer.apple.com/xcode/. Most other POSIX-like systems, such as Linux, include a C compiler with the base system.

Installing from source for all POSIX-like systems

First, download the source from http://nodejs.org/download. One way to do this is with your browser, and another way is as follows:

$ mkdir src
$ cd src
$ wget http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
$ tar xvfz node-v0.10.7.tar.gz
$ cd node-v0.10.7

The next step is to configure the source so that it can be built. It is done with the typical sort of configure script, and you can see its long list of options by running the following:

$ ./configure --help
To cause the installation to land in your home directory, run it this way:

$ ./configure --prefix=$HOME/node/0.10.7
..output from configure

If you want to install Node in a system-wide directory, simply leave off the --prefix option, and it will default to installing in /usr/local. After a moment it'll stop, and most likely it will have configured the source tree for installation in your chosen directory. If this doesn't succeed, it will print a message about something that needs to be fixed. Once the configure script is satisfied, you can go on to the next step. With the configure script satisfied, compile the software:

$ make
.. a long log of compiler output is printed
$ make install

If you are installing into a system-wide directory, do the last step this way instead:

$ make
$ sudo make install

Once installed, you should make sure to add the installation directory to your PATH variable as follows:

$ echo 'export PATH=$HOME/node/0.10.7/bin:${PATH}' >>~/.bashrc
$ . ~/.bashrc

For csh users, use this syntax to make an exported environment variable:

$ echo 'setenv PATH $HOME/node/0.10.7/bin:${PATH}' >>~/.cshrc
$ source ~/.cshrc

This should result in some directories like this:

$ ls ~/node/0.10.7/
bin include lib share
$ ls ~/node/0.10.7/bin
node node-waf npm

Maintaining multiple Node installs simultaneously

Normally you won't have multiple versions of Node installed, and doing so adds complexity to your system. But if you are hacking on Node itself, or are testing against different Node releases, or any of several similar situations, you may want to have multiple Node installations. The method to do so is a simple variation on what we've already discussed. If you noticed during the instructions discussed earlier, the --prefix option was used in a way that directly supports installing several Node versions side-by-side in the same directory:

$ ./configure --prefix=$HOME/node/0.10.7

And:

$ ./configure --prefix=/usr/local/node/0.10.7

This initial step determines the install directory.
Clearly, when Version 0.10.7, Version 0.12.15, or whichever version is released, you can change the install prefix to have the new version installed side-by-side with the previous versions. Switching between Node versions is then simply a matter of changing the PATH variable (on POSIX systems), as follows:

$ export PATH=/usr/local/node/0.10.7/bin:${PATH}

It starts to be a little tedious to maintain this after a while. For each release, you have to set up Node, npm, and any third-party modules you desire in your Node install; also, the command shown to change your PATH is not quite optimal. Inventive programmers have created several version managers to make this easier by automatically setting up not only Node, but npm also, and providing commands to change your PATH the smart way:

- Node version manager: https://github.com/visionmedia/n
- Nodefront, aids in rapid frontend development: http://karthikv.github.io/nodefront/
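Why does prepending to PATH select a version? The shell resolves commands by scanning PATH left to right, so whichever install directory appears first wins. A tiny illustrative script (the directory names here are hypothetical):

```javascript
// Illustrative only: mimic `export PATH=$HOME/node/0.10.7/bin:$PATH`
// and show that the prepended directory is searched first.
const sep = ':'; // POSIX PATH separator

function prependPath(pathVar, dir) {
  return dir + sep + pathVar;
}

const before = '/usr/bin:/bin';
const after = prependPath(before, '/home/me/node/0.10.7/bin');

console.log(after.split(sep)[0]); // → /home/me/node/0.10.7/bin
```

The version managers listed above do essentially this bookkeeping for you, plus keeping a per-version npm tree, which is the tedious part to do by hand.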
Developing with Entity Metadata Wrappers

Packt
07 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Introducing entity metadata wrappers

Entity metadata wrappers, or wrappers for brevity, are PHP wrapper classes for simplifying code that deals with entities. They abstract structure so that a developer can write code in a generic way when accessing entities and their properties. Wrappers also implement PHP iterator interfaces, making it easy to loop through all properties of an entity or all values of a multiple-value property. The magic of wrappers is in their use of the following three classes:

EntityStructureWrapper
EntityListWrapper
EntityValueWrapper

The first has a subclass, EntityDrupalWrapper, and is the entity structure object that you'll deal with the most. Entity property values are either data, an array of values, or an array of entities. The EntityListWrapper class wraps an array of values or entities. As a result, generic code must inspect the value type before doing anything with a value, in order to prevent exceptions from being thrown.

Creating an entity metadata wrapper object

Let's take a look at two hypothetical entities that expose data from the following two database tables:

ingredient
recipe_ingredient

The ingredient table has two fields: iid and name. The recipe_ingredient table has four fields: riid, iid, qty, and qty_unit. The schema would be as follows:

Schema for ingredient and recipe_ingredient tables

To load and wrap an ingredient entity with an iid of 1, we would use the following line of code:

$wrapper = entity_metadata_wrapper('ingredient', 1);

To load and wrap a recipe_ingredient entity with an riid of 1, we would use this line of code:

$wrapper = entity_metadata_wrapper('recipe_ingredient', 1);

Now that we have a wrapper, we can access the standard entity properties.
Standard entity properties

The first argument of the entity_metadata_wrapper function is the entity type, and the second argument is the entity identifier, which is the value of the entity's identifying property. Note that it is not necessary to supply the bundle, as identifiers are properties of the entity type.

When an entity is exposed to Drupal, the developer selects one of the database fields to be the entity's identifying property and another field to be the entity's label property. In our previous hypothetical example, a developer would declare iid as the identifying property and name as the label property of the ingredient entity. These two abstract properties, combined with the type property, are essential for making our code apply to multiple data structures that have different identifier fields.

Notice how the phrase "type property" does not format the word "property"? That is not a typographical error. It is indicating to you that type is in fact the name of the property storing the entity's type. The other two, identifying property and label property, are metadata in the entity declaration. The metadata is used by code to get the correct name for the properties on each entity in which the identifier and label are stored. To illustrate this, consider the following code snippet:

$info = entity_get_info($entity_type);
$key = isset($info['entity keys']['name']) ? $info['entity keys']['name'] : $info['entity keys']['id'];
return isset($entity->$key) ? $entity->$key : NULL;

Shown here is a snippet of the entity_id() function in the entity module. As you can see, the entity information is retrieved at the first highlight, then the identifying property name is retrieved from that information at the second highlight. That name is then used to retrieve the identifier from the entity. Note that it's possible to use a non-integer identifier, so remember to take that into account in any generic code. The label property can either be a database field name or a hook.
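For reference, the identifying and label properties described above are declared by the entity-exposing developer in a hook_entity_info() implementation. A minimal sketch for the hypothetical ingredient entity might look like the following declarative fragment (the module name pde and the table/key names follow this article's example schema; this is an illustration, not code taken from the book's module):

```php
<?php
/**
 * Implements hook_entity_info() — sketch for the hypothetical
 * ingredient entity (Drupal 7 / Entity API assumed).
 */
function pde_entity_info() {
  return array(
    'ingredient' => array(
      'label' => t('Ingredient'),
      'base table' => 'ingredient',
      'entity keys' => array(
        'id' => 'iid',      // the identifying property
        'label' => 'name',  // the label property
      ),
    ),
  );
}
```

It is exactly these 'entity keys' entries that the entity_id() snippet above reads back via entity_get_info().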
The entity-exposing developer can declare a hook that generates a label for their entity when the label is more complicated, such as what we would need for recipe_ingredient. For that, we would need to combine the qty and qty_unit properties with the name property of the referenced ingredient.

Entity introspection

In order to see the properties that an entity has, you can call the getPropertyInfo() method on the entity wrapper. This may save you time when debugging. You can have a look by sending the result to the devel module's dpm() function or to var_dump():

dpm($wrapper->getPropertyInfo());
var_dump($wrapper->getPropertyInfo());

Using an entity metadata wrapper

The standard operations for entities are CRUD: create, retrieve, update, and delete. Let's look at each of these operations in some example code. The code is part of the pde module's Drush file: sites/all/modules/pde/pde.drush.inc. Each CRUD operation is implemented in a Drush command, and the relevant code is given in the following subsections. Before each code example, there are two example command lines: the first shows you how to execute the Drush command for the operation; the second is the help command.

Create

Creation of entities is implemented in the drush_pde_entity_create function.

Drush commands

The following examples show the usage of the entity-create (ec) Drush command and how to obtain help documentation for the command:

$ drush ec ingredient '{"name": "Salt, pickling"}'
$ drush help ec

Code snippet

$entity = entity_create($type, $data);
// Can call $entity->save() here or wrap to play and save
$wrapper = entity_metadata_wrapper($type, $entity);
$wrapper->save();

In the highlighted lines we create an entity, wrap it, and then save it. The first line uses entity_create, to which we pass the entity type and an associative array having property names as keys and their values. The function returns an object that has Entity as its base class. The save() method does all the hard work of storing our entity in the database.
No more calls to db_insert are needed! Whether you use the save() method on the wrapper or on the Entity object really depends on what you need to do before and after the save() method call. For example, if you need to plug values into fields before you save the entity, it's handy to use a wrapper.

Retrieve

The retrieving (reading) of entities is implemented in the drush_pde_print_entity() function.

Drush commands

The following examples show the usage of the entity-read (er) Drush command and how to obtain help documentation for the command:

$ drush er ingredient 1
$ drush help er

Code snippet

$header = ' Entity (' . $wrapper->type();
$header .= ') - ID# ' . $wrapper->getIdentifier() . ':';
// equivalents: $wrapper->value()->entityType()
//              $wrapper->value()->identifier()
$rows = array();
foreach ($wrapper as $pkey => $property) {
  // $wrapper->$pkey === $property
  if (!($property instanceof EntityValueWrapper)) {
    $rows[$pkey] = $property->raw() . ' (' . $property->label() . ')';
  }
  else {
    $rows[$pkey] = $property->value();
  }
}

On the first highlighted line, we call the type() method of the wrapper, which returns the wrapped entity's type. The wrapped Entity object is returned by the value() method of the wrapper. Using wrappers gives us the wrapper benefits, and we can use the entity object directly! The second highlighted line calls the getIdentifier() method of the wrapper. This is the way in which you retrieve the entity's ID without knowing the identifying property name. We'll discuss more about the identifying property of an entity in a moment.

Thanks to our wrapper object implementing the IteratorAggregate interface, we are able to use a foreach statement to iterate through all of the entity properties. Of course, it is also possible to access a single property by using its key. For example, to access the name property of our hypothetical ingredient entity, we would use $wrapper->name.

The last three highlights are the raw(), label(), and value() method calls.
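To illustrate the "plug values into fields before you save the entity" point above, an update through a wrapper might be sketched as follows. This is a hedged illustration against the hypothetical ingredient entity using the Entity API wrapper's set() method; it is not code from the pde module:

```php
<?php
// Load and wrap the ingredient entity with iid 1, change its
// label property via the wrapper, then persist the change.
$wrapper = entity_metadata_wrapper('ingredient', 1);
$wrapper->name->set('Salt, coarse');  // set() updates the wrapped value
$wrapper->save();                     // writes the change to the database
```

The same pattern works for a freshly created entity: wrap the result of entity_create, set properties through the wrapper, then call save().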
The distinction between these is very important, and is as follows:

raw(): This returns the property's value straight from the database.
label(): This returns the value of an entity's label property, for example, name.
value(): This returns a property's wrapped data: either a value or another wrapper.

Finally, the highlighted raw() and value() methods retrieve the property values for us. These methods are interchangeable when simple entities are used, as there's no difference between the storage value and the property value. However, for complex properties such as dates, there is a difference. Therefore, as a rule of thumb, always use the value() method unless you absolutely need to retrieve the storage value. The example code is using the raw() method only so that we can explore it, and all remaining examples in this book will stick to the rule of thumb. I promise!

Storage value: This is the value of a property in the underlying storage medium, for example, the database.
Property value: This is the value of a property at the entity level, after the value is converted from its storage value to something more pleasing, for example, the date formatting of a Unix timestamp.

Multi-valued properties need a quick mention here. Reading these is quite straightforward, as they are accessible as an array. You can use array notation to get an element, and use a foreach to loop through them! The following is a hypothetical code snippet to illustrate this:

$output = 'First property: ';
$output .= $wrapper->property[0]->value();
foreach ($wrapper->property as $vwrapper) {
  $output .= $vwrapper->value();
}

Summary

This article delved into development using entity metadata wrappers for safe CRUD operations and entity introspection.

Resources for Article:

Further resources on this subject:

Microsoft SQL Server 2008 R2 MDS: Creating and Using Models [Article]
EJB 3 Entities [Article]
ADO.NET Entity Framework [Article]