
How-To Tutorials - Web Development


Installation and Configuration of Microsoft Content Management Server: Part 1

Packt
31 Mar 2010
10 min read
In this first article of the series, we walk you through the installation and configuration of MCMS 2002 Service Pack 2 (SP2), along with SQL Server 2005 and Visual Studio 2005, on a single developer workstation. In addition, we cover the changes to the SP2 development environment and a number of tips for working within it. This article assumes you are already familiar with the steps necessary to install MCMS 2002 SP1a, as detailed in depth in the previous book, Building Websites with Microsoft Content Management Server from Packt Publishing, January 2005 (ISBN 1-904811-16-7).

There are two approaches to setting up a development environment for SP2: upgrading from a previous SP1a installation, or starting from scratch and building a fresh installation including SP2. We will cover both approaches in this article. For our examples, we will be using Windows XP Professional SP2 as the development workstation; however, where there are significant differences for a Windows Server 2003 SP1 machine, those will be noted. All examples assume the logged-on user is a local machine administrator.

Overview of MCMS 2002 Service Pack 2

As with other Microsoft Service Packs, one major purpose of SP2 is to provide an integrated installation for a large number of previously released hotfixes. SP2 will now be a prerequisite for any future hotfix releases. While many customers will view SP2 as a regular Service Pack, it also offers support for the latest development platform and tools from Microsoft, namely SQL Server 2005, .NET Framework 2.0 and ASP.NET 2.0, and Visual Studio 2005:

- SQL Server 2005: MCMS databases can be hosted by SQL Server 2005, offering numerous advantages in security, deployment, and most significantly, performance.
- .NET Framework 2.0 and ASP.NET 2.0: MCMS applications can be hosted within the .NET Framework 2.0 runtime, and can take advantage of v2.0 language features as well as security and performance improvements. In addition, many of the new features of ASP.NET 2.0, such as master pages, themes, navigation, and Membership Providers, can be used. This provides numerous opportunities to both refine and refactor MCMS applications, and is the primary focus of this book.
- Visual Studio 2005: MCMS applications can be developed using Visual Studio 2005. One of the greatest advantages here is the use of the new HTML-editing and designer features, along with improved developer productivity.

If you wish, you can continue to use SQL Server 2000 for your MCMS applications. However, we recommend upgrading to SQL Server 2005 and will use it throughout the examples in this book. There are numerous versions, or Stock Keeping Units (SKUs), of Visual Studio 2005, all of which are supported with SP2. Throughout the examples in this book, we will be using Visual Studio 2005 Professional Edition.

Unfortunately, SP2 is not a cumulative service pack and therefore requires an existing installation of SP1a. Likewise, there is no slipstreamed distribution of SP2. The SP2 distribution is suitable for all editions of MCMS. Mainly due to the extremely fast preparation and release of SP2 following the Release to Manufacturing (RTM) of .NET 2.0, Visual Studio 2005, and SQL Server 2005, the Microsoft installation information (KB906145) isn't particularly well documented and is somewhat confusing. Rest assured that the guidance in this article has been verified and tested for both installation scenarios covered.
Obtaining MCMS Service Pack 2

MCMS SP2 can be downloaded from the following locations:

- English: http://www.microsoft.com/downloads/details.aspx?FamilyId=3DE1E8F0-D660-4A2B-8B14-0FCE961E56FB&displaylang=en
- French: http://www.microsoft.com/downloads/details.aspx?FamilyId=3DE1E8F0-D660-4A2B-8B14-0FCE961E56FB&displaylang=fr
- German: http://www.microsoft.com/downloads/details.aspx?FamilyId=3DE1E8F0-D660-4A2B-8B14-0FCE961E56FB&displaylang=de
- Japanese: http://www.microsoft.com/downloads/details.aspx?FamilyId=3DE1E8F0-D660-4A2B-8B14-0FCE961E56FB&displaylang=ja

Installation Approach

We cover both an in-place upgrade to SP2 and a fresh installation in this article. Which approach you take depends on your specific requirements and your current MCMS installation, if any. If you wish to perform a fresh install, skip ahead to the Fresh Installation of Microsoft Content Management Server 2002 Service Pack 2 section, later in this article.

Upgrading to Microsoft Content Management Server 2002 Service Pack 2

This section details the steps required to upgrade an existing installation of MCMS SP1a that includes the Developer Tools for Visual Studio .NET 2003 component. The outline process for an upgrade is as follows:

1. Install Visual Studio 2005.
2. Install MCMS 2002 Service Pack 2.
3. Configure the development environment.
4. (Optional) Prepare the MCMS database for SQL Server 2005.
5. (Optional) Upgrade SQL Server.
6. (Optional) Install SQL Server 2005 Service Pack 1.

We will perform all steps while logged on as a local machine administrator.

Installing Visual Studio 2005

Visual Studio 2005 can be installed side by side with Visual Studio .NET 2003. Once we have completed the upgrade, we can remove Visual Studio .NET 2003 if we wish to develop MCMS applications using only SP2 and ASP.NET 2.0.

1. Insert the Visual Studio 2005 DVD, and on the splash screen, click Install Visual Studio 2005.
2. On the Welcome to the Microsoft Visual Studio 2005 installation wizard page, click Next.
3. On the Start Page, select the I accept the terms of the License Agreement checkbox, enter your Product Key and Name, and click Next.
4. On the Options page, select the Custom radio button, enter your desired Product install path, and click Next.
5. On the second Options page, select the Visual C# and Visual Web Developer checkboxes within the Language Tools section, and the Tools checkbox within the .NET Framework SDK section. Click Install. Feel free to install any additional features you may wish to use; the above selections are all that's required to follow the examples in this book.
6. Wait (or take a coffee break) while Visual Studio 2005 is installed.
7. When the Finish page appears, click Finish.
8. From the Visual Studio 2005 Setup dialog, you can choose to install the Product Documentation (MSDN Library) if desired.
9. From the Visual Studio 2005 Setup dialog, click Check for Visual Studio Service Releases to install any updates that may be available.
10. Click Exit.

Installing MCMS 2002 Service Pack 2

Next, we will install MCMS Service Pack 2.

1. From the Start menu, click Run. In the Open textbox, enter IISRESET /STOP and click OK. Wait while the IIS services are stopped.
2. Double-click the SP2 installation package.
3. On the Welcome to Microsoft Content Management Server 2002 SP2 Installation Wizard page, click Next.
4. Select the I accept the terms of this license agreement radio button, and click Next.
5. On the ready to begin the installation page, click Next.
6. Wait while Service Pack 2 is installed. During installation you may be prompted for the MCMS 2002 SP1a CD-ROM.
7. Once The Installation Wizard has completed page appears, click Finish.
8. If prompted, click Yes on the dialog to restart your computer, which will complete the installation. Otherwise, from the Start menu, click Run, enter IISRESET /START in the Open textbox, and click OK to restart the IIS services.

Stopping IIS prior to the installation of SP2 avoids potential problems with replacing locked files during the installation, and can prevent the requirement to reboot.

Configuring the Development Environment

Before continuing, a few additional steps are required to configure the development environment. We will:

- Configure the shortcut that opens Site Manager to bypass the Connect To dialog.
- Install the MCMS website and item templates in Visual Studio.

Site Manager Shortcut

During the installation of SP2, the Site Manager Start-menu shortcut is overwritten. To configure Site Manager to bypass the Connect To dialog, take the following steps:

1. Select Start | All Programs | Microsoft Content Management Server.
2. Right-click the Site Manager shortcut and click Properties.
3. In the Target textbox, replace

"C:\Program Files\Microsoft Content Management Server\Client\NRClient.exe" http:///NR/System/ClientUI/login.asp

with

"C:\Program Files\Microsoft Content Management Server\Client\NRClient.exe" http://localhost/NR/System/ClientUI/login.asp

4. Click OK.

It is possible to configure many different Site Manager shortcuts pointing to different MCMS entry points. However, for this book we will only use the entry point on localhost, which is the only supported configuration for MCMS development.

Visual Studio Templates

The installation of MCMS Service Pack 2 automatically registers the MCMS developer tools, such as the MCMS Template Explorer, in Visual Studio 2005. However, before we can create MCMS applications with Visual Studio, we need to make the website and item templates available.

1. Select Start | All Programs | Microsoft Visual Studio 2005 | Visual Studio Tools | Visual Studio 2005 Command Prompt.
2. Execute the following commands, replacing MCMS_INSTALL_PATH with the install location of MCMS (usually C:\Program Files\Microsoft Content Management Server) and PATH_TO_MY_DOCUMENTS_FOLDER with the location of your My Documents folder:

xcopy "MCMS_INSTALL_PATH\DevTools\NewProjectWizards80\Visual Web Developer" "PATH_TO_MY_DOCUMENTS_FOLDER\Visual Studio 2005\Templates\ProjectTemplates\Visual Web Developer" /E
xcopy "MCMS_INSTALL_PATH\DevTools\NewItemWizards80\Visual Web Developer" "PATH_TO_MY_DOCUMENTS_FOLDER\Visual Studio 2005\Templates\ItemTemplates\Visual Web Developer" /E

3. Execute the following command to register the templates with Visual Studio 2005:

devenv /setup

4. Close the command prompt.

This completes the steps to upgrade to SP2, and our environment is now ready for development! We can test our installation by viewing the version number in the SCA, connecting with Site Manager, or by using the Web Author. Of course, any existing MCMS web applications will at this time still be hosted by .NET Framework v1.1.

It is not necessary at this stage to register ASP.NET as detailed in the Microsoft installation instructions (KB 906145); this registration was performed by the Visual Studio 2005 installer.
Additionally, it is unnecessary to configure IIS to use ASP.NET 2.0 using the Internet Information Services snap-in, as Visual Studio 2005 automatically sets this option on each MCMS website application created. However, if you are installing on Windows Server 2003, you must configure the Virtual Website root and the MCMS Virtual Directory to use ASP.NET 2.0, as it is not possible to use two versions of ASP.NET within the same Application Pool.

The ActiveX controls that are part of HtmlPlaceholderControl are updated with SP2. Therefore, you will be prompted to install this control when first switching to edit mode. If you have pre-installed the controls using regsvr32 or Group Policy, as detailed at http://download.microsoft.com/download/4/2/5/4250f79a-c3a1-4003-9272-2404e92bb76a/MCMS+2002+-+(complete)+FAQ.htm#51C0CE4B-FC57-454C-BAAE-12C09421B57B, you might also be prompted, and you will need to update your distribution for the controls.

At this stage, you can choose to upgrade SQL Server now or move forward.

Preparing the MCMS Database for SQL Server 2005

Before upgrading our SQL Server installation to SQL Server 2005, we need to prepare the MCMS database so that it is compatible with SQL Server 2005.

1. Request the following MCMS hotfix from Microsoft: http://support.microsoft.com/?kbid=913401.
2. Run the hotfix executable to extract the files to a local folder, for example, c:\913401.
3. Copy both of the files (_dca.ini and _sp1aTosp2upgrade.sql) to the MCMS SQL install folder (typically c:\Program Files\Microsoft Content Management Server\Server\Setup Files\SQL Install). Overwrite the existing files.
4. Delete the temporary folder.
5. Select Start | Microsoft Content Management Server | Data Configuration Application.
6. On the splash screen, click Next.
7. In the Stop Service? dialog, click Yes.
8. On the Select MCMS Database page, click Next.
9. In the Upgrade Required dialog, click Yes.
10. On the Upgrade Database page, click Next.
11. In the Add an Administrator dialog, click No.
12. On the Database Configuration Application page, uncheck the Launch the SCA Now checkbox and click Finish.

Installing Mahara

Packt
19 Feb 2010
7 min read
What will you need?

Before you can install Mahara, you will need access to a Linux server. It may be that you run Linux on a laptop or desktop at home, or that your company or institution has its own Linux servers, in which case, great! If not, there are many hosting services available on the Internet that will give you access to a Linux server and therefore let you run Mahara. It is important that you get a server to which you have root access. It is also important that you set your server up with the following features:

- Database: Mahara must have a database to work. The databases supported are PostgreSQL version 8.1 or later and MySQL version 5.0.25 or later. The Mahara developers recommend that you use PostgreSQL if possible, but for most installations, MySQL will work just as well.
- PHP: Mahara requires PHP version 5.1.3 or later.
- Web server: The preferred web server is Apache.
- PHP extensions: Compulsory extensions: GD, JSON, cURL, libxml, SimpleXML, Session, pgSQL or Mysqli, EXIF, OpenSSL or XML-RPC (for networking support). Optional extension: Imagick.

Ask your resident IT expert about the features listed above if you don't understand what they mean. A quick way to install some of the software listed above is to use the apt-get install command if you are using an Ubuntu/Debian Linux system; a hedged sketch appears at the end of the Using the command line section below. See http://www.debian.org/doc/manuals/apt-howto/ to find out more.

Downloading Mahara

It's time for action. Let's start by seeing how easy it is to get a copy of Mahara for ourselves, and the best part is... it's free!

Time for action – downloading Mahara

1. Go to http://mahara.org.
2. Click on the download button on the Mahara home page. The button will be labeled with the name of the current version of Mahara.
3. You will now see a web page that lists all the various versions of Mahara, both previous and forthcoming versions, in Alpha and Beta. Choose the most recent version from the list in the format you prefer. We recommend that you use the .tar.gz type because it is faster to download than .zip.
4. You will be asked if you would like to open or save the file. Select Save File, and click OK.

That's all there is to it. Go to your Internet downloads folder. In there, you should see your newly downloaded Mahara package.

What just happened?

You have just taken your first step on the road to installing Mahara. We have seen the website to go to for downloading the most recent version and learned how to download the package in the format we prefer.

Using the command line

The best way of installing and administering your Mahara is to use the command line. This is a way of writing text commands to perform specific tasks, rather than having to use a graphical user interface. There are many things you can do from the command line, from common tasks such as copying and deleting files to more advanced ones such as downloading and installing software from the Internet. A lot of the things we will be doing in this section assume that you have Secure Shell (SSH) access to your server through the terminal command line. If you have a Linux or a Mac computer, you can use the terminal on your machine to SSH into your web server. Windows users can achieve the same functionality by downloading a free terminal client called PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html. Speak to your resident IT expert for more information on how to use the terminal, or see http://www.physics.ubc.ca/mbelab/computer/linuxintro/html/ for an introduction to the Linux command line.
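As promised above, here is a minimal sketch of installing the prerequisites with apt-get on a Debian/Ubuntu server of this era. The package names (apache2, php5, and so on) are assumptions for those distributions and may differ on yours:

sudo apt-get update
# Apache web server, PHP 5, and PostgreSQL, plus PHP extensions for
# PostgreSQL access, image handling (GD), and cURL
sudo apt-get install apache2 libapache2-mod-php5 php5 postgresql \
    php5-pgsql php5-gd php5-curl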
For now, let's just learn how to get the contents of our downloaded package into the correct place on our server.

Time for action – creating your Mahara file structure

1. Copy the mahara-1.2.0.tar.gz package you downloaded into your home directory on your web server. If you are copying the file to the server from your own computer, you can do this using the scp command (on Linux or Mac):

scp mahara-1.2.0.tar.gz servername:path/to/home/directory

On Windows, you may prefer to use a free FTP utility such as FileZilla (http://filezilla-project.org/).

2. Unpack the contents of the Mahara package on the Linux server. On the terminal, you can do this using the tar command:

tar xvzf mahara-1.2.0.tar.gz

3. You will now see a new folder called mahara-1.2.0; you will need to rename this to public. To do this on the terminal, you can use the mv command:

mv mahara-1.2.0 public

That's it! The Mahara code is now in place.

What just happened?

You just learned where to copy the Mahara package on your server and how to extract its contents.

Creating the database

A lot of the information created in your Mahara will be stored in a database. Mahara offers support for both PostgreSQL and MySQL databases; however, we prefer to use PostgreSQL. If you are interested, see http://mahara.org/interaction/forum/topic.php?id=302 for a discussion of why PostgreSQL is preferred to MySQL.

The way you create your database will depend on who you have chosen to host your Mahara. Sometimes, your web host will provide a graphical user interface to access your server database; get in touch with your local IT expert to find out how to do this. However, for smaller Mahara installations, we often prefer to use something like phpPgAdmin, which is a software application that allows you to manage PostgreSQL databases over the Internet. See http://phppgadmin.sourceforge.net for more information on setting up phpPgAdmin on your server. Also see http://www.phpmyadmin.net/ for phpMyAdmin, which works in a very similar way to phpPgAdmin but operates on MySQL databases. For now, let's get on with creating a Postgres database using our phpPgAdmin panel.

Time for action – creating the Mahara database

1. Open up your phpPgAdmin panel from your Internet browser and log in. The username is usually postgres. Contact your admin if you are unsure of the database password or how to locate the phpPgAdmin panel.
2. On the front page, there is a section that invites you to create a database; click there.
3. Give your database a relevant name, such as mysite_Mahara. Make sure you select the UTF8 collation from the drop-down box. Finally, click Create.
4. If you want to, it is a good idea to create a new user for each database you create. Use phpPgAdmin to create a new user.

That's it, you're done!

What just happened?

We just created the database for our Mahara installation using the open source phpPgAdmin tool available for Linux. Another way to create the database on your server is to use the database command-line tool.

Have a go hero – using the command line to create your database

Using the command line is a much more elegant way to create the database, and quicker once you get the hang of it. Why not have a go at creating the database using the command line, starting from the sketch below? For instructions on how to do this, see the database section of the Mahara installation guide: http://wiki.mahara.org/System_Administrator%27s_Guide/Installing_Mahara
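For reference, a minimal sketch of that command-line approach using PostgreSQL's client tools follows; the role name maharauser and database name mahara are illustrative assumptions:

# create a dedicated, unprivileged database role (-P prompts for a password)
sudo -u postgres createuser -P -S -D -R maharauser
# create a UTF8-encoded database owned by that role
sudo -u postgres createdb -O maharauser -E UTF8 mahara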
Setting up the data directory

Most of the data created in your Mahara is stored in the database. However, all the files that are uploaded by your users, such as their personal photos or documents, need to be stored in a separate place. This is where the data directory comes in. The data directory is simply a folder that holds all of the "stuff" belonging to your users. Everything is kept safe by placing the data directory outside of the home directory. This setup also makes it easy for you to migrate your Mahara to another server at some point in the future. The data directory is often referred to as the dataroot.
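As a rough sketch, creating the dataroot on a Debian/Ubuntu server might look like the following; the /var/lib/maharadata path and the www-data web server account are assumptions for that platform:

# create the dataroot outside the web root (and outside the public folder)
sudo mkdir /var/lib/maharadata
# give the web server user ownership so Mahara can write uploaded files
sudo chown -R www-data:www-data /var/lib/maharadata
sudo chmod 700 /var/lib/maharadata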

Web Controls in DotNetNuke

Packt
08 Oct 2010
7 min read
DotNetNuke 5.4 Cookbook

Over 100 recipes for installing, configuring, and customizing your own website with the DotNetNuke CMS:

- Create and customize your own DotNetNuke website with blog, forums, newsletters, wikis, and many more popular website features
- Learn custom module development and rich content management with sample code and tips
- Provides samples of styling and skinning a DotNetNuke portal
- Offers advanced programming tips combining DNN with AJAX and jQuery
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

(For more resources on DotNetNuke, see here.)

Introduction

One of the powerful features of DNN is the variety of flexible and reusable controls that are available for custom module development. These include many of the web controls seen on the core DNN pages. In this article, we will see how to add these web controls to custom modules and tie them to the tables in the database. In general, using these controls requires four simple steps:

1. Adding the control code to the View or Edit .ascx file
2. Adding a new property to the info object that will supply the values for the control
3. Binding the control to the values
4. Capturing the value from the control and saving it to the database (if on an Edit page)

Adding web controls to your Toolbox

If you frequently use the visual editor in the development tool to lay out your pages, this short recipe will show you how to add the DNN web controls to the Toolbox.

How to do it…

1. Launch the Development Tool.
2. Change the editor to Design mode.
3. Make sure the toolbox is displayed.
4. Right-click on the toolbox and select Choose Items….
5. Click on the Browse button.
6. Navigate to the /bin folder within the DNN source (DNNSource/website/bin).
7. Select DotNetNuke.WebControls.dll and click on Open.
8. Make sure the DNN controls are checked and click on OK. The web controls will now appear under the General section of the toolbox when you edit your code.
9. Next, we need to add a reference to DotNetNuke.WebUtility.dll. Right-click on the Employee project in the Solution Explorer and select Add Reference….
10. In the pop-up dialog, click on the Browse tab and navigate to the folder holding the DNN source files (for example, My Documents\DNNSource\Website\bin).
11. Select the file DotNetNuke.WebUtility.dll and click on OK.

Showing an e-mail link in a Datagrid

The Datagrid control is perfect for showing records from the database in a neatly formatted table. But the Datagrid can show many other types of information, and in this recipe we will see how to display an e-mail hyperlink in a column.

Getting ready

In this recipe, we will extend the Datagrid. We are using a function to generate an e-mail address for our example. This keeps the recipe simple, but isn't really practical; in a real production environment you would store the address in the database as part of the Employee table.

How to do it...

1. Launch the Development Tool and load the Employee project.
2. Double-click to open the ViewEmployee.ascx file.
3. Locate the Datagrid in the code and add a new template column just after the Salary column:

<dnn:textcolumn datafield="Salary" headertext="Salary"/>
<asp:TemplateColumn HeaderText="Email Contact">
  <ItemTemplate>
    <asp:HyperLink id="hlEmail"
      NavigateUrl='<%# "mailto:" & DataBinder.Eval(Container.DataItem, "EmailAddress") %>'
      Text='<%# DataBinder.Eval(Container.DataItem, "EmailAddress") %>'
      Target="_new" runat="server" />
  </ItemTemplate>
</asp:TemplateColumn>
</Columns>
</asp:datagrid>

4. Next, open the EmployeeInfo.vb file. Find the Public Properties section and add the read-only property EmailAddress to provide an e-mail address constructed from the employee name:

' public properties
Public ReadOnly Property EmailAddress() As String
  Get
    Return _EmpFirstName.Substring(0, 1) + _EmpLastName + "@yourcompany.com"
  End Get
End Property

5. Select Save All from the File menu.
6. To check the results, build and deploy the module to a development portal for testing. Go to the ACME Employee page to see the list of employees. The new e-mail hyperlink will appear on the right-hand side.

How it works...

In this recipe we saw the tasks needed to show an e-mail hyperlink in a Datagrid control:

- We took the Datagrid control and added a new template column holding an e-mail hyperlink control.
- We added a new property to the EmployeeInfo object to provide an e-mail address for the Datagrid.

Showing checkboxes in a Datagrid

An element that is useful to display in a Datagrid is a checkbox-like image indicating the status of a database record. These are not functioning checkboxes, but rather a visual indicator showing the data to be true or false. The control works by having two images: one with a checkmark that is shown when the value is true, and an unchecked image that is shown when the value is false. This recipe will work with any pair of images indicating true or false. Checkbox-like images are used in other DNN modules, so they are familiar to users, but you can experiment with your own images as well. This recipe has two basic steps:

- We will create a new property of the EmployeeInfo object called NewHire. This property checks the date of hire from the database and returns true if the employee was hired less than 30 days ago.
- We will add a new column to the Datagrid that evaluates the NewHire property and shows one image if NewHire is true and another image if it is false.

Getting ready

In this recipe we will extend the Datagrid.

How to do it...

1. Launch the Development Tool and load the Employee project.
2. Double-click to open the ViewEmployee.ascx file.
3. The first step is to add a new column to the Datagrid that will show the checkbox images. We will use the Eval function to check the NewHire property. Locate the Datagrid and add a new column just after the Salary column:

<dnn:textcolumn datafield="Salary" headertext="Salary"/>
<asp:TemplateColumn HeaderText="New Hire">
  <ItemTemplate>
    <asp:Image Runat="server" ID="imgApproved"
      ImageUrl="~/images/checked.gif"
      Visible='<%# DataBinder.Eval(Container.DataItem, "NewHire") = "1" %>' />
    <asp:Image Runat="server" ID="imgNotApproved"
      ImageUrl="~/images/unchecked.gif"
      Visible='<%# DataBinder.Eval(Container.DataItem, "NewHire") = "0" %>' />
  </ItemTemplate>
</asp:TemplateColumn>
</Columns>
</asp:datagrid>

4. Next, open the EmployeeInfo.vb file.
5. Find the Public Properties section and add the read-only property NewHire that returns true if the employee was hired in the last 30 days, and false otherwise:

' public properties
Public ReadOnly Property NewHire() As Boolean
  Get
    Return (Today() - _HireDate).Days < 30
  End Get
End Property

6. Select Save All from the File menu.
7. To check the results, build and deploy the module to a development portal for testing. Go to the ACME Employee page to see the list of employees. The new checkbox column will appear on the right-hand side. Although you cannot click on these checkboxes, they do provide a clear and easy-to-understand visual status for the records.

How it works...

In this recipe we saw the tasks needed to show checkbox images in a Datagrid control:

- We took the Datagrid control and added a new template column holding two image controls, one checked and the other unchecked.
- We added a new property to the EmployeeInfo object that returns true or false depending on the database record.
- We bound the property to the controls so that if the property was true, the checked image was displayed; if the property was false, the unchecked image was displayed.

Working with remote data

Packt
20 Aug 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Create a new document in your editor.

How to do it...

Copy the following code into your new document:

<!DOCTYPE html>
<html>
<head>
    <title>Kendo UI Grid How-to</title>
    <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.common.min.css">
    <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.default.min.css">
    <script src="kendo/js/jquery.min.js"></script>
    <script src="kendo/js/kendo.web.min.js"></script>
</head>
<body>
    <h3 style="color:#4f90ea;">Exercise 12 - Working with Remote Data</h3>
    <p><a href="index.html">Home</a></p>
    <script type="text/javascript">
        $(document).ready(function () {
            var serviceURL = "http://gonautilus.com/kendogen/KENDO.cfc?method=";
            var myDataSource = new kendo.data.DataSource({
                transport: {
                    read: {
                        url: serviceURL + "getArt",
                        dataType: "JSONP"
                    }
                },
                pageSize: 20,
                schema: {
                    model: {
                        id: "ARTISTID",
                        fields: {
                            ARTID: { type: "number" },
                            ARTISTID: { type: "number" },
                            ARTNAME: { type: "string" },
                            DESCRIPTION: { type: "CLOB" },
                            PRICE: { type: "decimal" },
                            LARGEIMAGE: { type: "string" },
                            MEDIAID: { type: "number" },
                            ISSOLD: { type: "boolean" }
                        }
                    }
                }
            });
            $("#myGrid").kendoGrid({
                dataSource: myDataSource,
                pageable: true,
                sortable: true,
                columns: [
                    { field: "ARTID", title: "Art ID" },
                    { field: "ARTISTID", title: "Artist ID" },
                    { field: "ARTNAME", title: "Art Name" },
                    { field: "DESCRIPTION", title: "Description" },
                    { field: "PRICE", title: "Price", template: '#= kendo.toString(PRICE,"c") #' },
                    { field: "LARGEIMAGE", title: "Large Image" },
                    { field: "MEDIAID", title: "Media ID" },
                    { field: "ISSOLD", title: "Sold" }
                ]
            });
        });
    </script>
    <div id="myGrid"></div>
</body>
</html>

How it works...

This example shows you how to access a remote JSONP datasource. JSONP allows you to work with cross-domain remote datasources. The JSONP format is like JSON, except that it adds padding, which is what the "P" in JSONP stands for. The padding can be seen if you look at the result of the AJAX call being made by the Kendo Grid: the server responds with the callback argument that was passed, wrapping the JSON in parentheses.
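To make the padding concrete, here is a hedged sketch of what the response to the grid's read request looks like; the callback name is auto-generated by jQuery at runtime and the record values are invented for illustration:

// The request URL gets a callback parameter appended automatically:
//   ...KENDO.cfc?method=getArt&callback=jQuery172094824768
//
// A plain JSON response would be just the data:
//   [{ "ARTID": 1, "ARTNAME": "Sunset", "PRICE": 250.00 }]
//
// The JSONP response wraps ("pads") that same data in a call to the
// callback, so the browser can execute it as a cross-domain <script>:
jQuery172094824768([
    { "ARTID": 1, "ARTNAME": "Sunset", "PRICE": 250.00 }
]);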
You'll notice that we created a serviceURL variable that points to the service we are calling to return our data. In the transport's read configuration, you'll see that we call the getArt method and specify the value of dataType as JSONP. Everything else should look familiar.

There's more...

Generally, the most common format used for remote data is JavaScript Object Notation (JSON). You'll find several examples of using OData on the Kendo UI demo website, as well as examples of performing create, update, and delete operations.

Outputting JSON with ASP MVC

In an ASP MVC or ASP.NET application, you'll want to set up your datasource like the following example. ASP has certain security requirements that force you to use POST instead of the default GET request when making AJAX calls, and it requires that you explicitly define the value of contentType as application/json when requesting JSON. By default, when you create an ASP MVC service that has a JsonResult action, ASP will nest the JSON data in an element named d:

var dataSource = new kendo.data.DataSource({
    transport: {
        read: {
            type: "POST",
            url: serviceURL,
            dataType: "JSON",
            contentType: "application/json",
            data: serverData
        },
        parameterMap: function (data, operation) {
            return kendo.stringify(data);
        }
    },
    schema: {
        data: "d"
    }
});

Summary

This article discussed how to work with remote data, using the example of binding a Kendo UI Grid to a remote JSONP datasource.

Resources for Article:

Further resources on this subject:
- Constructing and Evaluating Your Design Solution [Article]
- Data Manipulation in Silverlight 4 Data Grid [Article]
- Quick start – creating your first grid [Article]

Using Client Methods

Packt
26 May 2015
14 min read
In this article by Isaac Strack, author of the book Meteor Cookbook, we will cover the following recipe:

- Using the HTML FileReader to upload images

(For more resources related to this topic, see here.)

Using the HTML FileReader to upload images

Adding files via a web application is a pretty standard functionality nowadays. That doesn't mean that it's easy to do programmatically. New browsers support Web APIs to make our job easier, and a lot of quality libraries/packages exist to help us navigate the file reading/uploading forests, but, being the coding lumberjacks that we are, we like to know how to roll our own! In this recipe, you will learn how to read and upload image files to a Meteor server.

Getting ready

We will be using a default project installation, with client, server, and both folders, and with the addition of a special folder for storing images. In a terminal window, navigate to where you would like your project to reside, and execute the following commands:

$ meteor create imageupload
$ cd imageupload
$ rm imageupload.*
$ mkdir client
$ mkdir server
$ mkdir both
$ mkdir .images

Note the dot in the .images folder. This is really important, because we don't want the Meteor application to automatically refresh every time we add an image to the server! By creating the images folder as .images, we are hiding it from the eye-of-Sauron-like monitoring system built into Meteor, because folders starting with a period are "invisible" to Linux or Unix.

Let's also take care of the additional Atmosphere packages we'll need. In the same terminal window, execute the following commands:

$ meteor add twbs:bootstrap
$ meteor add voodoohop:masonrify

We're now ready to get started on building our image upload application.

How to do it…

We want to display the images we upload, so we'll be using a layout package (voodoohop:masonrify) for display purposes. We will also initiate uploads via drag and drop, to cut down on UI components. Lastly, we'll be relying on an npm module to make the file upload much easier. Let's break this down into a few steps, starting with the user interface.

1. In the [project root]/client folder, create a file called imageupload.html and add the following templates and template inclusions:

<body>
  <h1>Images!</h1>
  {{> display}}
  {{> dropzone}}
</body>

<template name="display">
  {{#masonryContainer
    columnWidth=50
    transitionDuration="0.2s"
    id="MasonryContainer"
  }}
    {{#each imgs}}
      {{> img}}
    {{/each}}
  {{/masonryContainer}}
</template>

<template name="dropzone">
  <div id="dropzone" class="{{dropcloth}}">Drag images here...</div>
</template>

<template name="img">
  {{#masonryElement "MasonryContainer"}}
    <img src="{{src}}"
      class="display-image"
      style="width:{{calcWidth}}"/>
  {{/masonryElement}}
</template>

2. We want to add just a little bit of styling, including an "active" state for our drop zone, so that we know when it is safe to drop files onto the page.
3. In your [project root]/client/ folder, create a new style.css file and enter the following CSS style directives:

body {
  background-color: #f5f0e5;
  font-size: 2rem;
}

div#dropzone {
  position: fixed;
  bottom: 5px;
  left: 2%;
  width: 96%;
  height: 100px;
  margin: auto auto;
  line-height: 100px;
  text-align: center;
  border: 3px dashed #7f898d;
  color: #7f8c8d;
  background-color: rgba(210,200,200,0.5);
}

div#dropzone.active {
  border-color: #27ae60;
  color: #27ae60;
  background-color: rgba(39, 174, 96,0.3);
}

img.display-image {
  max-width: 400px;
}

4. We now want to create an Images collection to store references to our uploaded image files. To do this, we will be relying on EJSON. EJSON is Meteor's extended version of JSON, which allows us to quickly transfer binary files from the client to the server. In your [project root]/both/ folder, create a file called imgFile.js and add the MongoDB collection by adding the following line:

Images = new Mongo.Collection('images');

5. We will now create the imgFile object, and declare an EJSON type of imgFile to be used on both the client and the server. After the preceding Images declaration, enter the following code:

imgFile = function (d) {
  d = d || {};
  this.name = d.name;
  this.type = d.type;
  this.source = d.source;
  this.size = d.size;
};

6. To properly initialize imgFile as an EJSON type, we need to implement the fromJSONValue(), prototype(), and toJSONValue() methods. We will then declare imgFile as an EJSON type using the EJSON.addType() method. Add the following code just below the imgFile function declaration:

imgFile.fromJSONValue = function (d) {
  return new imgFile({
    name: d.name,
    type: d.type,
    source: EJSON.fromJSONValue(d.source),
    size: d.size
  });
};

imgFile.prototype = {
  constructor: imgFile,
  typeName: function () {
    return 'imgFile';
  },
  equals: function (comp) {
    return (this.name == comp.name &&
      this.size == comp.size);
  },
  clone: function () {
    return new imgFile({
      name: this.name,
      type: this.type,
      source: this.source,
      size: this.size
    });
  },
  toJSONValue: function () {
    return {
      name: this.name,
      type: this.type,
      source: EJSON.toJSONValue(this.source),
      size: this.size
    };
  }
};

EJSON.addType('imgFile', imgFile.fromJSONValue);

The EJSON code used in this recipe is heavily inspired by Chris Mather's Evented Mind file upload tutorials. We recommend checking out his site and learning even more about file uploading at https://www.eventedmind.com.
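As an aside, it may help to see what this registration buys us. The following is a minimal sketch (not part of the recipe, with invented values) showing that an imgFile can now round-trip through EJSON's stringify/parse, binary source included:

var original = new imgFile({ name: 'photo.png', type: 'image/png', size: 3 });
original.source = new Uint8Array([137, 80, 78]); // first bytes of a PNG header

// EJSON uses our toJSONValue() to serialize the instance...
var wire = EJSON.stringify(original); // '{"$type":"imgFile","$value":{...}}'

// ...and our fromJSONValue() to rebuild a true imgFile on the other side
var copy = EJSON.parse(wire);
console.log(copy instanceof imgFile); // true
console.log(original.equals(copy));   // true (same name and size)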
Even though it's usually cleaner to put client-specific and server-specific code in separate files, we are going to keep everything in the same file because it is all related to the imgFile code we just entered.

7. Just below the EJSON.addType() function call, add the following Meteor.isClient and Meteor.isServer code:

if (Meteor.isClient) {
  _.extend(imgFile.prototype, {
    read: function (f, callback) {
      var fReader = new FileReader;
      var self = this;
      callback = callback || function () {};

      fReader.onload = function () {
        self.source = new Uint8Array(fReader.result);
        callback(null, self);
      };

      fReader.onerror = function () {
        callback(fReader.error);
      };

      fReader.readAsArrayBuffer(f);
    }
  });

  _.extend(imgFile, {
    read: function (f, callback) {
      return new imgFile(f).read(f, callback);
    }
  });
};

if (Meteor.isServer) {
  var fs = Npm.require('fs');
  var path = Npm.require('path');

  _.extend(imgFile.prototype, {
    save: function (dirPath, options) {
      var fPath = path.join(process.env.PWD, dirPath, this.name);
      var imgBuffer = new Buffer(this.source);
      fs.writeFileSync(fPath, imgBuffer, options);
    }
  });
};

8. Next, we will add some Images collection insert helpers. We will provide the ability to add either references (URIs) to images, or to upload files into our .images folder on the server. To do this, we need some Meteor.methods. In the [project root]/server/ folder, create an imageupload-server.js file, and enter the following code:

Meteor.methods({
  addURL: function (uri) {
    Images.insert({ src: uri });
  },
  uploadIMG: function (iFile) {
    iFile.save('.images', {});
    Images.insert({ src: 'images/' + iFile.name });
  }
});

9. We now need to establish the code to process/serve images from the .images folder. We need to circumvent Meteor's normal asset-serving capabilities for anything found in the (hidden) .images folder. To do this, we will use the fs npm module, and redirect any content requests accessing the /images/ folder address to the actual .images folder found on the server. Just after the Meteor.methods block entered in the preceding step, add the following WebApp.connectHandlers.use() function code:

var fs = Npm.require('fs');
WebApp.connectHandlers.use(function (req, res, next) {
  var re = /^\/images\/(.*)$/.exec(req.url);
  if (re !== null) {
    var filePath = process.env.PWD + '/.images/' + re[1];
    var data = fs.readFileSync(filePath);
    res.writeHead(200, {
      'Content-Type': 'image'
    });
    res.write(data);
    res.end();
  } else {
    next();
  }
});

10. Our images display template is entirely dependent on the Images collection, so we need to add the appropriate reactive Template.helpers function on the client side. In your [project root]/client/ folder, create an imageupload-client.js file, and add the following code:

Template.display.helpers({
  imgs: function () {
    return Images.find();
  }
});

11. If we add pictures we don't like and want to remove them quickly, the easiest way to do that is by double-clicking on a picture. Let's add the code for doing that just below the Template.helpers method in the same file:

Template.display.events({
  'dblclick .display-image': function (e) {
    Images.remove({
      _id: this._id
    });
  }
});

12. Now for the fun stuff. We're going to add drag-and-drop visual feedback cues, so that whenever we drag anything over our drop zone, the drop zone provides visual feedback to the user; likewise, once we move away from the zone, or actually drop items, the drop zone returns to normal. We will accomplish this through a Session variable, which modifies the CSS class in the div.dropzone element whenever it is changed.
13. At the bottom of the imageupload-client.js file, add the following Template.helpers and Template.events code blocks:

Template.dropzone.helpers({
  dropcloth: function () {
    return Session.get('dropcloth');
  }
});

Template.dropzone.events({
  'dragover #dropzone': function (e) {
    e.preventDefault();
    Session.set('dropcloth', 'active');
  },
  'dragleave #dropzone': function (e) {
    e.preventDefault();
    Session.set('dropcloth');
  }
});

14. The last task is to evaluate what has been dropped into our page drop zone. If what's been dropped is simply a URI, we will add it to the Images collection as is. If it's a file, we will store it, create a URI to it, and then append it to the Images collection. In the imageupload-client.js file, just before the final closing curly bracket inside the Template.dropzone.events code block, add the following event handler logic:

'dragleave #dropzone': function (e) {
  ...
},
'drop #dropzone': function (e) {
  e.preventDefault();
  Session.set('dropcloth');

  var files = e.originalEvent.dataTransfer.files;
  var images = $(e.originalEvent.dataTransfer.getData('text/html')).find('img');
  var fragment = _.findWhere(e.originalEvent.dataTransfer.items, {
    type: 'text/html'
  });
  if (files.length) {
    _.each(files, function (e, i, l) {
      imgFile.read(e, function (error, imgfile) {
        Meteor.call('uploadIMG', imgfile, function (e) {
          if (e) {
            console.log(e.message);
          }
        });
      })
    });
  } else if (images.length) {
    _.each(images, function (e, i, l) {
      Meteor.call('addURL', $(e).attr('src'));
    });
  } else if (fragment) {
    fragment.getAsString(function (e) {
      var frags = $(e);
      var img = _.find(frags, function (e) {
        return e.hasAttribute('src');
      });
      if (img) Meteor.call('addURL', img.src);
    });
  }
}

15. Save all your changes and open a browser to http://localhost:3000. Find some pictures on any website, and drag and drop them into the drop zone. Images dragged from web pages appear on your page immediately, and dropped image files are just as quickly uploaded, saved to the .images/ folder, and displayed.

How it works…

There are a lot of moving parts to the code we just created, but we can refine it down to four areas.

First, we created a new imgFile object, complete with the internal functions added via the imgFile.prototype = {…} declaration. The functions added here (typeName, equals, clone, toJSONValue, and fromJSONValue) are primarily used to allow the imgFile object to be serialized and deserialized properly on the client and the server. Normally this isn't needed, as we can insert into Mongo collections directly, but in this case it is needed because we want to use the FileReader and Node fs packages on the client and server respectively to directly load and save image files, rather than write them to a collection.

Second, the underscore _.extend() method is used on the client side to create the read() function, and on the server side to create the save() function. read takes the file(s) that were dropped, reads each file into an ArrayBuffer, and then calls the included callback, which uploads the file to the server.
The save function on the server side reads the ArrayBuffer and writes the resulting image file to a specified location on the server (in our case, the .images folder).

Third, we created an on-dropped event handler, using the 'drop #dropzone' event. This handler determines whether an actual file was dragged and dropped, or if it was simply an HTML <img> element, which contains a URI link in its src property. In the case of a file (determined by files.length), we call the imgFile.read command, and pass a callback with an immediate Meteor.call('uploadIMG'…) method. In the case of an <img> tag, we parse the URI from the src attribute, and use Meteor.call('addURL') to update the Images collection.

Fourth, we have our helper functions for updating the UI. These include the Template.helpers functions, the Template.events functions, and the WebApp.connectHandlers.use() function, used to properly serve uploaded images without having to update the UI each time a file is uploaded. Remember, Meteor will update the UI automatically on any file change. This unfortunately includes static files, such as images. To work around this, we store our images in a folder invisible to Meteor (.images). To redirect traffic to that hidden folder, we implement the .use() method to listen for any traffic meant to hit the '/images/' folder, and redirect it accordingly.

As with any complex recipe, there are other parts to the code, but this should cover the major aspects of file uploading (the four areas mentioned in the preceding section).

There's more…

The next logical step is to not simply copy the URIs from remote image files, but rather to download, save, and serve local copies of those remote images. This can also be done using the FileReader and Node fs libraries, and can be done either through the existing client code mentioned in the preceding section, or directly on the server, as a type of cron job. For more information on FileReader, please see the MDN FileReader article, located at https://developer.mozilla.org/en-US/docs/Web/API/FileReader.

Summary

In this article, you have learned the basic steps to upload images using the HTML FileReader.

Resources for Article:

Further resources on this subject:
- Meteor.js JavaScript Framework: Why Meteor Rocks! [article]
- Quick start - creating your first application [article]
- Building the next generation Web with Meteor [article]

Getting Started with the Development Environment Using Microsoft Content Management Server

Packt
15 Apr 2010
8 min read
Visual Web Developer Websites

The key difference between developing MCMS applications with Visual Studio .NET 2003 and Visual Studio 2005 is that ASP.NET applications (and therefore MCMS applications) are now built using the Visual Web Developer component of Visual Studio 2005. Visual Web Developer introduces a new "project system", which no longer uses project (*.csproj) files and simply accesses web applications via HTTP or the file system.

In Visual Studio .NET 2003, MCMS applications were created by choosing the MCMS Web Application project type. This project type was effectively a regular ASP.NET web application project with some modifications required by MCMS, such as additional references, the web authoring console, and modifications to the web.config. In Visual Studio 2005, developing web applications has been separated from developing other project types; the feature to develop a web application has been moved into the Visual Web Developer component. To reflect this design change, you no longer use New Project but New Web Site from the File menu in Visual Studio 2005 to create a new website.

Visual Studio 2005 ships with several website templates. The installation of the developer tools for MCMS extends the list of website templates with three additional templates: MCMS Empty Web Project, MCMS Web Application, and MCMS Web Service. These templates are actually modified versions of the similarly named standard templates shipped with Visual Studio 2005.

Creating an MCMS Web Application

Let's create an MCMS web application using Visual Studio 2005.

1. Open Visual Studio 2005.
2. From the File menu, choose New, followed by Web Site…
3. In the New Web Site dialog, select MCMS Web Application within the My Templates section. If the MCMS Web Application template does not appear in the My Templates section, the MCMS Visual Studio 2005 templates have not been correctly installed. Please refer to the Visual Studio Templates section of Article 1 for installation details.
4. In the Location combo box, select HTTP, and in the textbox, enter http://localhost/mcmstest. MCMS applications have to be created using a local installation of IIS and do not support being created using the file system, which makes use of the built-in Visual Web Developer web server. Note that the New Web Site wizard will not prevent you from configuring an invalid website using the File System and the Visual Web Developer web server.
5. In the Language combo box, select Visual C#, and click on OK. If you wish, you can also choose VB.NET, but the samples in this article series are all written in Visual C#.
6. Visual Studio 2005 will create your project and initialize the MCMS Template Explorer. When it's done, you will be presented with an MCMS website with the basic foundation files. The MCMS Template Explorer within Visual Studio 2005 logs on to the MCMS repository using the credentials of the currently logged-on user. If this operation fails, check your MCMS Rights Groups configuration; the Template Explorer does not allow you to specify alternative credentials.
7. Click on the MCMS Template Explorer tab at the bottom of the Solution Explorer, and note that the Template Gallery is accessible. If you don't see the Template Explorer, it is likely you didn't select HTTP in the Location combo box in step 4. You may also not see the Template Explorer if you are using a locale other than US English, in which case you need to install hotfix 914195 as detailed in Article 1.
8. Click on the Solution Explorer tab at the bottom of the MCMS Template Explorer, and click on the Refresh button. Notice that, unlike web applications from the ASP.NET 1.x days, the 'CMS' virtual directory is now part of the website. If you examine the contents of the website, its references, and the web.config file, you will see that the necessary MCMS files and configuration changes have been added.

Checking the Website Configuration Settings in IIS

We can verify that Visual Studio 2005 has configured the MCMS application correctly by using the Internet Information Services snap-in. First, let's ensure that the mcmstest website is indeed running on ASP.NET 2.0.

1. From the Start menu, click on Run, enter inetmgr in the Run textbox, and click on OK.
2. In Internet Information Services, expand the tree view to display the mcmstest application.
3. Right-click the mcmstest application and click on Properties.
4. Click the ASP.NET tab and note that the ASP.NET version is correctly configured as 2.0.50727.

When developing on Windows Server 2003, the Virtual Website root must run in the same worker process (that is, Application Pool) as all MCMS applications so that the MCMS ISAPI filter can work as expected. This filter cannot route requests across worker-process boundaries. In effect, this means that all MCMS applications will share the same ASP.NET version, as ASP.NET does not support side-by-side execution of different versions inside the same worker process. This is not necessary with IIS on Windows XP, as it does not use Worker Process Isolation mode.

Next, we will check the authentication settings. For now, we will configure the website to use integrated Windows authentication, so only users with a domain or local user account will have access to the site. Later, in Article 6, we will show alternative authentication methods such as Forms Authentication.

5. Click on the Directory Security tab followed by the Edit... button, and note that the permissions are correctly inherited from the Virtual Web Site settings. In this example, we will use integrated Windows authentication. Note that we configured the Virtual Web Site to use Windows authentication in Article 1; authentication methods can be configured on a per-application basis.
6. Click on Cancel and close Internet Information Services.
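If you prefer the command line, the same version check can be made with the aspnet_regiis tool that ships with the .NET Framework. This is a hedged alternative to the snap-in steps above; the sample output lines are illustrative, not captured from a real machine:

"%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -lk

REM -lk lists each IIS metabase path with its mapped ASP.NET version.
REM The mcmstest application should be listed against 2.0.50727.0, e.g.:
REM   W3SVC/                         1.1.4322.0
REM   W3SVC/1/ROOT/mcmstest/         2.0.50727.0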
Developing MCMS Web Applications

We are now ready to get started on developing our ASP.NET 2.0-based MCMS applications. There are a number of quirks with the MCMS web application templates that we need to bear in mind during development.

1. Switch back to Visual Studio 2005.
2. In Solution Explorer, right-click on the website (http://localhost/mcmstest), and click on New Folder. Enter Templates as the folder name.
3. Right-click on the Templates folder and click on Add New Item…
4. In the Add New Item dialog, select the MCMS Template File item and enter Basic.aspx in the Name textbox. Click on Add.
5. The new Basic.aspx template file is created and opened in Source View. Examine the contents of Basic.aspx.

Correcting Basic.aspx

Notice that the Basic.aspx file has a few problems. Some elements are highlighted by IntelliSense "squiggles", and if we attempt to build the website, a number of errors will prevent a successful build. Let's correct the Basic.aspx template file.

1. In the CodeFile attribute of the Page directive on line one, replace CodeFile="~/Basic.aspx.cs" with CodeFile="Basic.aspx.cs". The MCMS Web Application New Item template doesn't recognize that our new template file has been created in a subdirectory, and therefore the CodeFile attribute is incorrect. New templates in the web root are not affected.
2. From the Build menu, choose Build Web Site. Notice that the website now builds, but still includes a number of errors.
3. Correct the DOCTYPE. On line 19, replace

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">

with

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

4. Correct the <html> element. On line 20, replace <html> with <html xmlns="http://www.w3.org/1999/xhtml">.
5. Delete the comments on lines 4 through 17. The comments within an inline ASP script block (<% %>) are unnecessary.
6. Delete the <meta> tags on lines 10 through 13:

<meta name="GENERATOR" content="Microsoft Visual Studio .NET 8.0">
<meta name="CODE_LANGUAGE" content="C#">
<meta name="vs_defaultClientScript" content="JavaScript">
<meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie5">

These <meta> tags are unnecessary.

7. Correct the WebControls Register directive. On line 2, replace

<%@ Register TagPrefix="cms" Namespace="Microsoft.ContentManagement.WebControls" Assembly="Microsoft.ContentManagement.WebControls" %>

with

<%@ Register Assembly="Microsoft.ContentManagement.WebControls, Version=5.0.1200.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Namespace="Microsoft.ContentManagement.WebControls" TagPrefix="cms" %>

The original Register directive is not correctly recognized by Visual Studio 2005, and prevents IntelliSense from including the cms tag prefix.

8. From the Build menu, choose Build Web Site. Notice that the website now builds free of any errors and that the cms tag prefix is understood.

Your template file should now be as follows:

<%@ Page Language="c#" AutoEventWireup="false" CodeFile="Basic.aspx.cs" Inherits="Basic.Basic" %>
<%@ Register Assembly="Microsoft.ContentManagement.WebControls, Version=5.0.1200.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Namespace="Microsoft.ContentManagement.WebControls" TagPrefix="cms" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Basic</title>
    <cms:RobotMetaTag runat="server"></cms:RobotMetaTag>
</head>
<body>
    <form id="Form1" method="post" runat="server">
    </form>
</body>
</html>
Understanding jQuery and WordPress Together

Packt
27 Sep 2010
11 min read
WordPress 3.0 jQuery: Enhance your WordPress website with the captivating effects of jQuery.

- Enhance the usability and increase visual interest in your WordPress 3.0 site with easy-to-implement jQuery techniques
- Create advanced animations, use the UI plugin to your advantage within WordPress, and create custom jQuery plugins for your site
- Turn your jQuery plugins into WordPress plugins and share them with the world
- Implement all of the above jQuery enhancements without ever having to make a WordPress content editor switch over into HTML view

(For more resources on WordPress and jQuery, see here.)

Two ways to "plugin" jQuery into a WordPress site

You're aware that WordPress is an impressive publishing platform. Its core strength lies in its near-perfect separation of content, display, and functionality. Likewise, jQuery is an impressive JavaScript library with a lot of effort spent on making it work across platforms, be very flexible and extensible, and yet degrade elegantly (if a user doesn't have JavaScript enabled for some reason). You're aware that WordPress themes control the look and feel of your site and that WordPress plugins can help your site do more, but we're going to take a look at exactly how those two components work within the WordPress system and how to use jQuery from either a theme or a WordPress plugin. In doing so, you'll be better able to take advantage of them when developing your jQuery enhancements.

Speaking of jQuery enhancements, jQuery scripts can be turned into their own type of plugins, not to be confused with WordPress plugins. This makes the work you do in jQuery easily portable to different projects and uses. Between these three components (themes, WordPress plugins, and jQuery plugins), you'll find that just about anything you can dream of creating is at your fingertips. Even better, you'll realize that most of the work is already done. All three of these component types have extensive libraries of already developed third-party creations, and most are free! If they aren't free, you'll be prepared to determine whether they're worth their price. By understanding the basics of editing themes and creating your own WordPress and jQuery plugins, you'll be ready to traverse the world of third-party creations and find the best solutions for your projects. You'll also be able to determine whether it's better or faster to work with another developer's themes, plugins, or jQuery plugins, versus creating your own from scratch.

WordPress themes overview

A WordPress theme is, according to the WordPress Codex, a collection of files that work together to produce a graphical interface with an underlying unifying design for a weblog. Themes comprise a collection of template files and web collateral such as images, CSS stylesheets, and JavaScript. Themes are what allow you to modify the way your WordPress site looks without having to know much about how WordPress works, much less change how it works. There are plenty of sites that host free themes and/or sell premium WordPress themes. A quick Google search for "wordpress themes" will give you an idea of the enormity of the options available. However, when first looking for or researching themes, a good place to start is always WordPress's free theme gallery, where you can easily review and demo different themes and styles: http://wordpress.org/extend/themes/. The next screenshot shows the main page of the WordPress theme directory:
Once you've selected a theme to use or work with, you'll activate it by navigating to Administration | Appearance | Themes in the left-hand panel of your WordPress installation's administration panel. The next screenshot displays the Manage Themes panel.

That's the minimum you need to know about themes as a WordPress user.

The basics of a WordPress theme

The WordPress theme essentially contains the HTML and CSS that wrap and style your WordPress content. Thus, it's usually the first place you'll start when incorporating jQuery into a site. Most of the time, this is a good approach. Understanding a bit more about how themes work can only make your jQuery development go a little smoother. Let's take a look at how themes are structured and best practices for editing them.

Want to know more about WordPress theme design? This title focuses on what you most need to know to work with jQuery in WordPress. If you're interested in WordPress theme development, I highly recommend April Hodge Silver and Hasin Hayder's WordPress 2.7 Complete. Along with covering the complete core competencies for managing a WordPress site, it has an overview of editing and creating standard themes for WordPress. If you want to really dig deep into theme design, my title WordPress 2.8 Theme Design will walk you through creating a working HTML and CSS design mockup and coding it up from scratch.

Understanding the template hierarchy

We've discussed that a WordPress theme comprises many file types, including template pages. Template pages have a structure, or hierarchy, to them. That means if one template type is not present, the WordPress system will call up the next template type in the hierarchy. This allows developers to create themes that are fantastically detailed, taking full advantage of all of the hierarchy's available template page types, or unbelievably simple. It's possible to have a fully functioning WordPress theme that consists of no more than an index.php file!

To really leverage a theme for jQuery enhancement (not to mention help you with general WordPress troubleshooting), it's good to start with an understanding of the theme's hierarchy. In addition to these template files, themes of course also include image files, stylesheets, and even custom template pages and PHP code files. Essentially, you can have 14 different default page templates in your WordPress theme, not including your style.css sheet or includes such as header.php, sidebar.php, and searchform.php. You can have more template pages than that if you take advantage of WordPress's capability for individual custom page, category, and tag templates. If you open up the default theme's directory that we've been working with, you'll see most of these template files as well as an image directory, style.css, and the js directory with the custom-jquery.js file. The following screenshot shows you the main files in WordPress 3.0's new default theme, Twenty Ten.

The next list contains the general template hierarchy rules. The absolute simplest theme you can have must contain an index.php page. If no other specific template pages exist, then index.php is the default. You can then begin expanding your theme by adding the following pages:

- archive.php trumps index.php when a category, tag, date, or author page is viewed.
- home.php trumps index.php when the home page is viewed.
- single.php trumps index.php when an individual post is viewed.
- search.php trumps index.php when the results from a search are viewed.
- 404.php trumps index.php when the URI address finds no existing content.
- page.php trumps index.php when looking at a static page.
- A custom template page, such as page_about.php, when selected through the page's Administration panel, trumps page.php, which trumps index.php when that particular page is viewed.
- category.php trumps archive.php, which trumps index.php when a category is viewed.
- A custom category-ID page, such as category-12.php, trumps category.php, which trumps archive.php, which trumps index.php.
- tag.php trumps archive.php, which trumps index.php when a tag page is viewed.
- A custom tag-tagname page, such as tag-reviews.php, trumps tag.php, which trumps archive.php, which trumps index.php.
- author.php trumps archive.php, which trumps index.php when an author page is viewed.
- date.php trumps archive.php, which trumps index.php when a date page is viewed.

You can learn more about the WordPress theme template hierarchy here: http://codex.wordpress.org/Template_Hierarchy.

A whole new theme

If you want to create a new theme, or if you'll be modifying a theme considerably, you'll want to create a directory with a file structure similar to the hierarchy explained previously. Again, because it's hierarchical, you don't have to create every single page suggested; higher-up pages will assume the role unless you decide otherwise. As I've mentioned, it is possible to have a working theme with nothing but an index.php file.

I'll be modifying the default theme, yet would like the original default theme available for reference, so I'll make a copy of the default theme's directory and rename it to twentyten-wp-jq. WordPress depends on the theme directory's namespace, meaning each theme requires a uniquely named folder; otherwise, you'll copy over another theme! The next screenshot shows this directory's creation.

I'll then open up the style.css file and modify the information at the beginning of the CSS file:

    /*
    Theme Name: Twenty Ten - edited for Chapter 3 of WordPress & jQuery
    Theme URI: http://wordpress.org/
    Description: The 2010 default theme for WordPress.
    Author: the WordPress team & Tessa Silver
    Version: 1.0
    Tags: black, blue, white, two-columns, fixed-width, custom-header,
    custom-background, threaded-comments, sticky-post, translation-ready,
    microformats, rtl-language-support, editor-style
    */

My "new" theme will then show up in the administration panel's Manage Themes page. You can take a new screenshot to update your new or modified theme; if there is no screenshot, the frame will display a grey box. As the look of the theme is going to change a little, I've removed the screenshot.png file from the directory for now, as you can see in the next screenshot.

The Loop

We know how useful it is when jQuery "loops" through selected elements in a wrapper for you. WordPress does a little looping of its own; in fact, it's important enough to be named "The Loop". The Loop is an essential part of your WordPress theme. It displays your posts in chronological order and lets you define custom display properties with various WordPress template tags wrapped in HTML markup.

The Loop in WordPress is a while loop, and it therefore starts with the PHP code while (have_posts()):, followed by the template tag the_post(). All the markup and additional template tags are then applied to each post that gets looped through for display. The Loop is then ended with the PHP endwhile statement.
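Stripped of all markup and template tags, the skeleton of The Loop is just this (a minimal sketch; real themes fill the body with the tags shown in the next example, and usually wrap the loop in the conditional check discussed below):

    <?php if (have_posts()) : while (have_posts()) : the_post(); ?>
      <!-- markup and template tags for each post go here -->
    <?php endwhile; endif; ?>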
Every template page view can have its own loop, so you can modify and change the look and layout of each type of post sort. Every template page is, essentially, just sorting your posts in different ways. For example, different category or tag template pages sort and refine your posts down to meet specific criteria. Those sorted posts can appear different from posts on your main page, or in your archive lists, and so on. The next example is a very simple loop taken from WordPress 2.9.2's default Kubrick theme:

    ...
    <?php while (have_posts()) : the_post(); ?>
      <div <?php post_class() ?> id="post-<?php the_ID(); ?>">
        <h2>
          <a href="<?php the_permalink() ?>" rel="bookmark"
            title="Permanent Link to <?php the_title_attribute(); ?>">
            <?php the_title(); ?>
          </a>
        </h2>
        <small><?php the_time('F jS, Y') ?>
          <!-- by <?php the_author() ?> -->
        </small>
        <div class="entry">
          <?php the_content('Read the rest of this entry &raquo;'); ?>
        </div>
        <p class="postmetadata">
          <?php the_tags('Tags: ', ', ', '<br />'); ?>
          Posted in <?php the_category(', ') ?> |
          <?php edit_post_link('Edit', '', ' | '); ?>
          <?php comments_popup_link('No Comments &#187;', '1 Comment &#187;',
            '% Comments &#187;'); ?>
        </p>
      </div>
    <?php endwhile; ?>
    ...

The loop is tucked into a larger if/else statement that, most importantly, checks whether there are posts to sort. If there are no matching posts to display, a "Sorry" message is displayed, and the searchform.php file is included with the get_search_form() include tag.

The new WordPress 3.0 Twenty Ten theme has its loop separated out into its own template page called loop.php, and it has quite a few more if/else statements within it so that the same loop code can handle many different situations, instead of requiring individual loops for different template pages. On the whole, the same basic template tags, as well as conditional and include tags, are used in the new theme as they were in the previous default theme. There are now just a few new template and include tags that help you streamline your theme. Let's take a closer look at some of these template tags, include and conditional tags, and the API hooks available to us in a WordPress theme.
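To tie the hierarchy and The Loop together, here is a minimal sketch of the kind of index.php that could power an entire theme on its own. The markup and class names are illustrative rather than taken from any particular theme, but every function used (get_header, get_sidebar, get_footer, get_search_form, and the loop tags) is a standard WordPress template tag:

    <?php get_header(); ?>
    <div id="content">
    <?php if (have_posts()) : while (have_posts()) : the_post(); ?>
      <div <?php post_class(); ?> id="post-<?php the_ID(); ?>">
        <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
        <div class="entry"><?php the_content(); ?></div>
      </div>
    <?php endwhile; else : ?>
      <p>Sorry, no posts matched your criteria.</p>
      <?php get_search_form(); ?>
    <?php endif; ?>
    </div>
    <?php get_sidebar(); ?>
    <?php get_footer(); ?>

Because index.php sits at the bottom of the hierarchy, every view falls back to this file until more specific templates (archive.php, single.php, and so on) are added.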
Painters in LWUIT 1.1

Packt
25 Sep 2009
6 min read
The Painter interface

Painter defines the fundamental interface for all objects that are meant to draw backgrounds or to render on a glass pane. This interface declares only one method, public void paint(Graphics g, Rectangle rect), for drawing inside the bounding rectangle (specified by rect) of a component. The library provides a class that implements Painter and is used as a default background painter for widgets and containers. This is the BackgroundPainter class, which has (you guessed it) just the one method, paint, which either paints the background image if one has been assigned, or fills in the bounding rectangle of the component with the color set in its style. When we want to paint a background ourselves, we can write our own class that implements Painter, and set it as the background painter for the relevant component. The DemoPainter MIDlet, discussed in the next section, shows how this is done.

The DemoPainter application

This application creates a combo box and uses a theme to set the style for the various elements that are displayed. When the application is compiled without setting a custom background painter, the combo box looks as shown in the following screenshot.

The MIDlet code has the following statement commented out. When uncommented, this statement sets an instance of ComboBgPainter as the background painter for the combo box:

    combobox.getStyle().setBgPainter(new ComboBgPainter(0x4b338c));

The recompiled application produces the following display showing the new background color.

The class responsible for drawing the background is ComboBgPainter, which implements Painter. The constructor for this class takes the color to be used for background painting as its only parameter. The paint method determines the coordinates of the top-left corner of the rectangle to be painted, and its dimensions. The rectangle is then filled using the color that was set through the constructor.

    class ComboBgPainter implements Painter {
        private int bgcolor;

        public ComboBgPainter(int bgcolor) {
            this.bgcolor = bgcolor;
        }

        public void paint(Graphics g, Rectangle rect) {
            g.setColor(bgcolor);
            int x = rect.getX();
            int y = rect.getY();
            int wd = rect.getSize().getWidth();
            int ht = rect.getSize().getHeight();
            g.fillRect(x, y, wd, ht);
        }
    }

Drawing a multi-layered background

In actual practice, there is hardly any point in using a custom painter just to paint a background color, because the setBgColor method of Style will usually do the job. Themes too can be used for setting background colors. However, painters are very useful when intricate background patterns need to be drawn, and especially if multiple layers are involved. PainterChain, described in the next section, is a class designed for handling such requirements.

The PainterChain class

It is possible to use more than one painter to render different layers of a background. Such a set of painters can be chained together through the PainterChain class. The only constructor of this class has the form public PainterChain(Painter[] chain), where the parameter chain is an array of painters. The contents of chain will be called sequentially during the painting of a background, starting from the element at index 0 through to the last one. Two methods of the PainterChain class provide support for adding painters to the array underlying the chain: a new painter can be added either at the beginning (the prependPainter method) or at the end (the addPainter method) of the array. The array itself can be accessed through the getChain method.
PainterChain implements Painter so that the setBgPainter method can be used to set a PainterChain as well as a lone painter, which means the paint method is present here too. The function of paint in PainterChain is to call the paint methods of the painter array elements one by one, starting at index 0. The DemoPainterChain application that comes up next shows how a chain of painters can be used to draw the multiple layers of a background.

The DemoPainterChain application

The DemoPainterChain example uses alphaList to show a painter chain in action. After organizing the form and the list, we set up a painter array to hold the three painters that we shall deploy:

    Painter[] bgPainters = new Painter[3];

Once we have the array, we create three painters and load them into the array. The first (lowest) painter, which will fill the bounding rectangle for the list with a designated color, goes in at index 0. The next (middle) layer, at index 1, will draw an image at the center of the list. Finally, the topmost layer, for writing text a little below the center line of the list, is inserted at index 2.

    bgPainters[0] = new Eraser(0x334026);
    try {
        bgPainters[1] = new ImagePainter(Image.createImage("/a.png"));
    } catch (java.io.IOException ioe) {
    }
    bgPainters[2] = new TextPainter("This is third layer");

Now we are ready to instantiate a PainterChain object and install it as a background painter for the list:

    PainterChain bgChain = new PainterChain(bgPainters);
    alphaList.getStyle().setBgPainter(bgChain);

The list itself will be drawn on top of these three layers, and the background layers will be visible only because the list is translucent, as determined by the transparency value of 100 set by the AlphaListRenderer instance used to render alphaList. The list now looks as shown in the following screenshot.

A close inspection of the screenshot that we have just seen will show that the layers have indeed been drawn in the same sequence as we had intended. The three painters are very similar in structure to the ComboBgPainter class we came across in the previous example. The Eraser class here is virtually identical to ComboBgPainter. The other two classes work in the same way, except for the fact that TextPainter draws a line of text, while ImagePainter draws an image.

    class TextPainter implements Painter {
        private String text;

        TextPainter(String text) {
            // set the text to be written
            this.text = text;
        }

        public void paint(Graphics g, Rectangle rect) {
            // get the dimensions of the background
            int wd = rect.getSize().getWidth();
            int ht = rect.getSize().getHeight();
            // create and set the font for the text
            Font textFont = Font.createSystemFont(
                Font.FACE_PROPORTIONAL, Font.STYLE_BOLD, Font.SIZE_LARGE);
            g.setFont(textFont);
            // set the text color
            g.setColor(0x0000aa);
            // position the text slightly below the centerline
            int textX = wd/2 - textFont.stringWidth(text)/2;
            int textY = ht/2 - textFont.getHeight()/2 + 3;
            // write the text
            g.drawString(text, textX, textY);
        }
    }

    class ImagePainter implements Painter {
        private Image bImage;

        ImagePainter(Image bImage) {
            // set the image to be drawn
            this.bImage = bImage;
        }

        public void paint(Graphics g, Rectangle rect) {
            // get the dimensions of the background
            int wd = rect.getSize().getWidth();
            int ht = rect.getSize().getHeight();
            // position the image at the center
            int imageX = wd/2 - bImage.getWidth()/2;
            int imageY = ht/2 - bImage.getHeight()/2;
            // draw the image
            g.drawImage(bImage, imageX, imageY);
        }
    }

When an image is used on the background of a form, we have seen that it is scaled to occupy the entire form real estate.
But if the same image is used as an icon for a label, then it is drawn in its actual size. This task of scaling the image for backgrounds is taken care of by BackgroundPainter, which is used as the default bgPainter.
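As a final illustration of how little the single-method Painter contract demands, here is a sketch of a painter that fills a component's bounds with horizontal stripes. The class itself is just an illustration and not part of the LWUIT library, but it composes only the Graphics and Rectangle calls already seen in ComboBgPainter:

    class StripedBgPainter implements Painter {
        private int colorA;
        private int colorB;

        StripedBgPainter(int colorA, int colorB) {
            this.colorA = colorA;
            this.colorB = colorB;
        }

        public void paint(Graphics g, Rectangle rect) {
            int x = rect.getX();
            int y = rect.getY();
            int wd = rect.getSize().getWidth();
            int ht = rect.getSize().getHeight();
            int stripe = 8; // height of each stripe in pixels
            for (int row = 0; row < ht; row += stripe) {
                // alternate between the two colors, clipping the last stripe
                g.setColor(((row / stripe) % 2 == 0) ? colorA : colorB);
                g.fillRect(x, y + row, wd, Math.min(stripe, ht - row));
            }
        }
    }

Installing it follows the same pattern as before:

    alphaList.getStyle().setBgPainter(new StripedBgPainter(0x334026, 0x4b338c));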
AJAX / Dynamic Content and Interactive Forms

Packt
21 Oct 2009
7 min read
Essentially, AJAX is an acronym for Asynchronous JavaScript and XML, and it is the technique of using JavaScript and XML to send and receive data between a web browser and a web server. The biggest advantage this technique has is that you can dynamically update a piece of content on your web page or web form with data from the server (preferably formatted in XML), without forcing the entire page to reload. The implementation of this technique has made it obvious to many web developers that they can start making advanced web applications (sometimes called RIAs, or Rich Interface Applications) that work and feel more like software applications, instead of like web pages.

Keep in mind that the word AJAX is starting to have its own meaning (as you'll also note its occasional use here, as well as all over the web, as a proper noun rather than an all-cap acronym). For example, a Microsoft web developer may use VBScript instead of JavaScript to serve up Access database data that is transformed into JSON (not XML) using a .NET server-side script. Today, that guy's site would still be considered an AJAX site, rather than an AVAJ site (yep, AJAX just sounds cooler). In fact, it's getting to the point where just about anything on a website (that isn't in Flash) that slides, moves, fades, or pops up without rendering a new browser window is considered an 'Ajaxy' site. In truth, a large portion of these sites don't truly qualify as using AJAX; they're just using straight-up JavaScripting. Generally, if you use cool JavaScripts in your WordPress site, it will probably be considered 'Ajaxy', despite not being asynchronous or using any XML.

Want more info on this AJAX business? The w3schools site has an excellent introduction to AJAX, explaining it in straightforward, simple terms. They even have a couple of great tutorials that are fun and easy to accomplish, even if you only have a little HTML, JavaScript, and server-side script (PHP or ASP) experience (no XML experience required): http://w3schools.com/ajax/

You Still Want AJAX on Your Site?

OK! You're here and reading this article because you want AJAX in your WordPress site. I only ask that you take the points just discussed into consideration and do one or more of the following to prepare.

Help your client assess their site's target users first. If everyone is web 2.0 aware, using newer browsers, and fully mouse-able, then you'll have no problems; AJAX away. But if any of your users are inexperienced with RIA (Rich Interface Application) sites or have accessibility requirements, take some extra care. Again, it's not that you can't or shouldn't use AJAX techniques, just be sure to make allowances for these users. You can easily adjust your site's user expectations upfront by explaining how to expect the interface to act. You can also offer alternative solutions and themes for people with disabilities or browsers that can't accommodate the AJAX techniques.

Remember to check in with Don't Make Me Think, that Steve Krug book, for help with any interface usability questions you may run into. Also, if you're really interested in taking on some AJAX programming yourself, I highly recommend AJAX and PHP by Cristian Darie, Bogdan Brinzarea, Filip Chereches-Tosa, and Mihai Bucica. In it, you'll learn the ins and outs of AJAX development, including handling security issues. You'll also do some very cool stuff, like make your own Google-style auto-suggest form and a drag-and-drop sortable list (and that's just two of the many fun things to learn in the book).
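Before we get WordPress-specific, here is a minimal sketch of the raw technique described at the top of this article, using the browser's XMLHttpRequest object directly. The endpoint URL and element id are hypothetical, and the legacy ActiveX fallback that older versions of IE needed is omitted for brevity:

    // Fetch a fragment of content and swap it into the page without a reload.
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
      // readyState 4 = request complete; status 200 = OK
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('latest-comments').innerHTML = xhr.responseText;
      }
    };
    xhr.open('GET', 'latest-comments.php', true); // third argument: asynchronous
    xhr.send();

This is exactly the kind of plumbing that jQuery's $.ajax() and .load() methods wrap up for you, which is why jQuery-based enhancements are the path of least resistance in WordPress.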
So, that said, you're now all equally warned and armed with the knowledgeable resources I can think to throw at you. Let's get to it; how exactly do you go about getting something 'Ajaxy' into your WordPress site?

Plug-ins and Widgets

In these next few sections we're going to cover plug-ins and widgets. Plug-ins and widgets are not a part of your theme. They are additional files with WordPress-compatible PHP code that are installed separately into their own directories in your WordPress installation (again, not in your theme directory). Once installed, they are available to be used with any theme that is also installed in your WordPress installation. Even though plug-ins and widgets are not part of your theme, you might have to prepare your theme to be compatible with them. Let's review a bit about plug-ins and widgets first.

Plug-ins

WordPress has been built to be a lean, no-frills publishing platform. Its simplicity means that with a little coding and PHP know-how, you can easily expand WordPress's capabilities to tailor it to your site's specific needs. Plug-ins were developed so that, even without coding and PHP know-how, users could add extra features and functionality to their WordPress site painlessly, via the Administration Panel. These extra features can be just about anything, from enhancing the experience of your content and forms with AJAX, to adding self-updating 'listening/watching now' lists, Flickr feeds, Google Map info, and Events Calendars; you name it, someone has probably written a WordPress plug-in for it. Take a look at the WordPress Plug-in page to see what's available: http://wordpress.org/extend/plugins/

Widgets

Widgets are basically just another kind of plug-in! The widget plug-in was developed by AUTOMATTIC (http://automattic.com/code/widgets/), and it allows you to add many more kinds of self-updating content bits and other useful 'do-dads' to your WordPress site. Widgets are intended to be smaller and a little more contained than a full, stand-alone plug-in, and they usually display within the sidebar of your theme (or wherever you want; don't panic if you're designing a theme without a sidebar). If you're using WordPress version 2.2 and up, the widget plug-in has become a part of WordPress itself, so you no longer need to install it before installing widgets. Just look through the widget library on WordPress's widget blog and see what you'd like! (http://widgets.wordpress.com/)

Trying to download widgets, but the links keep taking you to plug-in download pages? You'll find that many WordPress widgets 'piggyback' on WordPress plug-ins, meaning you'll need the full plug-in installed in order for the widget to work, or the widget is an additional feature of the plug-in. So don't be confused when searching for widgets and all of a sudden you're directed to a plug-in page.

WordPress widgets are intended to perform much the same way Mac OS's Dashboard Widgets and Windows Vista Gadgets work. They're there to offer you a quick overview of content or data, and maybe let you access a small piece of often-used functionality from within a full application or website, without having to take the time to launch the application or navigate to the website directly. In a nutshell, widgets can be very powerful, but at the same time, just don't expect too much.

Getting Your Theme Ready for Plug-ins and Widgets

In this article, we'll take a look at what needs to be done to prepare your theme for plug-ins and widgets.
Plug-in Preparations

Most WordPress plug-ins can be installed and will work just fine with your theme, with no extra effort on your part. You'll generally upload the plug-in into your wp_content/plugins directory and activate it in your Administration Panel. Here are a few quick tips for getting a plug-in to display well in your theme:

- When getting ready to work with a plug-in, read all the documentation provided with the plug-in before installing it, and follow the developer's instructions for installing it (don't assume that just because you've installed one plug-in, they all get installed the same way).
- Occasionally, a developer may mention that the plug-in was made to work best with a specific theme, and/or the plug-in may generate content with XHTML markup containing a specific CSS id or class rule. In order to have maximum control over the plug-in's display, you might want to make sure your theme's stylesheet accommodates any id or class rules the plug-in outputs, as sketched below.
- If the developer mentions that the plug-in works with, say, the Kubrick theme, then when you install the plug-in, view it using the Kubrick theme (or any other theme they say it works with), so you can see how the plug-in author intended the plug-in to display and work within the theme. You'll then be able to duplicate the appropriate appearance in your theme.
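For example, if a plug-in's documentation says it wraps its output in a particular id and class, you might add rules like these to your theme's style.css so the generated markup inherits your design. The selector names below are placeholders, not taken from any real plug-in:

    /* Hypothetical selectors; substitute whatever the plug-in outputs. */
    #myplugin-box {
        margin: 10px 0;
        padding: 10px;
        border: 1px solid #ccc;
    }
    #myplugin-box .myplugin-item {
        list-style: none;
        padding: 2px 0;
    }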
The Magic

Packt
16 Dec 2013
7 min read
(For more resources related to this topic, see here.)

Application flow

In the following diagram, from the Angular manual, you will find a comprehensive schematic depiction of the program flow inside Angular:

After the browser loads the HTML and parses it into a DOM, the angular.js script file is loaded. This can be added before or at the bottom of the <body> tag, although adding it at the bottom is preferred. Angular waits for the browser to fire the DOMContentLoaded event. This is similar to the way jQuery is bootstrapped, as illustrated in the following code:

    $(document).ready(function(){
      // do jQuery
    })

In the angular.js file, towards the end, after the entire code has been parsed by the browser, you will find the following code:

    jqLite(document).ready(function() {
      angularInit(document, bootstrap);
    });

The preceding code calls the function that looks for the various flavors of the ng-app directive that you can use to bootstrap your Angular application:

    ['ng:app', 'ng-app', 'x-ng-app', 'data-ng-app']

Typically, the ng-app directive will be on the HTML tag, but in theory, it could be on any tag as long as there is only one of them. The module specification is optional and can tell the $injector service which of the defined modules to load:

    // index.html
    <!doctype html>
    <html lang="en" ng-app="tempApp">
    <head>
    ...

    // app.js
    ...
    angular.module('tempApp', ['serviceModule'])
    ...

In turn, the $injector service will create $rootScope, the parent of all Angular scopes, as the name suggests. This $rootScope is linked to the DOM itself as a parent to all other Angular scopes. The $injector service will also create the $compile service, which will traverse the DOM and look for directives. These directives are searched for within the complete list of declared Angular internal directives and custom directives at hand. This way, it can recognize directives declared as an element, as attributes, inside a class definition, or as a comment.

Now that Angular is properly bootstrapped, we can actually start executing some application code. This can be done in a variety of ways, shown as follows:

- In the initial examples, we started creating some Angular code with curly braces using some built-in Angular functions
- It is also possible to define a controller to control a specific part of the HTML page, as we have shown in the first tempCtrl code snippet
- We have also shown you how to use Angular's built-in router to manage your application using client-side routing

As you can see, Angular extends the capabilities of HTML by providing a clever way to add new directives. The key ingredient here is the $injector service, which provides a way to look up dependencies and create $rootScope.

Different ways of injecting

Let's look a bit more at how $injector does its work. Throughout all the examples in this book, we have used the array-style notation to define our controllers, modules, services, and directives:

    // app/controllers.js
    tempApp.controller('CurrentCtrl', ['$scope', 'reading',
      function ($scope, reading) {
        $scope.temp = 17;
        ...

This style is commonly referred to as annotation. Each injected value is annotated in the same order inside an array. You may have looked through the AngularJS website and seen different ways of defining functions:

    // AngularJS home page JavaScript Projects example
    function ListCtrl($scope, Project) {
      $scope.projects = Project.query();
    }

So, what is the difference, and why are we using another way of defining functions?
The first difference you may notice is the definition of the functions in the global scope. For reference, let's call this the simple injection method. The documentation states that this is a concise notation that is really only suited for demo applications, because it is nothing but a potential clash waiting to happen. Any other JS library or framework you may have included could potentially have a function with the same name and cause your software to malfunction by executing that function instead of yours.

After assigning the Angular module to a variable such as tempApp, we chain the methods to that variable, as we have done in this book so far; you could also just chain them directly, as follows:

    angular.module('tempApp').controller('CurrentCtrl', function($scope) {})

These are essentially the same definitions and don't cause pollution in the global scope.

The second difference you may have noticed is in the way the dependencies are injected into the function. At the time of writing this book, most, if not all, of the examples on the AngularJS website use the simple injection method. The dependencies are just parameters in the function definitions. Magically, Angular is able to figure out which parameter is what by name, so the order does not matter. The preceding example could therefore be rewritten as follows, and it would still function correctly:

    // reversed AngularJS home page JavaScript Projects example
    function ListCtrl(Project, $scope) {
      $scope.projects = Project.query();
    }

This is not a feature of the JavaScript language, so it must have been added by those smart Angular engineers. The magic behind this can be found in the injector. The parameters of the function are scanned, and Angular extracts their names to be able to resolve them. The problem with this approach is that when you deploy a wonderful new application to production, it will probably be minified and even obfuscated. This will rename $scope and Project to something like a and b. Even Angular will then be unable to resolve the dependencies.

There are two ways to solve this problem in Angular. You have seen one of them already, but we will explain it further. You can wrap the function in an array and type the names of the dependencies as strings before the function definition, in the order in which you supplied them as arguments to the function:

    // app/controllers.js
    tempApp.controller('CurrentCtrl', ['$scope', 'reading',
      function ($scope, reading) {
        $scope.temp = 17;
        ...

The corresponding order of the strings and the function arguments is significant here. Also, the strings should appear before the function arguments.

If you prefer the definition without the array notation, there is still some hope. Angular provides a way to inform the injector service of the dependencies you are trying to inject:

    var CurrentCtrl = function($scope, reading) {
      $scope.temp = 17;
      $scope.save = function() {
        reading.save($scope.temp);
      }
    };
    CurrentCtrl.$inject = ['$scope', 'reading'];
    tempApp.controller('CurrentCtrl', CurrentCtrl);

As you can see, the definition is a bit more sizable, but essentially the same thing is happening here. The injector is informed by filling the $inject property of the function with an array of the injected dependencies. This is where Angular will then pick them up from. To understand how Angular accomplishes all of this, you should read the excellent blog post by Alex Rothenberg, in which he explains how all of this works internally.
The link to his blog is as follows: http://www.alexrothenberg.com/2013/02/11/the-magic-behind-angularjs-dependency-injection.html. Angular cleverly uses the toString() function of objects to be able to examine in which order the arguments were specified and what their names are.

There is actually a third way to specify dependencies, called ngmin, which is not native to Angular. It lets you use the simple injection method, and it parses and translates your code to avoid minification problems: https://github.com/btford/ngmin

Consider the following code:

    angular.module('whatever').controller('MyCtrl',
      function ($scope, $http) { ... });

ngmin will turn the preceding code into the following:

    angular.module('whatever').controller('MyCtrl', ['$scope', '$http',
      function ($scope, $http) { ... }]);

Summary

In this article, we started by looking at how AngularJS is bootstrapped. Then, we looked at how the injector works and why minification might ruin your plans there. We also saw that there are ways to avoid these problems by specifying dependencies differently.

Resources for Article:

Further resources on this subject:
- The Need for Directives [Article]
- Understanding Backbone [Article]
- Quick start – creating your first template [Article]
Learning to Fly with Force.com

Packt
17 Apr 2013
20 min read
(For more resources related to this topic, see here.)

What is cloud computing?

If you have been in the IT industry for some time, you probably know what cloud means. For the rest, it is used as a metaphor for the worldwide network or the Internet. Computing normally indicates the use of computer hardware and software. Combining these two terms, we get a simple definition: use of computer resources over the Internet (as a service). In other words, when computing is delegated to resources available over the Internet, we get what is called cloud computing. As Wikipedia defines it:

Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).

Still confused? A simple example will help clarify it. Say you are managing the IT department of an organization, where you are responsible for purchasing hardware and software (licenses) for your employees and making sure they have the right resources to do their jobs. Whenever there is a new hire, you need to go through all the purchase formalities once again to get your user the necessary resources. Soon this turns out to be a nightmare of managing all your software licenses! Now, what if you could find an alternative where you host an application on the Web, which your users can access through their browsers and interact with? You are freed from maintaining individual licenses and maintaining high-end hardware on the user machines. Voila, we just discovered cloud computing!

Cloud computing is the logical conclusion drawn from observing the drawbacks of in-house solutions. The trend is now picking up and is quickly replacing the on-premise software application delivery models that are accompanied by the high costs of managing data centers, hardware, and software. All users pay for is the quantum of the services they use. That is why it's sometimes also known as utility-based computing, as the corresponding payment is resource-usage based.

Chances are that even before you ever heard of this term, you had been using it unknowingly. Have you ever used hosted e-mail services such as Yahoo, Hotmail, or Gmail, where you accessed all of their services through the browser instead of an e-mail client on your computer? That is a typical example of cloud computing. Anything that is offered as a service (aaS) is usually considered in the realm of cloud computing. Everything in the cloud means no hardware and no software, so no maintenance, and that is the biggest advantage. The different types of services that are most prominently delivered on the cloud are as follows:

- Infrastructure as a service (IaaS)
- Platform as a service (PaaS)
- Software as a service (SaaS)

Infrastructure as a service (IaaS)

Sometimes referred to as hardware as a service, infrastructure as a service offers the IT infrastructure, which includes servers, routers, storage, firewalls, computing resources, and so on, in physical or virtualized form, as a service. Users can subscribe to these services and pay on the basis of need and usage. The key player in this domain is Amazon.com, with EC2 and S3 as examples of typical IaaS. Elastic Compute Cloud (EC2) is a web service that provides resizable computing capacity in the cloud. Computing resources can be scaled up or down within minutes, allowing users to pay for the actual capacity being used.
Similarly, S3 is an online storage web service offered by Amazon, which provides 99.999999999 percent durability and 99.99 percent availability of objects over a given year, and stores arbitrary objects (computer files) up to 5 terabytes in size!

Platform as a service (PaaS)

PaaS provides the infrastructure for the development of software applications. Accessed over the cloud, it sits between IaaS and SaaS, where it hides the complexities of dealing with the underlying hardware and software. It is an application-centric approach that allows developers to focus more on business applications rather than infrastructure-level issues. Developers no longer have to worry about server upgrades, scalability, load balancing, service availability, and other infrastructure hassles, as these are delegated to the platform vendors. PaaS allows the development of custom applications by providing the appropriate building blocks and the necessary infrastructure available as a service.

An excellent example in this category is the Force.com platform, which is a game changer in the aaS, and especially the PaaS, domain. It exposes a proprietary application development platform, which is woven around a relational database. It stands at a higher level than another key player in this domain, Google App Engine, which supports scalable web application development in Java and Python on the appropriate application server stack, but does not provide equivalently robust proprietary components or building blocks as Force.com does. Another popular choice (or perhaps not) is Microsoft's application platform, called Windows Azure, which can be used to build websites (developed in ASP.NET, PHP, or Node.js), provision virtual machines, and provide cloud services (containers of hosted applications).

A limitation of applications built on these platforms is the quota limits, or the strategy to prohibit the monopolization of shared resources in the multitenant environment. Some developers see this as a restriction that only allows them to build applications with limited capability, but we reckon it is an opportunity to build highly efficient solutions that work within governor limits while still maintaining business process sanctity. Specifically for the Force.com platform, some people consider the shortage of skilled resources a possible limitation, but we think the learning curve is steep on this platform, and an experienced resource can pick up the proprietary languages pretty quickly, with an average ramp-up time spanning anywhere from 15 to 30 days!

Software as a service (SaaS)

At the opposite end from IaaS is SaaS. Business applications are offered as services over the Internet to users who don't have to go through the complex custom application development and implementation cycles. They also don't invest upfront in IT infrastructure or maintain their software with regular upgrades. All this is taken care of by the SaaS vendors. These business applications normally provide customization capabilities to accommodate specific business needs, such as user interfaces, business workflows, and so on. Some good examples in this category are the Salesforce.com CRM system and Google Apps services.

What is Force.com?

Force.com is a natural progression from Salesforce.com, which started as a sales force automation system offered as a service (SaaS). The need to go beyond the initially offered customizable CRM application and develop custom-based solutions resulted in a radical shift of the cloud delivery model from SaaS to PaaS.
The technology that powers Salesforce CRM, whose design fulfills all the prerequisites of being a cloud application, is now available for developing enterprise-level applications. An independent study of the Force.com platform concluded that, compared to the traditional Java-based application development platform, development with the Force.com platform is almost five times faster, with about a 40 percent smaller overall project cost and better quality, due to rapid prototyping during requirement gathering (thanks to the declarative aspect of Force.com development) and less testing due to proven code reuse.

What empowers Force.com?

Why is Force.com application development so successful? Primarily because of its key architectural features, discussed in the following sections.

Multitenancy

Multitenancy is a concept that is the opposite of single-tenancy. In cloud computing jargon, a customer or an organization is referred to as a tenant. The multitenant model overcomes the various downsides and cost inefficiencies of single-tenant models. A multitenant application caters to multiple organizations, each working in its own isolated virtual environment called an org, and sharing a single physical instance and version of the application hosted on the Force.com infrastructure. It is isolated because although the infrastructure is shared, every customer's data, customizations, and code remain secure and insulated from other customers.

Multitenant applications run on a single physical instance and version of the application, providing the same robust infrastructure to all their customers. This also means freedom from upfront costs, ongoing upgrades, and maintenance costs. The test methods written by customers on their respective orgs ensure more than 75 percent code coverage, and thus help Salesforce.com in the regression testing of Force.com upgrades, releases, and patches. The same is difficult to even visualize with in-house software application development.

Metadata

What drives the multitenant applications on Force.com? Nothing else but the metadata-driven architecture of the platform! Think about the following:

- The platform allows all tenants to coexist at the same time
- Tenants can extend the standard common object model without affecting others
- Tenants' data is kept isolated from others in a shared database
- The platform customizes the interface and business logic without disrupting the services for others
- The platform's codebase can be upgraded to offer new features without affecting the tenants' customizations
- The platform scales up with rising demands and new customers

To meet all the listed challenges, Force.com has been built upon a metadata-driven architecture, where the runtime engine generates application components from metadata. All customizations to the standard platform for each tenant are stored in the form of metadata, thus keeping the core Force.com application and the client customizations distinctly separate and making it possible to upgrade the core without affecting the metadata. The core Force.com application comprises the application data and the metadata describing the base application, thus forming three layers sitting on top of each other in a common database, with the runtime engine interpreting all of these and rendering the final output in the client browser.
As metadata is a virtual representation of the application components and customizations of the standard platform, the statically compiled Force.com runtime engine is highly optimized for dynamic metadata access, and uses advanced caching techniques to produce remarkable application response times.

Understanding the Force.com stack

A white paper giving an excellent explanation of the Force.com stack has been published. It describes the various layers of technologies and services that make up the platform. We will also cover it here briefly. The application stack is shown in the following diagram:

Infrastructure as a service

Infrastructure is the first layer of the stack, on top of which the other services function. It acts as the foundation for securely and reliably delivering the cloud applications developed by customers as well as the core Salesforce CRM applications. It powers more than 200 million transactions per day for more than 1.5 million subscribers. The highly managed data centers provide unparalleled redundancy with near-real-time replication, world-class security at the physical, network, host, data transmission, and database levels, and an excellent design to scale both vertically and horizontally.

Database as a service

The powerful and reliable data persistence layer in the Force.com stack is known as the Force.com database. It sits on top of the infrastructure and provides the majority of the Force.com platform's capabilities. The declarative web interface allows users to create objects and fields, generating the native application UI around them. Users can also define relationships between objects, create validation rules to ensure data integrity, track history on certain fields, create formula fields to logically derive new data values, and create fine-grained security access with point-and-click operations, all without writing a single line of code or even worrying about database backup, tuning, upgrade, and scalability issues!

Compared with a relational database, it is similar in the sense that an object (a data instance) and its fields are analogous to tables and columns, and Force.com relationships are similar to the referential integrity constraints in a relational DB. But unlike physically separate tables with dedicated storage, Force.com objects are maintained as a set of metadata interpreted on the fly by the runtime engine, and all of the application data is stored in a set of a few large database tables. This data is represented as virtual records based on the interpretation of the tenants' customizations stored as metadata.

Integration as a service

Integration as a service utilizes the underlying Force.com database layer and provides the platform's integration capabilities through the open-standards-based web services API. In today's world, most organizations have their applications developed on disparate platforms, which have to work in conjunction to correctly represent and support their internal business processes. Customers' existing applications can connect with Force.com through the SOAP or REST web services to access data and create mashups that combine data from multiple sources. The Force.com platform also allows native applications to integrate with third-party web services through callouts, to include information from external systems in an organization's business processes.
These integration capabilities of the platform, exposed through APIs (for example, the Bulk API, Chatter API, Metadata API, Apex REST API, Apex SOAP API, Streaming API, and so on), can be used by developers to build custom integration solutions that both produce and consume web services. Accordingly, they have been leveraged by many third parties, such as Informatica, Cast Iron, and Talend, to create prepackaged connectors for applications and systems such as Outlook, Lotus Notes, SAP, and Oracle Financials. They also allow clouds such as Facebook, Google, and Amazon to talk to each other and build useful mashups. This integration ability is also the key to developing mobile applications for various device platforms, which rely solely on the web services exposed by the Force.com platform.

Logic as a service

A development platform has to have the capability to create business processes involving complex logic. The Force.com platform greatly simplifies this task of automating a company's business processes and requirements. The platform's logic features can be utilized by both developers and business analysts to build smart database applications that help increase user productivity, improve data quality, automate manual processes, and adapt quickly to changing requirements. The platform allows creating business logic either through a declarative interface, in the form of workflow rules, approval processes, required and unique fields, formula fields, and validation rules, or in an advanced form by writing triggers and classes in the platform's programming language, Apex, to achieve greater levels of flexibility, which help define any kind of functionality and business requirement that may not otherwise be possible through point-and-click operations.

User interface as a service

The user interface of platform applications can be created and customized by either of two approaches. The Force.com builder application, an interface based on point-and-click and drag-and-drop operations, allows users to build page layouts that are interpreted from the data model and validation rules with user-defined customizations, define custom application components, create application navigation structures through tabs, and define customizable reports and user-specific views. For more complex pages and tighter control over the presentation layer, the platform allows users to build custom user interfaces through a technology called Visualforce (VF), which is based on XML markup tags. Custom VF pages may or may not adopt the standard look and feel, depending on the stylesheet applied, and present data returned from the controller, or logic layer, in a structured format. Visualforce interfaces are either public, private, or a mix of the two. Private interfaces require users to log in to the system before they can access resources, whereas public interfaces, called sites, can be made available on the Internet to anonymous users.

Development as a service

This is a set of features that allow developers to utilize traditional practices for building cloud applications.
These features include the following:

- Force.com Metadata API: Lets developers push changes directly into the XML files describing the organization's customizations, and acts as an alternative to the platform's web interface for managing applications
- IDE (Integrated Development Environment): A powerful client application built on the Eclipse platform, allowing programmers to code, compile, test, package, and deploy applications
- A development sandbox: A separate application environment for development, quality assurance, and training of programmers
- Code Share: A service for users around the globe to collaborate on the development, testing, and deployment of cloud applications

Force.com also allows online, browser-based development providing code-assist functionality, repository search, debugging, and so on, thus eliminating the need for a local, machine-specific IDE. DaaS expands the cloud computing development process to include external tools such as integrated development environments, source control systems, and batch scripts to facilitate development and deployment.

Force.com AppExchange

This is a cloud marketplace (accessible at http://appexchange.salesforce.com/) that helps commercial application vendors publish their custom-developed applications as packages and then reach out to potential customers, who can install them on their orgs with merely a button click through the web interface, without going through the hassles of software installation and configuration. Here, you may find good apps that provide functionality not available in Salesforce, or that would otherwise require some heavy-duty custom development if carried out on-premises!

Introduction to governor limits

Any introduction to Force.com is incomplete without a mention of governor limits. By nature, all applications based on multitenant architecture, such as Force.com, have to have a mechanism that does not allow code to abuse the shared resources, so that other tenants in the infrastructure remain unaffected. In the Force.com world, it is the Apex runtime engine that takes care of such malicious code by enforcing runtime limits (called governor limits) in almost all areas of programming on the Force.com platform. If these governor limits were not in place, even the simplest code, such as an endless loop, would consume enough resources to disrupt the service to the other users of the system, as they all share the same physical infrastructure. The concept of governor limits is not limited to Force.com, but extends to all SaaS/PaaS applications, such as Google App Engine, and is critical for making a cloud-based development platform stable.

This concept may prove to be very painful for some people, but there is a key logic to it. The platform enforces best practices so that the application is practically usable and makes optimal usage of resources, keeping the code well under governor limits. The longer you work on Force.com, the more familiar you become with these limits, the more stable your code becomes over time, and the easier it becomes to work around them. In one of the forthcoming chapters, we will discover how to work with these governor limits and not against them, and also talk about ways to work around them, if required.

Salesforce environments

An environment is a set of resources, physical or logical, that lets users build, test, deploy, and use applications.
Salesforce environments

An environment is a set of resources, physical or logical, that lets users build, test, deploy, and use applications. In the traditional development model, one would expect to have application servers, web servers, databases, and their costly provisioning and configuration. In the Force.com paradigm, however, all that's needed is a computer and an Internet connection to immediately get started building and testing a SaaS application. An environment, a virtual or logical instance of the Force.com infrastructure and platform, is also called an organization, or just an org, and is provisioned in the cloud on demand. It has the following characteristics:

Used for development, testing, and/or production
Contains data and customizations
Based on an edition containing specific functionality, objects, storage, and limits
Certain restricted functionalities, such as the multicurrency feature (which is not available by default), can be enabled on demand
All environments are accessible through a web browser

There are broadly three types of environments available for developing, testing, and deploying applications:

Production environments: The Salesforce.com environments that have active paying users accessing business-critical data.

Development environments: These environments are used strictly for developing and testing applications, with data that is not business critical, without affecting a production environment. Developer environments are of two types:

Developer Edition: This is a free, full-featured copy of the Enterprise Edition, with less storage and fewer users. It allows users to create packaged applications suitable for any Salesforce production environment. It can be of two types:

Regular Developer Edition: This is a regular DE org; sign-up is free, and a user can register any number of DE orgs. This is suitable when you want to develop managed packages for distribution through AppExchange or Trialforce, when you are working with an edition where a sandbox is not available, or if you just want to explore the Force.com platform for free.

Partner Developer Edition: This is a regular DE org, but with more storage, features, and licenses. This is suitable when you expect a larger team that needs a bigger environment to test the application against a larger, real-life dataset. Note that this org can only be created by Salesforce Consulting partners or Force.com ISVs.

Sandbox: This is a nearly identical copy of the production environment, available to Enterprise or Unlimited Edition customers, and can contain data and/or customizations. This is suitable when developing applications for production environments only, with no plans to distribute applications commercially through AppExchange or Trialforce, or if you want to test beta-managed packages. Note that sandboxes are completely isolated from your Salesforce production organization, so operations you perform in your sandboxes do not affect your Salesforce production organization, and vice versa. The types of sandboxes are as follows:

Full copy sandbox: A nearly identical copy of the production environment, including data and customizations
Configuration-only sandbox: Contains only configurations, not data, from the production environment
Developer sandbox: Same as a Configuration-only sandbox, but with less storage

Test environments: These can be either production or developer environments, used specifically for testing application functionality before deploying to production or releasing to customers. These environments are suitable when you want to test applications in production-like environments with more users and storage to run real-life tests.
Summary

This article talked about the basic concepts of cloud computing. The key takeaways from this article are the explanations of the different types of cloud-based services, such as IaaS, SaaS, and PaaS. We introduced the Force.com platform and the key architectural features that power it, such as multitenancy and metadata. We briefly covered the application stack (the technology and services layers) that makes up the Force.com platform. We gave an overview of governor limits without going into too much detail about their use. We discussed situations where adopting cloud computing may be beneficial. We also discussed the guidelines that help you decide whether or not your software project should be developed on the Force.com platform. Last, but not least, we discussed the various environments available to developers and business users, and their characteristics and usage.

Further resources on this subject:

Monitoring and Responding to Windows Intune Alerts [Article]
Sharing a Mind Map: Using the Best of Mobile and Web Features [Article]
Force.com: Data Management [Article]
Drupal Theming

Packt
14 Sep 2010
9 min read
(For more resources on Drupal, see here.)

Themes

The use of themes makes Drupal exceptionally flexible when it comes to working with the site's interface. Because the functionality of the site is by and large decoupled from its presentation, it is quite easy to chop and change the look without having to worry about affecting the functionality. This is obviously a very useful feature, because it frees you up to experiment, knowing that if worst comes to worst, you can reset the default settings and start from scratch.

You can think of a theme as a mask for your site that can be modified in order to achieve virtually any design criteria. Of course, different themes have widely varying attributes, so it is important to find the theme that most closely resembles what you are looking for, in order to reduce the amount of work needed to match it to your envisaged design. It is also important to understand that not all downloadable themes are of the same quality. Some are designed better than others. This article utilizes Zen, which is one of the cleanest and most flexible themes around.

Different themes are implemented differently. Some themes use fixed layouts with tables (avoid these, because web design should not rely on tables), while others use div tags and CSS (favor these, as they are far more flexible and powerful). You should play around with a variety of themes in order to familiarize yourself with a few different ways of creating a web page. As mentioned, we only have space to cover Zen here, but the lessons learned are easily transferred to other themes with a bit of time and practice. Before we go ahead and look at an actual example, it is important to get an overview of how themes are put together in general.

Theme anatomy

Drupal themes consist of a set of files that define and control the features of Drupal's web pages (ranging from what functionality to include within a page to how individual page elements will be presented) using PHP, HTML, CSS, and images. Different Drupal 7 template files control different regions of a page, as shown in the following diagram:

Looking at how theme files are set up within Drupal hints at the overall process and structure of that theme. Bear in mind that there are several ways to create a working theme, and not all themes make use of template files. However, in the case of Drupal's default theme setup, we have the following:

The left-hand column shows the folders contained within the themes directory. There are a number of standard themes, accompanied by the engines folder, which houses a phptemplate.engine file to handle the integration of templates into Drupal's theming system. Looking at the files present in the garland folder, notice that there are a number of PHP template files suffixed by tpl.php. These files make use of HTML and PHP code to modify Drupal's appearance. The default versions of these files, which are the ones that would be used in the event a theme has not implemented its own, can be found in the relevant modules directory. For example, the default comment.tpl.php file is found in modules | comment, and the default page.tpl.php file is located, along with others, in the modules | system folder.

Each template file focuses on its specific page element or page, with the noted exception of template.php, which is used to override non-standard theme functions—that is, not block, box, comment, node, or page. The themes folder also houses the stylesheets along with images and, in the case of the default theme, colors. Of special interest is the .info file, which contains information about the theme that allows Drupal to find and set a host of different parameters. A theme's .info file holds the basic information about a theme that Drupal needs to know, namely, its name, description, features, template regions, CSS files, and JavaScript. Here's Garland's .info file:

```
; $Id: garland.info,v 1.9 2009/12/01 15:57:40 webchick Exp $
name = Garland
description = A multi-column theme which can be configured to modify colors and switch between fixed and fluid width layouts.
package = Core
version = VERSION
core = 7.x
engine = phptemplate
stylesheets[all][] = style.css
stylesheets[print][] = print.css
settings[garland_width] = fluid

; Information added by drupal.org packaging script on 2010-05-23
version = "7.0-alpha5"
project = "drupal"
datestamp = "1274628610"
```

Note that this file holds, amongst other things:

Name: A human-readable theme name
Description: A description of the theme
Core: The major version of Drupal that the theme is compatible with
Stylesheets: Stipulate which stylesheets are to be used by the theme

These are not the only types of information that can be held by .info files. As we'll see a bit later on, when it's time to add scripts to a theme, they can be added to the .info file too. To quickly see one way in which .info files can be put to work, look closely at the .info file in the update_test_subtheme theme folder in tests (below garland):

```
; $Id: update_test_subtheme.info,v 1.1 2009/10/08 15:40:34 dries Exp $
name = Update test subtheme
description = Test theme which uses update_test_basetheme as the base theme.
core = 7.x
engine = phptemplate
base theme = update_test_basetheme
hidden = TRUE

; Information added by drupal.org packaging script on 2010-05-23
version = "7.0-alpha5"
project = "drupal"
datestamp = "1274628610"
```

Notice that this contains a base theme directive that is used to specify the parent, or base, theme. A sub-theme shares its parent's code, but modifies parts of it to produce a new look, new functionality, or both. Drupal allows us to create new sub-themes by creating a new folder within the themes directory and specifying the base theme directive in the new theme's .info file—just as we saw in update_test_subtheme.

In a nutshell, Drupal provides a range of default themeable functions that expose Drupal's underlying data, such as content and information about that content. Themes can pick and choose which snippets of rendered content they want to override—the most popular method being the use of PHP template files in conjunction with stylesheets and a .info file. Themes and sub-themes are easily created and modified, provided that you have some knowledge of CSS and HTML—PHP helps if you want to do something more complicated. I should make it clear that this system makes building a new theme fairly easy, provided one knows a bit about PHP. Here's the process:

Create a new themes folder in the sites | all folder, and add your new theme folder in there; call it whatever you want (provided it is a unique name)
Copy the default template files (or files from any other theme you want to modify) across to the new theme directory, along with any other files that are applicable (such as CSS files)
Rewrite the .info file to reflect the attributes and requirements of the new theme, including specifying the base theme directive (see the example after this list)
Modify the layout (this is where your PHP and HTML skills come in handy) and add some flavor with your own stylesheet (included in the new theme through its .info file)
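For instance, a minimal .info file for a hypothetical Zen sub-theme might look like the following; the theme name mytheme and the stylesheet path are invented for illustration:

```
; A hypothetical sub-theme of Zen, saved as
; sites/all/themes/mytheme/mytheme.info
name = My Theme
description = A custom sub-theme based on Zen.
core = 7.x
engine = phptemplate
base theme = zen
stylesheets[all][] = css/mytheme.css
```

With just this file and a stylesheet in place, the sub-theme inherits the rest of its behavior from Zen.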
Of special interest is the .info file that contains information about the theme to allow Drupal to find and set a host of different parameters. A theme's .info file holds the basic information about a theme that Drupal needs to know, namely, its name, description, features, template regions, CSS files, and JavaScript. Here's Garland's .info file: ; $Id: garland.info,v 1.9 2009/12/01 15:57:40 webchick Exp $name = Garlanddescription = A multi-column theme which can be configured to modifycolors and switch between fixed and fluid width layouts.package = Coreversion = VERSIONcore = 7.xengine = phptemplatestylesheets[all][] = style.cssstylesheets[print][] = print.csssettings[garland_width] = fluid; Information added by drupal.org packaging script on 2010-05-23version = "7.0-alpha5"project = "drupal"datestamp = "1274628610" Note that this file holds, amongst other things: Name—A human-readable theme name Description—A description of the theme Core—The major version of Drupal that the theme is compatible with Stylesheets—Stipulate which stylesheets are to be used by the theme These are not the only types of information that can be held by .info files. As we'll see a bit later on, when it's time to add scripts to a theme, they can be added to the .info file too. To quickly see one way in which .info files can be put to work, look closely at the .info file in the update_test_subtheme theme folder in tests (Below garland): ; $Id: update_test_subtheme.info,v 1.1 2009/10/08 15:40:34 dries Exp $name = Update test subthemedescription = Test theme which uses update_test_basetheme as the basetheme.core = 7.xengine = phptemplatebase theme = update_test_basethemehidden = TRUE; Information added by drupal.org packaging script on 2010-05-23version = "7.0-alpha5"project = "drupal"datestamp = "1274628610" Notice that this contains a base theme directive that is used to specify the parent, or base, theme. A sub-theme shares its parents' code, but modifies parts of it to produce a new look, new functionality, or both. Drupal allows us to create new sub-themes by creating a new folder within the themes directory and specifying the base theme directive in the new theme's .info file—just as we saw in update_test_subtheme. In a nutshell, Drupal provides a range of default themeable functions that expose Drupal's underlying data: such as content and information about that content. Themes can pick and choose which snippets of rendered content they want to override—the most popular method being through the use of PHP template files in conjunction with stylesheets and a .info file. Themes and sub-themes are easily created and modified, provided that you have some knowledge of CSS and HTML—PHP helps if you want to do something more complicated. I should make it clear that this system makes building a new theme fairly easy, provided one knows a bit about PHP. 
Here's the process: Create a new themes folder in the sites | all folder, and add your new theme folder in there—call it whatever you want (provided it is a unique name) Copy the default template files (or files from any other theme you want to modify) across to the new theme directory, along with any other files that are applicable (such as CSS files) Rewrite the .info file to reflect the attributes and requirements of the new theme, including specifying the base theme directive Modify the layout (this is where your PHP and HTML skills come in handy) and add some flavor with your own stylesheet (included into the new theme through the .info file) Before moving on, there's one small issue of practicality that must be addressed. When it is time for you to begin doing a bit of theme development, bear in mind that there are many types of browser and not all of them are created equal. What this means is that a page that is rendered nicely on one browser might look bad, or worse, not even function properly on another. For this reason, you should: Test your site using several different browsers. The Drupal help site has this to say about browsers: It is recommended you use the Firefox browser with a developer toolbar and view the formatted sources' extensions. I wholeheartedly agree. You can obtain a copy of the Firefox browser at www.mozilla.com/firefox. Firefox should also be extended with Firebug, which is an extremely useful tool for client-side web debugging: https://addons.mozilla.org/en-US/firefox/addon/1843/. Choosing a base theme As discussed, Drupal ships with a few default themes, and there are quite a few more available in the Downloads section of the Drupal site. Looking at how Drupal presents its core Themes page under Appearance in the toolbar menu, we can see the following: Any new themes that are added to the site will be used to enable, disable, configure, or set as a default from this page. Be aware that some themes might not implement functionality that is important to your site. Ensure that you test each theme thoroughly before allowing users to select it. Enabling the Stark theme, and setting it as the default theme, causes the site, which has been presented in the standard Garland theme up until now, to look something like this: This is a vast change from the previous look. Notice too that the entire layout of the site has changed—there are no well defined columns, no visually defined header section, and so on. In addition, the previous fonts and colors have also been demolished. Take the time to view each theme that is available by default in order to get a feel for how different themes can produce wildly differing looks and layouts. That is not the end of the story, because the Drupal site also has a whole bunch of themes for us to explore. So head on over to the themes page at http://drupal.org/project/themes and select the relevant version tab to bring up the themes that are available. You have already seen how to download and install other modules, and the process for installing themes is no different—download and extract the contents of the desired theme to the themes folder in sites | default or sites | all. 
For example, the Zen theme was downloaded and extracted, and it provides us with a new option in the list of themes (some downloads will provide a number of sub-themes too), as shown in the following screenshot:

Enabling it and setting it as the default causes the site to look like the next screenshot:

Notice that while the color scheme is effectively non-existent, the page has retained its overall structure, in that it has defined sidebars, a header region, and a central content region. Before we begin customizing this, let's take a look at the configuration settings for this theme.
Roles In Alfresco 1.4

Packt
19 Mar 2010
9 min read
The article explains the basics involved in understanding Alfresco authorization and the means to extend its functionality, for example, to adapt it to any special requirements through configuration files. The concepts explained here will be useful for anyone who has started working with Alfresco code. In addition, a little step-by-step example towards the end helps you extend the initial Alfresco roles. Read more in this article written by Alfonso Martin.

Once you have started with Alfresco, you will realize that you may need to create or expand the default roles included with Alfresco. This task (at this moment) must be done manually through configuration files. Before diving into the Alfresco.war file to search for the property configuration files, you will need to understand a few concepts. This article will introduce you to the basics of Alfresco's role policy.

The basic concept in action authorization in Alfresco is that of permission. Permissions dictate when an action can be executed on an object by a user. The key concepts in defining an authorization policy in Alfresco are:

PermissionSets
PermissionGroups
Permissions
GlobalPermissions

These elements are declared in the permissionDefinitions.xml file, located in WEB-INF/classes/alfresco/model inside the Alfresco.war file.

PermissionSet

Collections of permission groups, permissions, and dynamic authorities with common attributes:

type: (Mandatory) A PermissionSet is defined over a type or aspect. If it is defined over a type, it is applicable to all objects of that type; if it is defined over an aspect, it is applicable to all objects that carry that aspect.
expose: (Optional) This attribute restricts the domain of type. If the value is all, the PermissionSet may also be applied to objects that have a parent satisfying the type attribute. If the value is selected, the set is only applied to objects that satisfy the type attribute themselves. all is assumed if the attribute is undefined.

PermissionGroup

A PermissionGroup defines its behaviour through several attributes:

name: (Mandatory) The group identifier.
type: (Optional) Same as in a PermissionSet.
extends: Determines whether the permissionGroup extends the definition of a previous group with the same name. If undefined, the default value is false.
expose: Restricts the domain of type; if true, the permissionGroup will be applied to objects (and derivatives) that satisfy the attribute type. If undefined, false is assumed.
allowFullControl: If true, the group has all privileges; otherwise the privileges have to be assigned.
requiresType: If false, the permissionGroup can be applied to any object, otherwise only to objects that satisfy type. If undefined, the default value is true.

A PermissionGroup can be composed of several permissionGroups, whose subgroups are identified by name and type. If type is omitted, the parent's type will be used.
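To make these attributes concrete, here is a small hypothetical fragment in the style of permissionDefinitions.xml; the Reviewer group name is invented for illustration, while Read is one of Alfresco's standard low-level permission groups:

```
<permissionSet type="cm:content" expose="selected">
   <!-- A group applicable to content nodes that only aggregates read access -->
   <permissionGroup name="Reviewer" expose="true" allowFullControl="false">
      <includePermissionGroup type="sys:base" permissionGroup="Read"/>
   </permissionGroup>
</permissionSet>
```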
Permission

A permission is the minimal unit used to represent the authorization of an action on an object. It is defined by:

name: The permission identifier.
expose: If true, the permission can be applied to derivative objects, otherwise only to objects defined in a permissionGroup context. The default value is false.
requiresType: If true, a permission can only be applied to objects defined by its group context. If omitted, the default is false.

A permission can require the authorization of other permissions; such required permissions are identified by:

name
type
on: Specifies where the permission is required—node, parent, or children.
implies: If true, the required permission grants itself the parent's permission. There can be only one required permission with the implies attribute equal to true. If omitted, the default value is false.

Inside the permission are declarations of the permissionGroups that contain it. These declarations must reference existing permissionGroups, and can optionally specify the node types to which the permission will be applied; if omitted, the permission is applied to all nodes.

GlobalPermission

These kinds of permissions are defined outside a permissionSet and are applied to all nodes in the hierarchy (irrespective of their types and related aspects). This kind of permission prevails over the other permission types.

Roles

Roles in Alfresco are a special type of permission group; concretely, they are permissionGroups defined in permissionSets applied to nodes of type content. To use these roles in the Alfresco GUI, it is necessary to create appropriate tags in the internationalization files, more specifically in webclient.properties (this file can vary as a function of language/country), typically located in WEB-INF/classes/alfresco/messages.

Predefined Roles

Alfresco defines a few basic roles in permissionDefinitions.xml. These roles are defined in an incremental manner: first a Consumer role is created, and then the rest of the roles are defined by adding functionality to it. The predefined roles are:

Consumer: Allows reading the properties, content, and children of a node.
Editor: Adds to Consumer privileges the ability to write nodes (properties, content, and children) and execute CheckOuts on nodes with the aspect lockable.
Contributor: Adds to Consumer privileges the possibility of adding children and executing CheckOuts on nodes with the aspect lockable.
Collaborator: This role has the same capabilities as Contributor and Editor combined.
Coordinator: This role has all privileges, including the possibility of taking ownership of nodes and changing their owner.
Administrator: Same as Coordinator; it is defined for backward compatibility.

Default Permission Policy

After a first look through the permissionDefinitions.xml file in Alfresco 1.4, you will find five major PermissionSets. These sets establish node permissions across the Content Model hierarchy. It starts with the base type, which defines basic low-level permissions. The next set defines permissions for the cmobject node types, which at this level are defined in permissionGroups and will later be extended by roles. Content nodes have a set that defines the available roles.

In addition to these sets, the permissionDefinitions.xml file includes two other sets. One of these sets is for nodes that have the aspect ownable, and the other is for nodes that have the aspect lockable. The permissions defined in these sets allow the execution of typical specialised actions on these kinds of nodes: Check In, Check Out, Take Ownership, Change Owner, Lock, and Unlock.

After the PermissionSets, the configuration file declares GlobalPermissions. These permissions are applied to all nodes, and they have priority over the other permission types. At the moment, these permissions are:

FullControl: Allowed for users who have either of these roles: ROLE_ADMINISTRATOR or ROLE_OWNER.
Unlock, CheckIn, and CancelCheckOut: All of these are allowed for users who have the role ROLE_LOCK_OWNER.

New role definition example

This example is based on ideas explained in one of the Alfresco forums. The case study is a typical context where three specialized folders are needed: Drafts, Pending approval, and Published.

Drafts: Stores actual working documents.
Pending approval: This folder contains documents whose author(s) has/have requested approval.
Published: Stores the final versions of documents.

Preconditions

There exist two differentiated user groups: Creators and Approvers. Creators have full access to the folder Drafts and read access to Published; Approvers have full access to Pending approval and Published. Creators need to move documents from Drafts to Pending approval, but must not be able to see that folder's files.

To model this situation, we create two user groups called Creators and Approvers. Then we create three spaces: Drafts, Pending approval, and Published. Each space will have the following configuration:

Drafts: Uncheck the Inherit Parent Space Permissions checkbox. Invite the group Creators with the role Collaborator.
Pending approval: Uncheck the Inherit Parent Space Permissions checkbox. Invite the group Approvers with the role Coordinator.
Published: Uncheck the Inherit Parent Space Permissions checkbox. Invite the group Approvers with the role Coordinator.

As you can see, this configuration still fails one prerequisite: Creators need to move documents from Drafts to Pending approval, but cannot be allowed to see the folder or its files. Alfresco does not include any role that satisfies this, so it lets you create one.

Open the permissionDefinitions.xml file. First, define a low-level permissionGroup called CreateNodes; this group should be defined inside the permissionSet with type base, in other words, available for all kinds of nodes. Also inside this set, we need to declare the permissions that compose this permission group; in our case, only the permission _CreateChildren:

```
...
<permissionSet type="sys:base" expose="all">
   ...
   <permissionGroup name="CreateNodes" expose="true" allowFullControl="false"/>
   ...
   <permission name="_CreateChildren" expose="false">
      <grantedToGroup permissionGroup="CreateChildren"/>
      <grantedToGroup permissionGroup="CreateNodes"/> <!-- New -->
   </permission>
   ...
</permissionSet>
```

Then we need to define a role called Writer (remember, in the set with type content); this role extends the behavior of a permission group declared in the set with type cmobject. The permissionGroup Writer will include the previously defined CreateNodes:

```
...
<permissionSet type="cm:cmobject" expose="selected">
   ...
   <permissionGroup name="Writer" allowFullControl="false" expose="true">
      <includePermissionGroup type="sys:base" permissionGroup="CreateNodes"/>
   </permissionGroup>
   ...
</permissionSet>

<permissionSet type="cm:content" expose="selected">
   ...
   <permissionGroup name="Writer" extends="true" expose="true"/>
   ...
```

With these additions, we have created a new role called Writer that solves our little problem. To be allowed to use it in the Alfresco GUI, we only need to add a proper internationalization tag in webclient.properties:

```
Writer=Writer role
```

(The display text can be anything you like, for example, Writer=Our Writer role.) And now we can invite the Creators group to the Pending approval space with the Writer role.
The Creators will now be able to move documents from Drafts to Pending approval, but they will not be able to read the folder. The last operation is to add a rule called Request approval in the space Drafts that moves documents from Drafts to the space Pending approval. This is a trivial example; as homework, you can try to add rules in spaces with simple workflows to formalize the process of approvals and rejections.
In-place Editing using PHP and Script.aculo.us

Packt
26 Oct 2009
6 min read
An introduction to the in-place editing feature

In-place editing means making content available for editing just by clicking on it. We hover over the element, let the user click on it, edit the content, and update the new content to our server. Sounds complex? Not at all! It's very simple. Check out the example from www.netvibes.com shown in the following screenshot. You will notice that just by clicking on the title, we can edit and update it. Now, check out the following screenshot to see what happens when we click on the title.

In simple terms, in-place editing is about converting static content into an editable form without changing its place, and updating it using AJAX.

Getting started with in-place editing

Imagine that we can edit the content inside static HTML tags, be it a simple <p> or even a complex <div>. The basic syntax for invoking the constructor is as follows:

```
new Ajax.InPlaceEditor(element, url, [options]);
```

The constructor accepts three parameters:

element: The target static element which we need to make editable
url: We need to update the new content to the server, so we need a URL to handle the request
options: Loads of options to fully customize our element as well as the in-place editing feature

We shall look into the details of element and url in the next section. For now, let's learn about all the options that we will be using in our future examples. The following set of options is provided by the script.aculo.us library. We can use the following options with the InPlaceEditor object:

okButton: Using this option we show an OK button that the user clicks on after editing. By default it is set to true.
okText: With this option we set the text value on the OK button. By default this is set to "ok".
cancelLink: This is the link we show so that the user can cancel the action. By default it's set to true.
cancelText: This is the text we show on the cancel link. By default it's set to "cancel".
savingText: This is the text we show while the content is being saved. By default it's set to "Saving". We can also give it any other value.
clickToEditText: This is the text string that appears as the control's tooltip on mouse-hover.
rows: Using this option we specify how many rows to show to the user. By default it is set to 1. If we pass a value greater than 1, a text area is shown instead of a text box.
cols: Using this option we can set the number of columns we need to show to the user.
highlightColor: With this option we can set the background color of the element.
highlightEndColor: Using this option we can bring in the use of effects, specifying which color should be set when the action ends.
loadingText: Using this option, we can keep our users informed about what is happening on the page, with text such as "Loading" or "Processing Request".
loadTextURL: With this option we can specify a server-side URL to be contacted in order to load the initial value of the editor when it becomes active.

We also have some callback options to use along with in-place editing:

onComplete: On any successful completion of a request, this callback option enables us to call functions.
onFailure: Using this callback option, we can call functions on a request's failure.
callback: This option calls back a function to read the value in the text box or text area before initiating a save or update request.

We will be exploring all these options in our hands-on examples.
Code usage of the in-place editing features and options

Now things are simple from here on. Let's get started with the code. First, let's include all the required scripts for in-place editing:

```
<script type="text/javascript" src="src/prototype.js"></script>
<script type="text/javascript" src="src/scriptaculous.js"></script>
<script type="text/javascript" src="src/effects.js"></script>
<script type="text/javascript" src="src/controls.js"></script>
```

Once this is done, let's create a basic HTML page with some <p> and <div> elements, and add some content to them:

```
<body>
<div id="myDiv">
  First move the mouse over me and then click on ME :)
</div>
</body>
```

In this section we will be learning about the options provided with the in-place editing feature. In the hands-on section we will be working with server-side scripts for handling the data. Now, it's time to add some spicy JavaScript code and create the object for InPlaceEditor. In the following piece of code we have passed the element ID myDiv, a fake URL, and two options, okText and cancelText:

```
function makeEditable() {
  new Ajax.InPlaceEditor( 'myDiv', 'URL', {
    okText: 'Update',
    cancelText: 'Cancel'
  });
}
```

We will place this inside a function and call it on page load. So the complete script would look like this:

```
<script>
function makeEditable() {
  new Ajax.InPlaceEditor( 'myDiv', 'URL', {
    okText: 'Update',
    cancelText: 'Cancel'
  });
}
</script>
<body onload="makeEditable();">
<div id="myDiv">
  First move the mouse over me and then click on ME :)
</div>
</body>
```

Now, save the file as Inplace.html. Open it in a browser and you should see the result shown in the following screenshot:

Now, let's add all the options step by step. Remember, whatever we are adding now goes inside the definition of the constructor. First, let's add rows and columns to the object:

```
new Ajax.InPlaceEditor( 'myDiv', 'URL', {
  okText: 'Update',
  cancelText: 'Cancel',
  rows: 4,
  cols: 70
});
```

After adding the rows and cols, we should be able to see the result displayed in the following screenshot:

Now, let's set the color that will be used to highlight the element:

```
new Ajax.InPlaceEditor( 'myDiv', 'URL', {
  okText: 'Update',
  cancelText: 'Cancel',
  rows: 4,
  cols: 70,
  highlightColor: '#E2F1B1'
});
```

Drag the mouse over the element. Did you notice the change in color? You did? Great! Throughout the book we have insisted on keeping the user informed, so let's add more options to make this more appealing. We will add clickToEditText, which will be used to inform the user when the mouse hovers over the element:

```
new Ajax.InPlaceEditor( 'myDiv', 'URL', {
  okText: 'Update',
  cancelText: 'Cancel',
  rows: 4,
  cols: 70,
  highlightColor: '#E2F1B1',
  clickToEditText: 'Click me to edit'
});
```
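To round the section off, here is a sketch of how the callback options listed earlier might be wired together. The URL save.php and the parameter name content are placeholders, not part of the library:

```
new Ajax.InPlaceEditor( 'myDiv', 'save.php', {
  okText: 'Update',
  cancelText: 'Cancel',
  rows: 4,
  cols: 70,
  // Runs after the server has accepted the new content
  onComplete: function( transport, element ) {
    new Effect.Highlight( element, { startcolor: '#E2F1B1' } );
  },
  // Runs if the AJAX request fails, e.g. the server returns an error
  onFailure: function( transport ) {
    alert( 'Saving failed with status ' + transport.status );
  },
  // Builds the POST body from the edited value before it is sent
  callback: function( form, value ) {
    return 'content=' + encodeURIComponent( value );
  }
});
```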
Enhancing the User Experience with PHP 5 Ecommerce: Part 3

Packt
29 Jan 2010
8 min read
Help! It's out of stock!

If we have a product that is out of stock, we need to make it possible for our customers to sign up to be alerted when it is back in stock. If we don't do this, they will be left with the option of either going elsewhere or regularly returning to our store to check the stock level of that particular product, which would be off-putting. To discourage these customers from going elsewhere, a "tell me when it is back in stock" option saves them the need to regularly check back. Of course, it is still likely that the customer may go elsewhere; however, if our store is niche and the products are not available elsewhere, giving the customer this option will make them feel more valued.

There are a few stages involved in extending our framework to support this:

Firstly, we need to take into account stock levels. If a product has no stock, we need to insert a new template bit with an "alert me when it is back in stock" form. We need a template to be inserted when this is the case.
We then need functionality to capture and store the customer's e-mail address, and possibly their name, so that they can be informed when it is back in stock.
Next, we need to be able to inform all of the customers who expressed an interest in a particular product when it is back in stock.
Once our customers have been informed of the new stock level of the product, we need to remove their details from the database to prevent them from being informed at a later stage that there are more products in stock.
Finally, we will also require an e-mail template, which will be used when sending the e-mail alerts to our customers.

Detecting stock levels

With customizable products, stock levels won't be completely accurate. Some products may not require stock levels, such as gift vouchers and other non-tangible products. To account for this, we could either add a new field to our database to indicate to the framework that a product's stock level isn't required, or we could use an extreme or impossible value for the stock level, for example -1, to indicate this.

Changing our controller

We already have our model set up to pull the product stock level from the database; we just need our controller to take this value and use different template bits where appropriate. We could also alter our model to detect stock levels, and whether stock is required for a product; a sketch of that idea follows the controller snippet below.

```
if( $productData['stock'] == 0 )
{
    $this->registry->getObject('template')->addTemplateBit( 'stock', 'outofstock.tpl.php' );
}
elseif( $productData['stock'] > 0 )
{
    $this->registry->getObject('template')->addTemplateBit( 'stock', 'instock.tpl.php' );
}
else
{
    $this->registry->getObject('template')->getPage()->addTag( 'stock', '' );
}
```

This simple code addition imports a template file into our view, depending on the stock level.
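If we did push the decision into the model instead, hypothetical helper methods might look like the following; the method names and the internal $this->data array are assumptions based on the framework conventions shown so far, with -1 marking products that do not track stock:

```
<?php
// Hypothetical additions to the Product model.
public function requiresStock()
{
    // -1 is our sentinel for "stock levels do not apply to this product"
    return ( $this->data['stock'] != -1 );
}

public function isInStock()
{
    return ( ! $this->requiresStock() || $this->data['stock'] > 0 );
}
```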
Out of stock: a new template bit

When the product is out of stock, we need a template to contain a form for the user to complete, so that they can register their interest in that product:

```
<h2>Out of stock!</h2>
<p>We are <strong>really</strong> sorry, but this product is currently
out of stock. If you let us know your name and email address, we
will let you know when it is back in stock.</p>
<form action="products/stockalert/{product_path}" method="post">
<label for="stock_name">Your name</label>
<input type="text" id="stock_name" name="stock_name" />
<label for="stock_email">Your email address</label>
<input type="text" id="stock_email" name="stock_email" />
<input type="submit" id="stock_submit" name="stock_submit"
value="Let me know, when it is back in stock!" />
</form>
```

Here we have the form showing in our product view, allowing the customer to enter their name and e-mail address:

Tell me when it is back in stock please!

Once a customer has entered their name and e-mail address and clicked on the submit button, we need to store these details and associate them with the product. This is going to involve a new database table to maintain the relationship between products and the customers who wish to be notified when they are back in stock.

Stock alerts database table

We need to store the following information in the database to manage a list of customers interested in being alerted when products are back in stock:

Customer name
Customer e-mail address
Product

In terms of a database, the following fields would represent this:

ID: Integer (Primary Key, Auto Increment). The ID for the stock alert request.
customer: Varchar. The customer's name.
email: Varchar. The customer's e-mail address.
product: Integer. The ID of the product the customer wishes to be informed about when it is back in stock.
processed: Boolean. Whether the alert has been sent out yet.

The following SQL represents this table:

```
CREATE TABLE `product_stock_notification_requests` (
  `ID` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `customer` VARCHAR( 100 ) NOT NULL,
  `email` VARCHAR( 255 ) NOT NULL,
  `product` INT NOT NULL,
  `processed` BOOL NOT NULL,
  INDEX ( `product`, `processed` )
) ENGINE = INNODB COMMENT = 'Customer notification requests for new stock levels';

ALTER TABLE `product_stock_notification_requests`
  ADD FOREIGN KEY ( `product` ) REFERENCES `book4`.`content` (`ID`)
    ON DELETE CASCADE ON UPDATE CASCADE;
```

More controller changes

Some modifications are needed in our product's controller to process the customer's form submission and save it in the stock alerts database table. In addition to the following code, we must also change our switch statement to detect that the customer is visiting the stockalert section, so that the relevant function is called:

```
private function informCustomerWhenBackInStock()
{
    $pathToRemove = 'products/stockalert/';
    $productPath = str_replace( $pathToRemove, '', $this->registry->getURLPath() );
    require_once( FRAMEWORK_PATH . 'models/products/model.php');
    $this->model = new Product( $this->registry, $productPath );
```

Once we have included the model and checked that the product is valid, all we need to do is build our insert array containing the customer's details and the product ID, and insert it into the notifications table. We then inform the customer that we have saved their request:

```
    if( $this->model->isValid() )
    {
        $pdata = $this->model->getData();
        $alert = array();
        $alert['product'] = $pdata['ID'];
        $alert['customer'] = $this->registry->getObject('db')->sanitizeData( $_POST['stock_name'] );
        $alert['email'] = $this->registry->getObject('db')->sanitizeData( $_POST['stock_email'] );
        $alert['processed'] = 0;
        $this->registry->getObject('db')->insertRecords( 'product_stock_notification_requests', $alert );
        $this->registry->getObject('template')->getPage()->addTag( 'message_heading', 'Stock alert saved' );
        $this->registry->getObject('template')->getPage()->addTag( 'message', 'Thank you for your interest in this product, we will email you when it is back in stock.' );
        $this->registry->getObject('template')->buildFromTemplates( 'header.tpl.php', 'message.tpl.php', 'footer.tpl.php' );
    }
```

If the product wasn't valid, we tell the customer that too, so they know the notification request was not saved:

```
    else
    {
        $this->registry->getObject('template')->getPage()->addTag( 'message_heading', 'Invalid product' );
        $this->registry->getObject('template')->getPage()->addTag( 'message', 'Unfortunately, we could not find the product you requested.' );
        $this->registry->getObject('template')->buildFromTemplates( 'header.tpl.php', 'message.tpl.php', 'footer.tpl.php' );
    }
}
```

This code is very basic: it does not validate e-mail address formats, something which must be done before we try to send any e-mails out.
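A minimal validation step could be slotted in before the $alert array is built; the snippet below is a sketch using PHP's built-in filter extension, reusing the message-template pattern from above rather than any validation helper the framework may actually provide:

```
<?php
// Hypothetical guard: reject malformed e-mail addresses up front.
$email = isset( $_POST['stock_email'] ) ? trim( $_POST['stock_email'] ) : '';
if( filter_var( $email, FILTER_VALIDATE_EMAIL ) === false )
{
    $this->registry->getObject('template')->getPage()->addTag( 'message_heading', 'Invalid e-mail address' );
    $this->registry->getObject('template')->getPage()->addTag( 'message', 'Please enter a valid e-mail address so we can notify you.' );
    $this->registry->getObject('template')->buildFromTemplates( 'header.tpl.php', 'message.tpl.php', 'footer.tpl.php' );
    return;
}
```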
It is back!

Once the product is back in stock, we need to alert the customers who expressed an interest in it that it is available again, and that they can proceed to make their purchase. This isn't something we can implement now, as we don't have an administrative interface in place yet. However, we can discuss what is involved in doing this:

The administrator alters the stock level.
Customers interested in that product are looked up.
E-mails for each of those customers are generated, with relevant details such as their name and the name of the product automatically inserted.
E-mails are sent to the customers.
The database contains a processed field, so once an e-mail is sent, we can set the processed value to 1; then, once we have alerted all of our customers, we can delete those records.

This covers us in the unlikely event that all the new stock sells out while we are e-mailing customers, and a new customer completes the notification form.
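Purely as a sketch of that future routine, the following standalone function walks through those steps with plain PDO and mail(), outside the framework's own database object; the connection handling and the way the product name is obtained are assumptions:

```
<?php
// Hypothetical back-in-stock mailer, to be called from the (future)
// admin screen once a product's stock level rises above zero.
function sendStockAlerts( PDO $db, $productID, $productName )
{
    $pending = $db->prepare(
        'SELECT ID, customer, email
           FROM product_stock_notification_requests
          WHERE product = :product AND processed = 0' );
    $pending->execute( array( ':product' => $productID ) );

    $markDone = $db->prepare(
        'UPDATE product_stock_notification_requests
            SET processed = 1 WHERE ID = :id' );

    foreach( $pending->fetchAll( PDO::FETCH_ASSOC ) as $request )
    {
        $subject = $productName . ' is back in stock!';
        $body    = 'Dear ' . $request['customer'] . ",\n\n"
                 . $productName . ' is back in stock. Visit our store to place your order.';
        if( mail( $request['email'], $subject, $body ) )
        {
            // Mark as processed so nobody is e-mailed twice
            $markDone->execute( array( ':id' => $request['ID'] ) );
        }
    }

    // Finally, remove the requests we have dealt with
    $db->exec( 'DELETE FROM product_stock_notification_requests WHERE processed = 1' );
}
```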