
How-To Tutorials - Web Development


Q Replication Components in IBM Replication Server

Packt
16 Aug 2010
8 min read
The setup work divides into three layers: the DB2 database layer, the WebSphere MQ layer, and the Q replication layer.

The DB2 database layer

The first layer is the DB2 database layer, which involves the following tasks:

- For unidirectional replication, and for all replication scenarios that use unidirectional replication as the base, we need to enable the source database for archive logging (the target database does not need it).
- For multi-directional replication, all the source and target databases need to be enabled for archive logging.
- We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which is done automatically when the Q subscription is created. Setting this flag affects the minimum point-in-time recovery value for the table space containing the table, which should be carefully noted if table space recoveries are performed.

Before moving on to the WebSphere MQ layer, let's quickly look at the compatibility requirements for the database name, the table name, and the column names, and at whether we need unique indexes on the source and target tables.

Database/table/column name compatibility

In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created, and the source and target table names are specified in the Q subscription definition.

Now let's look at whether we need unique indexes on the source and target tables. We do not need to be able to identify unique rows on the source table, but we do need to be able to do so on the target table. Therefore, the target table should have one of:

- A primary key
- A unique constraint
- A unique index

If none of these exist, then Q Apply will apply updates using all columns. However, the source table must have the same constraints as the target table: any constraint that exists at the target must also exist at the source.

The WebSphere MQ layer

This is the second layer we should install and test; if this layer does not work, then Q replication will not work! We can install either the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code. If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we as DBAs want to issue MQ commands, we need our user ID added to the mqm group.

Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup are correct. For unidirectional and bidirectional replication alike, the objects to create are a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels. Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer, the Q replication layer.
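Before moving on, here is a minimal sketch of the commands behind the first two layers. This is illustrative only: the database name SOURCEDB, the schema ASN, the table TAB1, the paths, and the queue manager name QMB are placeholders, and exact options vary by DB2 and WebSphere MQ version.

    -- DB2 layer: switch the source database to archive logging
    -- (an offline backup is required before the database can be used again)
    UPDATE DB CFG FOR SOURCEDB USING LOGARCHMETH1 DISK:/db2/archive_logs
    BACKUP DATABASE SOURCEDB TO /db2/backups

    -- Flag a source table for change capture; normally done automatically
    -- when the Q subscription is created
    ALTER TABLE ASN.TAB1 DATA CAPTURE CHANGES

On the MQ side, the send queue on the source is a remote queue that points at the receive queue on the target, for example in MQSC on the source queue manager (the queue names are the ones used in the queue map described in the next section):

    DEFINE QLOCAL('QMB.XMITQ') USAGE(XMITQ)
    DEFINE QREMOTE('CAPA.TO.APPB.SENDQ.REMOTE') +
           RNAME('CAPA.TO.APPB.RECVQ') RQMNAME('QMB') +
           XMITQ('QMB.XMITQ')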
The Q replication layer

This is the third and final layer, which comprises the following steps:

1. Create the replication control tables on the source and target servers.
2. Create the transport definitions. By this we mean that we need to tell Q replication what the source and target table names are, which rows/columns we want to replicate, and which Queue Managers and queues to use.

Some of the terms covered in this section are: logical table, Replication Queue Map, Q subscription, and subscription group (SUBGROUP).

What is a logical table?

In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. In a peer-to-peer three-way scenario, for example, the logical table is made up of three tables, say TABA, TABB, and TABC, one on each server.

What is a Replication/Publication Queue Map?

The first part of the transport definitions mentioned earlier is a Queue Map definition, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map; in Event Publishing, it is called a Publication Queue Map.

Let's first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate: Q Capture sends Q Apply rows to apply, and Q Apply sends administration messages back to Q Capture. Each RQM is made up of three queues:

- A queue on the local server, called the Send Queue (SENDQ)
- Two queues on the remote server: a Receive Queue (RECVQ) and an Administration Queue (ADMINQ)

An RQM can contain only one each of SENDQ, RECVQ, and ADMINQ. The SENDQ is the queue that Q Capture uses to send source data and informational messages. The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s). The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture. Using the queues introduced in the previous section, the Replication Queue Map definition would be:

- Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on the source
- Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on the target
- Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on the target

Now let's look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that, because Event Publishing has no Q Apply component, a PQM definition is made up of only a Send Queue.

What is a Q subscription?

The second part of the transport definitions is a Q subscription, which defines a single source/target combination and the Replication Queue Map to use for that combination. We set up one Q subscription for each source/target combination. Each Q subscription needs a Replication Queue Map, so we need to make sure one is defined before trying to create a Q subscription. Note that if we are using the Replication Center, we can choose to create a Q subscription even though an RQM does not yet exist; the wizard will walk us through creating the RQM at the point at which it is needed.
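Replication Queue Maps are typically created with the Replication Center GUI or the ASNCLP command-line program. As a rough sketch (the server names, schemas, and queue map name are placeholders, and the exact clauses should be checked against your version's documentation), defining the queue map above with ASNCLP might look like this:

    ASNCLP SESSION SET TO Q REPLICATION;
    SET SERVER CAPTURE TO DB SOURCEDB;
    SET SERVER TARGET TO DB TARGETDB;
    SET CAPTURE SCHEMA SOURCE ASN;
    SET APPLY SCHEMA ASN;

    CREATE REPLQMAP CAPA_TO_APPB USING
      ADMINQ "CAPA.ADMINQ.REMOTE"
      RECVQ "CAPA.TO.APPB.RECVQ"
      SENDQ "CAPA.TO.APPB.SENDQ.REMOTE";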
The structure of a Q subscription is made up of a source and a target section, and we have to specify:

- The Replication Queue Map
- The source and target table
- The type of target table
- The type of conflict detection and action to be used
- The type of initial load, if any, that should be performed

A rough ASNCLP sketch of such a definition appears at the end of this section. If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription; for any other type of replication we cannot. Q replication does not have the concept of a subscription set as there is in SQL Replication, where the subscription set holds all the tables that are related through referential integrity. In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which enables Q Apply to apply the changes to the target tables in the correct sequence. For example, Q subscription 1 and Q subscription 2 can both use RQM1, while Q subscription 3 uses RQM3.

What is a subscription group?

A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and it is set using the SET SUBGROUP command.

Q subscription activation

In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, the Q subscription can be activated automatically (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions in the subscription group starts automatically, so we need to start the remaining Q subscriptions manually.
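Continuing the hedged ASNCLP sketch from the previous section (the subscription name EMP0001, the source table EMPLOYEE, and the target table TGTEMPLOYEE are placeholders, and the option syntax varies by product version), a unidirectional Q subscription using that queue map might be created like this:

    CREATE QSUB USING REPLQMAP CAPA_TO_APPB
      (SUBNAME EMP0001 EMPLOYEE
       OPTIONS HAS LOAD PHASE I
       TARGET NAME TGTEMPLOYEE);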


Alfresco 3 Business Solutions: Document Migration Strategies

Packt
15 Feb 2011
13 min read
Alfresco 3 Business Solutions: practical implementation techniques and guidance for delivering business solutions with Alfresco:

- Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions.
- Each and every type of business solution is implemented through the eyes of a fictitious financial organization, giving you the right amount of practical exposure you need.
- Packed with numerous case studies which will enable you to learn in various real-world scenarios.
- Learn to use Alfresco's rich API arsenal with ease.
- Extend Alfresco's functionality and integrate it with external systems.

The Best Money CMS project is now in full swing: we have the folder structure with business rules designed and implemented, and the domain content model created. It is now time to start importing any existing documents into the Alfresco repository. Most companies that implement an ECM system, and Best Money is no exception, have a substantial number of files that they want to import, classify, and make searchable in the new CMS. The planning and preparation for the document migration actually has to start a lot earlier, as there is a lot that needs to be prepared:

- Who is going to manage sorting out the files that should be migrated?
- What is the strategy and process for the migration?
- What sort of classification should be done during the import?
- What filesystem metadata needs to be preserved during the import?
- Do we need to write any temporary scripts or rules just for the import?

Document migration strategies

The first thing we need to do is figure out how the document migration is actually going to be done. There are several ways of making this happen; we will discuss a couple of them, such as via the CIFS interface and via tools. There are also some general strategies that apply to any migration method.

General migration strategies

Some common things need to be done no matter which import method is used, such as setting up a document migration staging area.

Document staging area

The end users need to be able to copy or move the documents they want to migrate to a staging area that mirrors the new folder structure we have set up in Alfresco. The best way to set up the staging area is to copy the structure from Alfresco via CIFS. When this is done, the end users can start copying files to the staging area. However, it is a good idea to train the users in the new folder structure before they start copying documents to it. We should talk to them about the folder structure changes, what rules and naming conventions have been set up, the idea behind them, and why they should be followed. If we do not train the end users in the new folder structure, they will not honor it, and the old structure will get mixed up with the new structure during the document migration, which is not something we want. We planned and implemented the new structure for today's requirements and future requirements, and we do not want it broken before we even start using the system.

The end users will typically work with the staging area over some time; it is good if they get a couple of weeks for this. It will take them time to think about which documents they want to migrate and whether any reorganization or renaming is needed.
Preserving Modified Date on imported documents

We know that Best Money wants the modified dates on their files to be preserved during the import, as they have a review process that depends on them. This means that we have to use an import method that can preserve the Modified Date of the network drive files when they are merged into the Alfresco repository. The CIFS interface cannot be used for this, as it sets the Modified Date to the current date. There are a couple of methods that can be used to import content into the repository and preserve the Modified Date:

- Create an ACP file via an external tool and then import it
- Custom code the import with the Foundation API and turn off the Audit aspect before the import
- Use an import tool that also has the ability to turn off the Audit aspect

At the time of writing (using Alfresco 3.3.3 Enterprise and Alfresco Community 3.4a), there is no easy way to import files and preserve the Modified Date. When a file is added via Alfresco Explorer, Alfresco Share, FTP, CIFS, the Foundation API, the REST API, and so on, the Created Date and Modified Date are set to "now", so we lose all the Modified Date data that was set on the files on the network drive. The Created Date, Creator, Modified Date, Modifier, and Access Date are all so-called audit properties that are automatically managed by Alfresco if a node has the cm:auditable aspect applied. If we try to set these properties during an import via one of the APIs, it will not succeed.

Most people want to import files via CIFS or via an external import tool, and Alfresco is working towards supporting date preservation with both of these methods. Currently, there is a solution for adding files via the Foundation API while preserving the dates, which can be used by custom tools. The Alfresco product itself also needs this functionality, for example in the Transfer Service Receiver, so that dates can be preserved when it receives files. The solution that enables the use of the Foundation API to set auditable properties manually was implemented in version 3.3.2 Enterprise and 3.4a Community. To be able to set audit properties, do the following:

1. Inject the policy behaviour filter into the class that should do the property update:

    <property name="behaviourFilter" ref="policyBehaviourFilter"/>

2. In the class, turn off the audit aspect before the update. This has to be done inside a new transaction, as in the following example:

    RetryingTransactionCallback<Object> txnWork = new RetryingTransactionCallback<Object>() {
        public Object execute() throws Exception {
            behaviourFilter.disableBehaviour(ContentModel.ASPECT_AUDITABLE);

3. Then, in the same transaction, update the Created or Modified Date:

            nodeService.setProperty(nodeRef, ContentModel.PROP_MODIFIED, someDate);
            . . .
        }
    };

With JDK 6, the Modified Date is the only file metadata we can access, so no other file metadata is available via the CIFS interface. If we use JDK 7, the new NIO.2 interface gives access to more metadata. So, if we are implementing an import tool that creates an ACP file, we could use JDK 7 and preserve the Created Date, the Modified Date, and potentially other metadata as well.
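Pulling the three steps above together, a fuller sketch might look like the following. This is a hedged example, not the book's listing: the behaviourFilter, transactionService, and nodeService fields are assumed to be Spring-injected beans, and nodeRef and importedModifiedDate are hypothetical variables holding the node being imported and the date read from the network drive.

    // Sketch: set an imported Modified Date without the audit behaviour firing.
    // Assumes behaviourFilter, transactionService, and nodeService are injected,
    // and nodeRef/importedModifiedDate are in scope (effectively final).
    RetryingTransactionHelper txnHelper =
            transactionService.getRetryingTransactionHelper();
    txnHelper.doInTransaction(new RetryingTransactionCallback<Void>() {
        public Void execute() throws Exception {
            // Stop Alfresco from auto-managing the cm:auditable properties
            behaviourFilter.disableBehaviour(ContentModel.ASPECT_AUDITABLE);
            try {
                nodeService.setProperty(nodeRef,
                        ContentModel.PROP_MODIFIED, importedModifiedDate);
            } finally {
                // Re-enable auditing so normal updates behave as users expect
                behaviourFilter.enableBehaviour(ContentModel.ASPECT_AUDITABLE);
            }
            return null;
        }
    }, false, true);  // readOnly=false, requiresNew=true (a new transaction)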
Post migration processing scripts

When the document migration has been completed, we might want to do further processing of the documents, such as setting extra metadata. This is specifically needed when documents are imported into Alfresco via the CIFS interface, which does not allow any custom metadata to be set during the import.

There might also be situations, as in the case of Best Money, where many of the imported documents have older filenames (that is, filenames following an older naming convention) containing important metadata that should be extracted and applied to the new document nodes.

For post-migration processing, JavaScript is a convenient tool. We can easily define Lucene queries for the nodes we want to process, as the rules have applied domain document types such as Meeting to the imported documents, and we can use regular expressions to match and extract the metadata we want to apply to the nodes.

Search restrictions when running post-migration scripts

What we have to bear in mind when running these post-migration scripts is that the repository now contains a lot of content, so each query we run might well return more than 1,000 rows, and 1,000 rows is the default maximum that a search will return. To allow, say, 5,000 rows to be returned, we have to change the permission-check configuration (Alfresco checks the permissions of each node that is accessed, so a user running a query does not get back content that he or she should not have access to). Open the alfresco-global.properties file located in the alfresco/tomcat/shared/classes directory and add the following properties:

    # The maximum time spent pruning results (was 10000)
    system.acl.maxPermissionCheckTimeMillis=100000
    # The maximum number of results to perform permission checks against (was 1000)
    system.acl.maxPermissionChecks=5000

Unwanted Modified Date updates when running scripts

So we have turned off the audit feature during the document migration, or made custom code changes to Alfresco, to get the documents' modified dates preserved during the import, and we have then turned auditing back on so the system behaves the way users expect. The last thing we want now is for all those preserved modified dates to be set to the current date when we update metadata. That is exactly what will happen if we do not run the post-migration scripts with the audit feature turned off, so this is important to remember, unless you want to start the document migration all over again.

Versioning problems when running post-migration scripts

Another thing that can cause problems is having versioning turned on for the documents we are updating with the post-migration scripts. We may then see the following error:

    org.alfresco.service.cmr.version.VersionServiceException: 07120018 The current implementation of the version service does not support the creation of branches.

By default, new versions are created even when we only update properties/metadata. This can cause errors such as the preceding one, and we might not even be able to check in and check out the document. To prevent this error from popping up, and to turn off versioning during property updates once and for all, we can set the following property at the same time as we set the other domain metadata in the scripts:

    legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;

Setting this property to false effectively turns off versioning during any property/metadata update for the document. Another potential problem is folders having been set up as versionable by mistake. The most likely reason for this is that we forgot to set up the versioning rule to apply only to cm:content (and not to "All Items").
Folders in the workspace://SpacesStore store do not support versioning. (The WCM system comes with an AVM store that supports advanced folder versioning and change sets; note that the WCM system can also store its data in the Workspace store.) So we need to update the versioning rule to apply only to content, and remove the versionable aspect from all folders that have it applied, before we can update any content in those folders. Here is a script that removes the cm:versionable aspect from any folder having it applied:

    var store = "workspace://SpacesStore";
    var query = "PATH:\"/app:company_home//*\" AND TYPE:\"cm:folder\" AND ASPECT:\"cm:versionable\"";
    var versionableFolders = search.luceneSearch(store, query);
    for each (versionableFolder in versionableFolders) {
        versionableFolder.removeAspect("cm:versionable");
        logger.log("Removed versionable aspect from folder: " + versionableFolder.name);
    }
    logger.log("Removed versionable aspect from " + versionableFolders.length + " folders");

Post-migration script to extract legacy meeting metadata

Best Money has a lot of documents that they are migrating to the Alfresco repository, and many of them have filenames following a certain naming convention. This is the case for the imported meeting documents. The naming convention for the old imported documents is not exactly the same as the new meeting naming convention, so we have to write the regular expression a little differently. An example of a filename with the new naming convention is 10En-FM.02_3_annex1.doc; the same filename with the old naming convention is 10Eng-FM.02_3_annex1.doc. The difference is that the old naming convention does not use a two-character code for the language but instead a code from this list: Arabic, Chinese, Eng|eng, F|Fr, G|Ger, Indonesian, Jpn, Port, Rus|Russian, Sp, Sw, Tagalog, Turkish.
What we are interested in extracting is the language and the department code, and the following script does that with a regular expression:

    // Regular expression definition
    var re = new RegExp("^\\d{2}(Arabic|Chinese|Eng|eng|F|Fr|G|Ger|" +
        "Indonesian|Ital|Jpn|Port|Rus|Russian|Sp|Sw|Tagalog|Turkish)" +
        "-(A|HR|FM|FS|FU|IT|M|L).*");
    var store = "workspace://SpacesStore";
    var query = "+PATH:\"/app:company_home/cm:Meetings//*\" +TYPE:\"cm:content\"";
    var legacyContentFiles = search.luceneSearch(store, query);
    for each (legacyContentFile in legacyContentFiles) {
        if (re.test(legacyContentFile.name) == true) {
            var language = getLanguageCode(RegExp.$1);
            var department = RegExp.$2;
            logger.log("Extracted and updated metadata (language=" + language +
                ")(department=" + department + ") for file: " + legacyContentFile.name);
            if (legacyContentFile.hasAspect("bmc:document_data")) {
                // Set some metadata extracted from file name
                legacyContentFile.properties["bmc:language"] = language;
                legacyContentFile.properties["bmc:department"] = department;
                // Make sure versioning is not enabled for property updates
                legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;
                legacyContentFile.save();
            } else {
                logger.log("Aspect bmc:document_data is not set for document " +
                    legacyContentFile.name);
            }
        } else {
            logger.log("Did NOT extract metadata from file: " + legacyContentFile.name);
        }
    }

    /**
     * Convert from legacy language code to new 2 char language code
     *
     * @param parsedLanguage legacy language code
     */
    function getLanguageCode(parsedLanguage) {
        if (parsedLanguage == "Arabic") {
            return "Ar";
        } else if (parsedLanguage == "Chinese") {
            return "Ch";
        } else if (parsedLanguage == "Eng" || parsedLanguage == "eng") {
            return "En";
        } else if (parsedLanguage == "F" || parsedLanguage == "Fr") {
            return "Fr";
        } else if (parsedLanguage == "G" || parsedLanguage == "Ger") {
            return "Ge";
        } else if (parsedLanguage == "Indonesian") {
            return "In";
        } else if (parsedLanguage == "Ital") {
            return "";
        } else if (parsedLanguage == "Jpn") {
            return "Jp";
        } else if (parsedLanguage == "Port") {
            return "Po";
        } else if (parsedLanguage == "Rus" || parsedLanguage == "Russian") {
            return "Ru";
        } else if (parsedLanguage == "Sp") {
            return "Sp";
        } else if (parsedLanguage == "Sw") {
            return "Sw";
        } else if (parsedLanguage == "Tagalog") {
            return "Ta";
        } else if (parsedLanguage == "Turkish") {
            return "Tu";
        } else {
            logger.log("Invalid parsed language code: " + parsedLanguage);
            return "";
        }
    }

This script can be run from any folder; it searches for all documents under the /Company Home/Meetings folder and its subfolders. All the documents returned by the search are looped through and matched against the regular expression. The regular expression defines two groups: one for the language code and one for the department. So, after a document has been matched, it is possible to back-reference the values matched in the groups using RegExp.$1 and RegExp.$2. When the language code and department code properties are set, we also set the cm:autoVersionOnUpdateProps property, so that we do not get any problems with versioning during the update.


ExpressionEngine: Creating a Photo Gallery

Packt
23 Oct 2009
6 min read
Install the Photo Gallery Module

The photo gallery in ExpressionEngine is considered a separate module, even though it is included with every personal or commercial ExpressionEngine license. Installing it is therefore very simple:

1. Log into the control panel using http://localhost/admin.php or http://www.example.com/admin.php, and select Modules from the top of the screen.
2. About a quarter of the way down the page, we can see the Photo Gallery module. In the far-right column is a link to install it. Click Install.
3. We will see a message at the top of the screen indicating that the photo gallery module was installed. That's it!

Setting Up Our Photo Gallery

Now that we have installed the photo gallery module, we need to define some basic settings and then create categories that we can use to organize our photos.

Define the Basic Settings

1. Still in the Modules tab, the photo gallery module should now have become a clickable link. Click on Photo Gallery.
2. We are presented with a message that says There are no image galleries. Select Create a New Gallery.
3. We are now prompted for our Image Folder Name. For our photo galleries, we are going to create a folder for our photos inside the images folder that should already exist. Navigate to C:\xampp\htdocs\images (or /Applications/MAMP/htdocs/images if using MAMP on a Mac), or to the images folder on your web server, and create a new folder called photos. Inside that folder, we are going to create a specific subfolder for our toast gallery images (this will keep our article photos separate from any other galleries we may wish to create). Call the new folder toast. If doing this on a web server, set the permissions of the toast folder to 777 (read, write, and execute for owner, group, and public); this will allow everyone to upload images to this folder.
4. Back in ExpressionEngine, type in the name of the folder we just created (toast) and hit Submit.
5. We are now prompted to name our template gallery. We will use the imaginative name toastgallery so that it is distinguishable from any other galleries we may create in the future. This name will be used as the default URL to the gallery and as the template group name for our gallery templates. Hit Submit.
6. We are now prompted to update the preferences for our new gallery. Expand the General Configuration option and define a Photo Gallery Name and Short Name. We are going to use Toast Photos as the Photo Gallery Name and toastphotos as the Short Name. The short name is what will be used in our templates to reference this photo gallery.
7. Next, expand the Image Paths section. Here the Image Folder Name should be the same as the folder we created earlier (in our case, toast). For XAMPP users, the Server Path to Image Directory is going to be C:/xampp/htdocs/images/photos/toast, and the Full URL to Image Directory is going to be http://localhost/images/photos/toast. For MAMP users on a Mac, or when using a web server, these paths will differ depending on your setup. Verify these settings for correctness, making adjustments as necessary.
8. Whenever we upload an image into the image gallery, ExpressionEngine creates three copies of the image: a medium-sized and a thumbnail-sized version of the image, in addition to the original image. The thumbnail image is fairly small, so we are going to double its size. Expand the Thumbnail Resizing Preferences section and, instead of a Thumbnail Width of 100, choose a width of 200.
9. Check the box (the one outside of the text box) and the height should update to 150.
10. Hit Submit to save the settings so far. We will review the rest of the settings later.

We have now created our first gallery. However, before we can start uploading photos, we need to create some categories.

Create Categories

For the purposes of our toast website, we are going to create categories based on the seasons: spring, summer, autumn, and winter. We are going to have separate subfolders for each of the categories; these are created automatically when we create the categories.

1. First, select Categories from the new menu that has appeared across the top of the screen.
2. We will see a message that says No categories exist. Select Add a New Category.
3. We are going to use a Category Name of Spring and a Description that describes the category; we will later display this description on our site. We are going to create a Category Folder of spring. Leave the Category Parent as None, and hit Submit.
4. Select Add a New Category, and continue to add three more categories, summer, autumn, and winter, in the same way.
5. After we are done creating all the categories, use the up and down arrows to order them correctly. In our case, we need to move Autumn down so that it appears after Summer.

We now have the beginnings of a photo gallery. Next, we will upload our first photos so that we can see how the gallery works.

Upload Our First Photos

Uploading a photo to a photo gallery is pretty straightforward. The example photos we are working with can be downloaded from the Packt support page at http://www.packtpub.com/files/code/3797_Graphics.zip.

1. To upload a photo, select New Entry from the menu within the photo gallery module.
2. For the File Name, click the Browse... button and browse to the photo spring1.jpg.
3. We are going to give this an Entry Title of Spring Flower.
4. For Date, we could either leave the default or enter the date the photo was taken. We are going to use a date of 2006-04-22. Click on the calendar icon to expand the view to include a calendar that can be easily navigated.
5. We are going to use a Category of Spring and a Status of Open. Leave the box checked to Allow Comments, and write a Caption that describes the photo. The Views field allows us to indicate how many times this image has been viewed; in this case, we are going to leave it at 0.
6. Hit Submit New Entry when everything is done. We are presented with a message that reads Your file has been successfully submitted, and the image now appears underneath the entry information.

In the folder where our image is uploaded, three versions of the same image are created: the original file (spring1.jpg), a thumbnail of the original file (spring1_thumb.jpg), and a medium-sized version of the original file (spring1_medium.jpg). Now, click on New Entry and repeat the same steps to upload the rest of the photos, using appropriate categories and descriptions that describe the photos. There are four example photos for each season (for example, winter1.jpg, winter2.jpg, winter3.jpg, and winter4.jpg). Having a few example photos in each category will better demonstrate how the photo gallery works.


Apache Axis2 Web Services: Writing an Axis2 Module

Packt
22 Feb 2011
14 min read
Apache Axis2 Web Services, 2nd Edition: create secure, reliable, and easy-to-use web services using Apache Axis2:

- Extensive and detailed coverage of the enterprise-ready Apache Axis2 1.5 Web Services / SOAP / WSDL engine.
- Attain a more flexible and extensible framework with the world-class Axis2 architecture.
- Learn all about AXIOM, the complete XML processing framework, which you can also use outside Axis2.
- Covers advanced topics like security, messaging, REST, and asynchronous web services.
- Written by Deepal Jayasinghe, a key architect and developer of the Apache Axis2 Web Service project, and Afkham Azeez, an elected ASF and PMC member.

Web services are gaining a lot of popularity in the industry and have become one of the major enablers for application integration. In addition, due to the flexibility and advantages of using web services, everyone is trying to enable web service support for their applications. As a result, web service frameworks need to support new and more custom requirements. One of the major goals of a web service framework is to deliver incoming messages to the target service. However, just delivering the message to the service is not enough; today's applications are required to provide reliability, security, transactions, and other quality services. In our approach, we will be using code samples to help us understand the concepts better.

Brief history of the Axis2 module

Looking back at the history of Apache Web Services, the handler concept can be considered one of the most useful and interesting ideas. Due to the importance and flexibility of the handler concept, Axis2 has also introduced it into its architecture. Notably, there are some major differences in the way you deploy handlers in Axis1 and Axis2. In Axis1, adding a handler requires global configuration changes, and for an end user this process may become a little complex. In contrast, Axis2 provides an easy way to deploy handlers: deploying a handler is similar to deploying a service and does not require global configuration changes.

At the design stage of Axis2, one of the key considerations was to have a mechanism to extend the core functionality without much effort. One of the main reasons behind this design decision was the lesson learned from supporting WS-ReliableMessaging in Axis1: the process involved a considerable amount of work, partly because of the limited extensibility of the Axis1 architecture. Therefore, learning from that lesson, Axis2 introduced a very convenient and flexible way of extending the core functionality and providing quality of services. This mechanism is known as the module concept.

Module concept

One of the main ideas behind a handler is to intercept the message flow and execute specific logic. In Axis2, the module concept provides a very convenient way of deploying service extensions. We can consider a module as a collection of handlers and the resources required to run the handlers (for example, third-party libraries). One can also consider a module as an implementation of a web service standard specification: Apache Sandesha is an implementation of the WS-ReliableMessaging specification, and Apache Rampart is an implementation of WS-Security; likewise, a module, in general, is an implementation of a web service specification.
One of the most important aspects of the Axis2 module is that it provides a very easy way to extend the core functionality and customize the framework to suit complex business requirements. A simple example would be a module that logs all incoming messages, or counts the number of messages, if requested.

Module structure

Axis1 is one of the most popular web service frameworks and provides very good support for most web service standards. However, when it comes to new and complex specifications, a significant amount of work is needed to achieve our goals. The problem becomes further complicated when the work involves handlers, configuration, and third-party libraries. To overcome this, the Axis2 module concept and its structure are good candidates. As we discussed in the deployment section, both Axis2 services and modules can be deployed as archive files. Inside an archive file, we can have configuration files, resources, and whatever else the module author would like to include.

It should be noted that we have hot deployment and hot update support for services; in other words, you can add a service while the system is up and running. Unfortunately, we cannot deploy new modules while the system is running. You can still drop them into the directory, but Axis2 will not apply the changes to the running system, so hot deployment and hot update do not apply to modules. The main reason is that, unlike services, modules tend to change the system configuration, and performing system changes at runtime on an enterprise-level application cannot be considered a good thing at all.

Adding a handler to Axis1 involves global configuration changes and, obviously, a system restart. In contrast, with Axis2 we can add handlers using modules without any global-level changes. There are instances where you need global configuration changes, but that is a very rare situation, and only if you are trying to add new phases and change the phase order. You can change the handler chain at runtime without restarting the system; however, changing the handler chain or any global configuration at runtime cannot be considered a good habit, because in a production environment changing runtime data may affect the whole system. At deployment and testing time, though, this comes in handy.

The structure of a module archive file is almost identical to that of a service archive file, except for the name of the configuration file. We know that, for a service archive file to be valid, it is required to have a services.xml. In the same way, for a module to be a valid module archive, it has to have a module.xml file inside the META-INF directory of the archive. A typical module archive file takes the structure sketched below; we will discuss each of the items in detail and create our own module in this article as well.
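As a sketch of that layout for the counter module developed in this article (the package path org/apache/axis2/sample is a placeholder for wherever the compiled classes live):

    counter.mar
      META-INF/
        module.xml
      org/apache/axis2/sample/
        CounterModule.class
        IncomingCounterHandler.class
        OutgoingCounterHandler.class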
Module configuration file (module.xml)

The module archive file is self-contained and self-described: it has to carry all the configuration required to be a valid and useful module. That is the beauty of a self-contained package. The module configuration file, module.xml, is the file that Axis2 reads to do the necessary work. A simple module.xml file has one or more handlers; in contrast, a complex module's module.xml can carry other configuration as well (for example, WS-Policies or phase rules). For our analysis, we will use the module.xml of a module that counts all the incoming and outgoing messages. The configuration items available in a module.xml are:

- Handlers along with phase rules
- Parameters
- A description of the module
- The module implementation class
- WS-Policy
- Endpoints

We will discuss the most important items in detail and briefly describe the others.

Handlers and phase rules

A module is a collection of handlers, so a module can have one or more of them. Irrespective of the number of handlers in a module, module.xml provides a convenient way to specify them. Most importantly, module.xml provides enough configuration options to add a handler into the system and to specify the exact location where the module author would like the handler to run. Phase rules are the mechanism that tells Axis2 to put a handler into a particular location in the execution chain, so it is time to look at them with an example. Before learning how to write phase rules and specify handlers in module.xml, let's look at how to write a handler. There are two ways to write a handler in Axis2:

- Implement the org.apache.axis2.engine.Handler interface
- Extend the org.apache.axis2.handlers.AbstractHandler abstract class

In this article, we are going to write a simple application to provide a better understanding of modules. Furthermore, to keep the sample application easy, we are going to ignore some of the difficulties of the Handler API. In our approach, we extend AbstractHandler; then we only need to implement one method, called invoke. The following sample code illustrates how:

    public class IncomingCounterHandler extends AbstractHandler implements CounterConstants {

        public InvocationResponse invoke(MessageContext messageContext) throws AxisFault {
            // get the counter property from the configuration context
            ConfigurationContext configurationContext =
                    messageContext.getConfigurationContext();
            Integer count = (Integer) configurationContext
                    .getProperty(INCOMING_MESSAGE_COUNT_KEY);
            // increment the counter
            count = Integer.valueOf(count.intValue() + 1 + "");
            // set the new count back to the configuration context
            configurationContext.setProperty(INCOMING_MESSAGE_COUNT_KEY, count);
            // print it out
            System.out.println("The incoming message count is now " + count);
            return InvocationResponse.CONTINUE;
        }
    }

As we can see, the method takes a MessageContext as its parameter and returns an InvocationResponse. The method works as follows:

1. First get the ConfigurationContext from the MessageContext.
2. Get the property value specified by the property name.
3. Increase the value by one.
4. Set it back on the ConfigurationContext.

In general, inside the invoke method, as a module author, you do all the logic processing, and depending on the result you get, you can decide whether to let AxisEngine continue, suspend, or abort. Depending on your decision, you return one of the three allowed return types:

- InvocationResponse.CONTINUE: give the signal to continue the message
- InvocationResponse.SUSPEND: the message cannot continue, as some of the conditions are not satisfied yet, so you need to pause the execution and wait
- InvocationResponse.ABORT: something has gone wrong, so you need to drop the message and let the initiator know about it

The corresponding CounterConstants class is just a collection of constants and looks as follows:

    public interface CounterConstants {
        String INCOMING_MESSAGE_COUNT_KEY = "incoming-message-count";
        String OUTGOING_MESSAGE_COUNT_KEY = "outgoing-message-count";
        String COUNT_FILE_NAME_PREFIX = "count_record";
    }

As we already mentioned, the sample module we are going to implement counts the number of requests coming into the system and the number of messages going out of the system. So far, we have only written the incoming message counter; we need the outgoing message counter as well, and its implementation looks like the following:

    public class OutgoingCounterHandler extends AbstractHandler implements CounterConstants {

        public InvocationResponse invoke(MessageContext messageContext) throws AxisFault {
            // get the counter property from the configuration context
            ConfigurationContext configurationContext =
                    messageContext.getConfigurationContext();
            Integer count = (Integer) configurationContext
                    .getProperty(OUTGOING_MESSAGE_COUNT_KEY);
            // increment the counter
            count = Integer.valueOf(count.intValue() + 1 + "");
            // set it back to the configuration
            configurationContext.setProperty(OUTGOING_MESSAGE_COUNT_KEY, count);
            // print it out
            System.out.println("The outgoing message count is now " + count);
            return InvocationResponse.CONTINUE;
        }
    }

The implementation logic is exactly the same as for the incoming handler, except for the property name used in two places.

Module implementation class

When we work with enterprise-level applications, it is obvious that we have to initialize various settings such as database connections, thread pools, property reading, and so on. Therefore, you need a place in your module to put that logic. We know that handlers run only when a request comes into the system, not at system initialization time. The module implementation class provides a way to run logic at system initialization time as well as at system shutdown. As mentioned earlier, the module implementation class is optional; a very good example of a module without one is the Axis2 addressing module. However, to understand the concept clearly, our example application implements one, as shown below:

    public class CounterModule implements Module, CounterConstants {

        private static final String COUNTS_COMMENT = "Counts";
        private static final String TIMESTAMP_FORMAT = "yyMMddHHmmss";
        private static final String FILE_SUFFIX = ".properties";

        public void init(ConfigurationContext configurationContext, AxisModule axisModule)
                throws AxisFault {
            // initialize our counters
            System.out.println("inside the init : module");
            initCounter(configurationContext, INCOMING_MESSAGE_COUNT_KEY);
            initCounter(configurationContext, OUTGOING_MESSAGE_COUNT_KEY);
        }

        private void initCounter(ConfigurationContext configurationContext, String key) {
            Integer count = (Integer) configurationContext.getProperty(key);
            if (count == null) {
                configurationContext.setProperty(key, Integer.valueOf("0"));
            }
        }

        public void engageNotify(AxisDescription axisDescription) throws AxisFault {
            System.out.println("inside the engageNotify " + axisDescription);
        }

        public boolean canSupportAssertion(Assertion assertion) {
            // returns whether policy assertions can be supported
            return false;
        }

        public void applyPolicy(Policy policy, AxisDescription axisDescription)
                throws AxisFault {
            // configure using the passed-in policy!
        }

        public void shutdown(ConfigurationContext configurationContext) throws AxisFault {
            // do cleanup - in this case we'll write the values of the counters to a file
            try {
                SimpleDateFormat format = new SimpleDateFormat(TIMESTAMP_FORMAT);
                File countFile = new File(COUNT_FILE_NAME_PREFIX
                        + format.format(new Date()) + FILE_SUFFIX);
                if (!countFile.exists()) {
                    countFile.createNewFile();
                }
                Properties props = new Properties();
                props.setProperty(INCOMING_MESSAGE_COUNT_KEY,
                        configurationContext.getProperty(INCOMING_MESSAGE_COUNT_KEY).toString());
                props.setProperty(OUTGOING_MESSAGE_COUNT_KEY,
                        configurationContext.getProperty(OUTGOING_MESSAGE_COUNT_KEY).toString());
                // write to a file
                props.store(new FileOutputStream(countFile), COUNTS_COMMENT);
            } catch (IOException e) {
                // if we have exceptions we'll just print a message and let it go
                System.out.println("Saving counts failed! Error is " + e.getMessage());
            }
        }
    }

As we can see, there are a number of methods in the module implementation class above; notably, not all of them belong to the Module interface. The Module interface has only the following methods (the others support our counter-module-related logic):

- init
- engageNotify
- applyPolicy
- shutdown

At system startup time, the init method is called, and at that point the module can perform various initialization tasks. In our sample module, we initialize both the in-counter and the out-counter.

When we engage this particular module to the whole system, to a service, or to an operation, the engageNotify method is called. At that point, a module can decide whether to allow the engagement or not. Say, for example, we try to engage the security module to a service, and the module finds a conflict in the encryption algorithm; in that case, the module cannot engage, it throws an exception, and Axis2 will not engage it. In this example, we do nothing inside engageNotify.

As you might already know, WS-Policy is one of the key standards and plays a major role in web service configuration. When you engage a particular module to a service, the module's policy should be applied to the service and should be visible when we view the WSDL of that service. The applyPolicy method sets the module's policy on the corresponding services or operations when we engage the module. In this particular example, we do not have any policy associated with the module, so we do not need to worry about this method.

As discussed for the init method, the shutdown method is called when the system shuts down. If we want to do any kind of processing at that time, we can add the logic there. In our example, for demonstration purposes, we store the counter values in a file.


Instructional Material using Moodle 1.9: Part 2

Packt
29 Jan 2010
6 min read
Keeping discussions on track

One of the biggest challenges in using forums for an online class is keeping discussions focused on the topic. This becomes even more difficult when you allow students to create new topics in a forum. Moodle offers two tools that you can use to help keep discussions on track: custom scales and splitting discussions.

Use a custom scale to rate relevance

Moodle enables you to use a scale to rate students' work. A scale offers you something other than a grade to give the student as feedback. Scales can be used to rate forum postings, assignment submissions, and glossary entries. To create and apply a custom scale, follow these steps:

1. Users with the role Administrator, Course creator, or Teacher can create custom scales. From the Administration block, click on Scales. This displays the Scales page.
2. On the Scales page, click on the Add a new scale button. This displays the Editing scale page.
3. On the Editing scale page, enter a Name for the scale. When you apply the scale to the forum, you will select the scale by this name.
4. In the Scale box, enter the items on your scale, separating each item with a comma.
5. Write a Description for your scale. Students can see the description, so use this space to explain how they should interpret the scale.
6. Select the Save changes button. You are now ready to apply the scale.
7. Create or edit the forum to which you want to apply the scale. The key setting on the Editing Forum page is Allow posts to be rated?
8. When you review the student postings in the forum, you can rate each posting using the scale you created.
9. When you finish rating the postings, click on the Send in my ratings button at the bottom of the page to save your ratings.

Split discussions

Users with the role Administrator, Course creator, or Teacher can split a discussion. When you split a discussion at a post, the selected post and the ones below it become a new topic. Note that you cannot take a few posts from the middle of a topic and split them into a new discussion: splitting takes every post that is nested below the selected one and puts it into a new topic.

Before the split:

    Topic 1
        Reply 1-1
        Reply 1-2
            Reply 1-2-1
            Reply 1-2-2
            Reply 1-2-3
        Reply 1-3
        Reply 1-4
            Reply 1-4-1
            Reply 1-4-2

After the split:

    New Topic 1-2
        Reply 1-2-1
        Reply 1-2-2
        Reply 1-2-3

    Topic 1
        Reply 1-1
        Reply 1-3
        Reply 1-4
            Reply 1-4-1
            Reply 1-4-2

Will splitting change the meaning?

Splitting a thread can rescue a conversation that has gotten off topic. However, it can also change the meaning of the conversation in ways that you don't expect or want. Note that in the preceding example, after the split, the new topic is moved to the top of the forum. Will that change the meaning of your forum? Let's look at an example: the first topic in a forum on the October Revolution of Russian history, in which students discuss whether the revolution was a coup or a popular uprising. The teacher made the first posting and several students have posted replies.
Some of the replies favor the theory that the revolution was a coup, while others favor the theory that it was a popular uprising. Note that the posting by Student2 is a reply (Re) to the posting by Student1. You might have missed that, because the reply is not indented; that is because the teacher has selected Display replies flat, with oldest first. If the teacher had selected Display replies in nested form, you would see Student2's reply indented, or nested, under Student1's reply. We can tell that Student2 is replying to Student1 because the subject line indicates it is a reply to Student1 (Re: My vote: popular uprising).

The first two postings are pro-uprising; the last posting is pro-coup. It occurs to the teacher that it would facilitate discussion to split the forum into pro-uprising and pro-coup topics. The teacher scrolls down to the pro-coup posting, which happens to be the last posting in this forum, and clicks on Split. This makes a new topic out of the pro-coup posting.

Will splitting move replies you want to keep in place?

In this example, the teacher was lucky: under the pro-coup posting, there were no pro-uprising replies. If there had been, those replies would have moved with the pro-coup posting, and the teacher would not have been able to make a topic that was completely pro-coup.


Installing and Configuring Joomla! 1.5

Packt
25 Sep 2010
7 min read
Building job sites with Joomla!: a practical stepwise tutorial to build your professional website using Joomla!:

- Build your own monster.com using Joomla!
- Take your job site to the next level using the commercial Jobs! extension
- Administer and publish your Joomla! job site easily using the Joomla! 1.5 administrator panel and the Jobs! Pro control panel interface
- Boost your job site's ranking in search engines using Joomla! SEO

Introduction

You may have various approaches for building a jobsite with job search and registration facilities for users, providing services to your clients such as job posting, an online application process, resume search, and so on. Joomla! is one of the best approaches and an affordable solution for building your jobsite, even if you are a novice to Joomla!. This is because Joomla! is a free, open source Content Management System (CMS), which provides one of the most powerful web application development frameworks available today. These are all reasons for building a jobsite with Joomla!:

- It has a friendly interface for all types of users: designers, developers, authors, and administrators.
- This CMS has been growing rapidly and improving since its release.
- Joomla! is designed to be easy to install and set up, even if you're not an advanced user.
- You need less time and effort to build a jobsite with Joomla!.

You need a Joomla! jobsite extension to build your jobsite, and you can use the commercial extension Jobs! because it is fully equipped to operate a jobsite, featuring tools to manage jobs, resumes, applications, and subscriptions. Whether you are looking to build a jobsite such as Monster or CareerBuilder, a niche jobs listing such as TechCrunch's, or just to post job ads on your company site, Jobs! is an ideal solution. To know more about this extension, visit its official website at http://www.instantphp.com/.

Jobs! has two variations: Jobs! Pro and Jobs! Basic. Jobs! Pro provides some additional features and facilities that are not available in Jobs! Basic. You can use either one, depending upon your needs and budget, but if you need full control over your jobsite and more customization facilities, then Jobs! Pro is recommended. You can install the Jobs! component and its modules easily, like any other Joomla! extension. You need to spend only a few minutes to install and configure Joomla! 1.5 and Jobs! Pro 1.3 or Jobs! Basic 1.0. It is a stepwise setup process, but first you must ensure that your system meets all the requirements recommended by the developers.

Prerequisites for installation of Joomla! 1.5 and Jobs!

Joomla! is written in PHP and mainly uses a MySQL database to store and manipulate information. Before installing Joomla! 1.5 and the Jobs! extension, we need a server environment that includes the following:

- PHP: minimum 5, recommended 5.2 (http://php.net)
- MySQL: minimum 4.1 or above, recommended 5 (http://dev.mysql.com/downloads/mysql/5.0.html)
- Apache: minimum 1.3 or above (http://httpd.apache.org)
- IIS: minimum 6, recommended 7 (http://www.iis.net/)
- mod_mysql, mod_xml, and mod_zlib

You must ensure that you have the MySQL, XML, and zlib functionality enabled within your PHP installation. This is controlled within the php.ini file.
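One quick way to verify this, assuming you have shell access to the server, is to list PHP's loaded modules and look for the three in question:

    php -m | grep -E -i 'mysql|xml|zlib'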
Setting up a local server environment

In order to run Joomla! properly, we need a server environment with pre-installed PHP and MySQL. You can use a virtual server or choose another hosting option, but if you want to try out Joomla! on your own computer before using a remote server, you can set up a local server environment.

To set up a server environment, we can use the XAMPP solution. It comes equipped with the Apache HTTP server, PHP, and MySQL. Installing these components individually is quite difficult and needs more time and effort. To install XAMPP, download the latest version of XAMPP 1.7.x from the Apache Friends website: http://www.apachefriends.org/en/xampp.html.

Windows users can install XAMPP for Windows in two different variations: a self-extracting RAR archive and a ZIP archive. If you want to use the self-extracting RAR archive, first download the .exe file and then follow these steps:

1. Run the installer file, choose a directory, and click on the Install button.
2. After extracting XAMPP, the setup script setup_xampp.bat will start automatically.
3. After the installation is done, click on Start | All Programs | Apache Friends | XAMPP | XAMPP Control Panel.
4. Start Apache and MySQL by clicking on the Start buttons beside each item. If prompted by Windows Firewall, click on the Unblock button.

For more information on installing XAMPP on Windows, or for troubleshooting, go to the Windows FAQs page: http://www.apachefriends.org/en/faq-xampp-windows.html.

If you are using the Linux platform, download the compressed .tar.gz file and follow these steps for installation:

1. Go to a Linux shell and log in as the system administrator root: su
2. Extract the downloaded archive file to /opt: tar xvfz xampp-linux-1.7.3a.tar.gz -C /opt
3. XAMPP is now installed in the /opt/lampp directory. To start XAMPP, call the command: /opt/lampp/lampp start

You should now see something similar to the following on your screen:

Starting XAMPP 1.7.3a...
LAMPP: Starting Apache...
LAMPP: Starting MySQL...
LAMPP started.

For more information on installing XAMPP on Linux, or for troubleshooting, go to the Linux FAQs page: http://www.apachefriends.org/en/faq-xampp-linux.html.

If you want to use XAMPP on the Mac operating system, download the .dmg file and follow these steps:

1. Open the DMG image.
2. Drag and drop the XAMPP folder into your Applications folder.
3. XAMPP is now installed in the /Applications/XAMPP directory. To start XAMPP, open XAMPP Control and start Apache and MySQL.

After installing XAMPP, test your installation by typing the following URL in the browser: http://localhost/. You will see the XAMPP start page.

Uploading installation packages and files to the server

Now, we need to copy or transfer the Joomla! installation package files to the server. Before copying the installation package, we must download Joomla_1.5.15-Stable-Full_Package.zip from the webpage http://www.joomla.org/download.html, and then extract it. You can use WinZip or WinRAR to unzip these files. After unzipping the files, you have to copy them to your server root folder (for Apache, this is the htdocs folder). If you are not using XAMPP or a local server environment, you need File Transfer Protocol (FTP) software to transfer files to your server root folder, such as htdocs or wwwroot. A popular FTP program is FileZilla, which is absolutely free and available for different platforms, including Windows, Linux, and Mac OS. You can get it from the website http://filezilla-project.org/.
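Once the files are in place, you may also want to confirm that the server executes PHP from the document root. A common quick check is a throwaway test file; remove it once you are done, since it exposes configuration details:

<?php
// Save as test.php in the server root (htdocs) and browse to
// http://localhost/test.php; delete the file after verifying.
phpinfo();
?>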
Creating a database and user

Before installing and configuring Joomla! and the Jobs! extension, we also need to create a database and a database user. You can easily add a new database and user by using phpMyAdmin in the XAMPP server environment. To add a database using phpMyAdmin, follow these steps:

1. Type the address http://localhost/phpmyadmin in the web browser. The front page of phpMyAdmin will be displayed.
2. Type a name for the database you want to create (for example, my_db) in the Create new Database field, and then click on the Create button to create the database.
3. To connect to the database, we need a user account. You can add a user account by clicking on the Privileges tab of phpMyAdmin. You will see all users' information.
4. Click on the Add a new User link in the Privileges window. After clicking on the link, a new window will appear. Provide the required information in the Login Information section of this window and then click on the Go button.

We have now completed the preparation stage of installing Joomla!.
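Before running the Joomla! installation wizard, you can verify that the new database and user work together with a short PHP test script. This is a sketch only: my_db, my_user, and secret are example values standing in for whatever you entered in phpMyAdmin.

<?php
// Try connecting to MySQL with the database and user created above.
// Replace the example credentials with your own.
$link = mysql_connect('localhost', 'my_user', 'secret');
if (!$link) {
    die('Connection failed: ' . mysql_error());
}
if (!mysql_select_db('my_db', $link)) {
    die('Could not select database: ' . mysql_error());
}
echo 'Database connection OK';
mysql_close($link);
?>

If the script prints "Database connection OK", the credentials are ready to be entered into the Joomla! installation wizard.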

Creating sheet objects and starting new list using Qlikview 11

Packt
20 Aug 2013
6 min read
How it works...

To add the list box for a company, right-click in the blank area of the sheet and choose New Sheet Object | List Box, as shown in the following screenshot:

As you can see in the drop-down menu, there are multiple types of sheet objects to choose from, such as List Box, Statistics Box, Chart, Input Box, Current Selections Box, Multi Box, Table Box, Button, Text Object, Line/Arrow Object, Slider/Calendar Object, and Bookmark Object. We will only cover a few of them in the course of this article. The Help menu on the top menu bar and the extended examples available on the QlikView website will allow you to explore ideas beyond the scope of this article.

Choose the List Box sheet object to add the company dimension to our analysis. The New List Box wizard has eight tabs: General, Expressions, Sort, Presentation, Number, Font, Layout, and Caption, as shown in the following screenshot:

Give the new list box the title Company. The Object ID will be system generated. We choose the Company field from the fields available in the data file that we loaded. We can check the Show Frequency box to show frequency in percent, which will only tell us how many account lines in October were loaded for each company.

In the Expressions tab, we can add formulas for analyzing the data. Here, click on Add and choose Average. Since we only have numerical data in the Amount field, we will use the Average aggregation for the Amount field. Don't forget to click on the Paste button to move your expression into the expression checker. The expression checker will tell you whether the expression format is valid or there is a syntax problem. If you forget to move your expression into the expression checker with the Paste button, the expression will not be saved and will not appear in your application.

The remaining tabs work as follows:

- The Sort tab allows you to change the sort criteria from text to numeric or dates. We will not change the sort criteria here.
- The Presentation tab allows you to adjust things such as column or row header wrap, cell borders, and background pictures.
- The Number tab allows us to override the default format, telling the sheet to format the data as money, percentage, or date, for example. We will use this tab on our table box, currently labeled Sum(Amount), to format the amount as money after we have finished creating our new company list box.
- The Font tab lets us choose the font that we want to use, its display size, and whether to make our font bold.
- The Layout tab allows us to establish and apply themes, and to format the appearance of the sheet object, in this case the list box.
- The Caption tab further formats the sheet object and, in the case of the list box, allows you to choose the icons that will appear in the top menu of the list box, so that we can use those icons to select and clear selections. In this example, we have selected search, select all, and clear.

We can see that the percentage contribution to the amount and the average amount are displayed in our list box.

Now, we need to edit our straight table sheet object along with the amount. Right-click on the straight table sheet object and choose Properties from the pop-up menu. In the General tab, give the table a suitable name; in this case, use Sum of Accounts. Then move over to the Number tab and choose Money for the number format.
Click on Apply to immediately apply the number format, and click on OK to close the wizard. Now our straight table sheet object has easier-to-read dollar amounts. One of the things we notice immediately in our analysis is that we are out of balance by one dollar and fifty-nine cents, as shown in the following screenshot:

We can analyze our data just using the list boxes, by selecting a company from the Company list and seeing which account groups and which cost centers are included (white) and which are excluded (gray). Our selected company is highlighted in green. By selecting Cheyenne Holding, we can see that it is indeed a holding company and has no manufacturing groups, sales accounting groups, or cost centers. The company is also in balance.

But what about a more graphic visual analysis? To create a chart to further visualize and analyze our data, we are going to create a new sheet object. This time we are going to create a bar chart so that we can see the various company contributions to administrative costs or sales by the Acct.5 field, the account number. Just as when we created the company list box, we right-click on the sheet and choose New Sheet Object | Chart. This opens the Chart Properties wizard:

We follow the steps through the chart wizard by giving the chart a name and selecting the chart type and the dimensions we want to use. Again our expression is going to be SUM(Amount), but we will use the Label option and name it Total Amount in the Expression tab. We have selected the Company and Acct.5 dimensions in the Dimension tab, and we take the defaults for the rest of the wizard tabs. When we close the wizard, the new bar chart appears on our sheet, and we can continue our analysis.

In the following screenshot, we have chosen Cheyenne Manufacturing as our Company and all of Sales/COS Trade to Mexico Branch as Account Groups. These two selections then show us, in our straight table, the cost centers that are associated with Sales/COS Trade to Mexico Branch. In our bar chart, we see the individual accounts associated with Sales/COS Trade to Mexico Branch and Cheyenne Manufacturing, along with the related amounts posted for these accounts.

Summary

We created more sheet objects, starting with a new list box to begin analyzing our loaded data. We also added dimensions for analysis.

Installing and Configuring Drupal

Packt
11 Jul 2012
7 min read
Installing Drupal

There are a number of different ways to install Drupal on a web server, but in this recipe we will focus on the standard, most common installation: Drupal running on an Apache server with PHP and a MySQL database. We will download the latest Drupal release and walk you through all of the steps required to get it up and running.

Getting ready

Before beginning, you need to ensure that you meet the following minimal requirements:

- Web hosting with FTP access (or file access through a control panel).
- A server running PHP 5.2.5+ (5.3+ recommended).
- A blank MySQL database and the login credentials to access it.
- Register globals set to off in the php.ini file. You may need to contact your hosting provider to do this.

How to do it...

1. The first step is to download the latest Drupal 7 release from the Drupal download page, located at http://drupal.org/project/drupal. This page displays the most recent and recommended releases for both Drupal 6 and 7. It also displays the most recent development versions, but be sure to download the recommended release (development versions are for developers who want to stay on the cutting edge).
2. When the file is downloaded, extract it and upload the files to your chosen web server document root directory on the server. This may take some time.
3. Configure your web server document root and server name (usually through a vhost directive).
4. When the upload is complete, open your browser and type the server name configured in the previous step into the address bar to begin the installation wizard. Select the Standard option and then select Save and continue.
5. The next screen that you will see is the language selection screen; there should only be one language available at this point. Ensure that English is selected before proceeding.
6. Following a requirements check, you will arrive at the database settings page. Enter your database name, username, and password in the required fields. Unless your database details have been supplied with a specific host name and port, you should leave the advanced options as they are and continue.
7. You will now see the Site configuration page. Under Site information, enter the name you would like to appear as the site's name. For Site e-mail address, enter an e-mail address. Under the SITE MAINTENANCE ACCOUNT box, enter a username for the admin user (also known as user 1), followed by an e-mail address and password.
8. In the Server settings box, select your country from the drop-down, followed by your local time zone. Finally, in the Update notification box, ensure that both options are selected.
9. Click on Save and continue to complete the installation. You will be presented with the congratulations page, with a link to your new site.

How it works...

On the server requirements page, Drupal will carry out a number of tests. It is a requirement that PHP "register globals" is set to off or disabled. Register globals is a feature of PHP that allows global variables to be set from the contents of the Environment, GET, POST, Cookie, and Server variables. It can be a major security risk, as it enables potential hackers to overwrite important variables and gain unauthorized access.

The Configure site page is where you specify the site name and e-mail addresses for the site and the admin user.
The admin e-mail address will be used to contact the administrator with notifications from the site, and the site e-mail address is used as the originating e-mail address when the site sends e-mails to users. You can change these settings later on the Site information page in the Configuration section. It's important to select the options to receive site notifications so that you are aware when software updates are available for your site core and contrib modules; important security updates are released from time to time.

There's more...

In this recipe we have seen a regular Drupal installation procedure. There are various other ways to install and configure Drupal. We will explore some of these alternatives in the following sections. We will also cover some of the potential pitfalls you may come across with the requirements page.

Uploading through a control panel

If your web-hosting provider gives you web access to your files through a control panel such as cPanel, you can save time by uploading the compressed Drupal installation package and running the unzip function on the file, if that functionality is provided. This will dramatically reduce the amount of time taken to perform the installation.

Auto-installers

There are other ways in which Drupal can be installed. Your hosting may come with an auto-installer such as Fantastico De Luxe or Softaculous. Both of these services provide a simple way to achieve the same results without the need to use FTP or to configure a database.

Database table prefixes

At the database setup screen, there is an option to use a table prefix. Any prefix entered into the field will be added to the start of all table names in the database. This means that you could run multiple installations of Drupal, or possibly other CMSs, from the same database by setting a different prefix. This method, however, has implications for performance and maintenance.

Installing on a Windows environment

This recipe deals with installing Drupal on a Linux server. However, Drupal runs perfectly well on an IIS (Windows) server. Using Microsoft's WebMatrix software, it's easy to set up a Drupal site: http://www.microsoft.com/web/drupal

Alternative languages

Drupal supports many different languages. You can view and download the language packs at http://localize.drupal.org/download. You then need to upload the file to Drupal root/profiles/standard/translations. You will then see the option for that new language on the language selection page of the installation.

Verifying the requirements page

If all goes to plan and the server is already configured correctly, the server requirements page will be skipped. However, you may come across problems in a few areas:

- Register globals: This should be set to off in the php.ini file. This is very important in securing your site. If you find that register globals is turned on, you will need to consult your hosting provider's documentation on this feature in order to switch it off.
- Drupal will attempt to create the folder Drupal root/sites/default/files. If it fails, you may have to manually create this folder on the server and give it the permission 755.
- Drupal will attempt to create a settings.php file by copying the default.settings.php file. If Drupal has trouble doing this, copy the file Drupal root/sites/default/default.settings.php and rename the copy settings.php. Give settings.php full write access (chmod 777); a scripted version of this step is sketched below.
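If you have script access to the server, the settings.php step can also be done with a few lines of PHP instead of through an FTP client. This is only a sketch, assuming the script runs from the Drupal root and the standard directory layout is in place:

<?php
// Copy default.settings.php to settings.php and open it up for the
// installer. Run this from the Drupal root directory.
$default  = 'sites/default/default.settings.php';
$settings = 'sites/default/settings.php';
if (!file_exists($settings) && !copy($default, $settings)) {
    die("Could not copy $default to $settings\n");
}
chmod($settings, 0777); // full write access, for the installer only
echo "settings.php is ready for the installer\n";
?>

After the installation completes, the file should be locked down again, as described next.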
After Drupal finishes the installation process, it will try to set the permission of this file to 444; you must check that this has been done, and manually set the file to 444 if it has not.

See also

See Installing Drupal distributions for more installation options using a preconfigured Drupal distribution. For more information about installing Drupal, see the installation guide at Drupal.org: http://drupal.org/documentation/install

Resource-Oriented Clients with REST Principles

Packt
22 Oct 2009
8 min read
Designing Clients

While designing the library service, the ultimate outcome was the mapping of business operations to URIs and HTTP verbs. The client design is governed by this mapping. Prior to service design, the problem statement was analyzed. To consume the service and invoke its business operations from clients, there needs to be some understanding of how the service intends to solve the problem. In other words, the service, by design, has already solved the problem; however, the semantics of the solution provided by the service need to be understood by the developers implementing the clients. The semantics of the service are usually documented in terms of business operations and the relationships between those operations. Sometimes, the semantics are obvious. As an example, in the library system, a member returning a book must have already borrowed that book: the borrow book operation precedes the return book operation. Client design must take these semantics into account.

Resource Design

Following is the URI and HTTP verb mapping for the business operations of the library system (URI, HTTP method, collection, resource operation, and business operation):

- /book, GET (books, retrieve): Get books
- /book, POST (books, create): Add book(s)
- /book/{book_id}, GET (books, retrieve): Get book data
- /member, GET (members, retrieve): Get members
- /member, POST (members, create): Add member(s)
- /member/{member_id}, GET (members, retrieve): Get member data
- /member/{member_id}/books, GET (members, retrieve): Get member borrowings
- /member/{member_id}/books/{book_id}, POST (members, create): Borrow book
- /member/{member_id}/books/{book_id}, DELETE (members, delete): Return book

When it comes to client design, the resource design is given, and is an input to the client design. When implementing clients, we have to adhere to the design given to us by the service designer. In this example, we designed the API given in the above table, so we are already familiar with the API. Sometimes, you may have to use an API designed by someone else, in which case you would have to ensure that you have access to information such as:

- Resource URI formats
- HTTP methods involved with each resource URI
- The resource collection that is associated with the URI
- The nature of the operation to be executed, combining the URI and the HTTP verb
- The business operation that maps the resource operation to the real-world context

Looking into the above resource design table, we can identify two resources, book and member, and we can understand some of the semantics associated with the business operations on those resources:

- Create and retrieve books
- Create and retrieve members
- Borrow book, list borrowed books, and return book
- Book ID and member ID can be used to invoke operations specific to a particular book or member instance

System Implementation

In this section, we will use client programming techniques to consume the library service. These techniques include:

- Building requests using XML
- Sending requests with the correct HTTP verbs, using an HTTP client library such as CURL
- Receiving XML responses and processing them to extract the information that we require

Retrieving Resource Information

Here is the PHP source code to retrieve book information.
<?php
$url = 'http://localhost/rest/04/library/book.php';
$client = curl_init($url);
curl_setopt($client, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($client);
curl_close($client);
$xml = simplexml_load_string($response);
foreach ($xml->book as $book) {
    echo "$book->id, $book->name, $book->author, $book->isbn <br/>\n";
}
?>

The generated output is shown below. As per the service design, all that is required is to send a GET request to the URL of the book resource. And as per the service semantics, we expect the response to be something similar to:

<books>
  <book>
    <id>1</id>
    <name>Book1</name>
    <author>Auth1</author>
    <isbn>ISBN0001</isbn>
  </book>
  <book>
    <id>2</id>
    <name>Book2</name>
    <author>Auth2</author>
    <isbn>ISBN0002</isbn>
  </book>
</books>

So in the client, we convert the response to an XML tree:

$xml = simplexml_load_string($response);

and generate the output that we desire from the client. In this case, we print all the books:

foreach ($xml->book as $book) {
    echo "$book->id, $book->name, $book->author, $book->isbn <br/>\n";
}

The output is:

1, Book1, Auth1, ISBN0001
2, Book2, Auth2, ISBN0002

Similarly, we can retrieve all the members with the following PHP script:

<?php
$url = 'http://localhost/rest/04/library/member.php';
$client = curl_init($url);
curl_setopt($client, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($client);
curl_close($client);
$xml = simplexml_load_string($response);
foreach ($xml->member as $member) {
    echo "$member->id, $member->first_name, $member->last_name <br/>\n";
}
?>

Next, retrieving the books borrowed by a member:

<?php
$url = 'http://localhost/rest/04/library/member.php/1/books';
$client = curl_init($url);
curl_setopt($client, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($client);
curl_close($client);
$xml = simplexml_load_string($response);
foreach ($xml->book as $book) {
    echo "$book->id, $book->name, $book->author, $book->isbn <br/>\n";
}
?>

Here we are retrieving the books borrowed by the member with ID 1. Only the URL differs; the rest of the logic is the same.

Creating Resources

Books, members, and borrowings can be created using POST operations, as per the service design. The following PHP script creates new books:

<?php
$url = 'http://localhost/rest/04/library/book.php';
$data = <<<XML
<books>
  <book><name>Book3</name><author>Auth3</author><isbn>ISBN0003</isbn></book>
  <book><name>Book4</name><author>Auth4</author><isbn>ISBN0004</isbn></book>
</books>
XML;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>

When data is sent with the POST verb to the URI of the book resource, the posted data is used to create resource instances. Note that, in order to figure out the format of the XML message to be used, you have to look into the service operation documentation. This is where knowledge of the service semantics comes into play. Next is the PHP script to create members:

<?php
$url = 'http://localhost/rest/04/library/member.php';
$data = <<<XML
<members>
  <member><first_name>Sam</first_name><last_name>Noel</last_name></member>
</members>
XML;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>

This script is very similar to the script that creates books.
The only differences are the endpoint address and the XML payload used. The endpoint address refers to the location where the service is located. In the above script, the endpoint address of the service is:

$url = 'http://localhost/rest/04/library/member.php';

Next, borrowing a book can be done by posting to the member URI with the ID of the member borrowing the book and the ID of the book being borrowed:

<?php
$url = 'http://localhost/rest/04/library/member.php/1/books/2';
$data = <<<XML
XML;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>

Note that, in the above sample, we are not posting any data to the URI. Hence the XML payload is empty:

$data = <<<XML
XML;

As per the REST architectural principles, we just send a POST request with all resource information on the URI itself. In this example, the member with ID 1 is borrowing the book with ID 2:

$url = 'http://localhost/rest/04/library/member.php/1/books/2';

One thing to note in these client scripts is that we have used hard-coded URLs and parameter values. When you use these scripts with an application that has a web-based user interface, those hard-coded values need to be parameterized. And we send a POST request to this URL:

curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);

Note that, even though the XML payload that we are sending to the service is empty, we still have to set the CURLOPT_POSTFIELDS option for CURL. This is because we have set CURLOPT_POST to true, and the CURL library mandates setting the POST fields option even when it is empty. This script causes a book borrowing to be created on the server side. When the member.php script receives a request of the form /{member_id}/books/{book_id} with the HTTP verb POST, it maps the request to the borrow book business operation. So, the URL

$url = 'http://localhost/rest/04/library/member.php/1/books/2';

means that the member with ID 1 is borrowing the book with ID 2.
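The resource design table at the start of this article also maps a DELETE request on the same URI to the return book business operation. That client is not shown above, but following the same CURL pattern it might look like the following sketch (member ID 1 and book ID 2 are example values):

<?php
// Member with ID 1 returns the book with ID 2.
// A DELETE on the borrowing URI maps to the return book operation.
$url = 'http://localhost/rest/04/library/member.php/1/books/2';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'DELETE');
$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>

As with the borrow operation, all the information needed to identify the borrowing travels on the URI itself, so no payload is required.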

Implementing a Microsoft .NET Application using the Alfresco Web Services

Packt
17 Aug 2010
6 min read
For the first step, you will see how to set up the .NET project in the development environment. Then, when we take a look at the sample code, we will learn how to perform the following operations from your .NET application:

- How to authenticate users
- How to search contents
- How to manipulate contents
- How to manage child associations

Setting up the project

In order to execute the samples included with this article, you need to download and install the following software components on your Windows operating system:

- Microsoft .NET Framework 3.5
- Web Services Enhancements (WSE) 3.0 for Microsoft .NET
- SharpDevelop 3.2 IDE

The Microsoft .NET Framework 3.5 is the main framework used to compile the application, and you can download it using the following URL: http://www.microsoft.com/downloads/details.aspx?familyid=333325fd-ae52-4e35-b531-508d977d32a6&displaylang=en.

Before importing the code into the development environment, you need to download and install Web Services Enhancements (WSE) 3.0, which you can find at this address: http://www.microsoft.com/downloads/details.aspx?FamilyID=018a09fd-3a74-43c5-8ec1-8d789091255d.

You can find more information about the Microsoft .NET Framework on the official site at the following URL: http://www.microsoft.com/net/. From this page, you can access the latest news and the Developer Center, where you can find the official forum and the developer community.

SharpDevelop 3.2 IDE is an open source IDE for C# and VB.NET, and you can download it using the following URL: http://www.icsharpcode.net/OpenSource/SD/Download/#SharpDevelop3x.

Once you have installed all the mentioned software components, you can import the sample project into the SharpDevelop IDE in the following way:

1. Click on File | Open | Project/Solution.
2. Browse and select this file in the root folder of the samples: AwsSamples.sln.

Now you should see a similar project tree in your development environment:

More information about the SharpDevelop IDE can be found on the official site at the following address: http://www.icsharpcode.net/opensource/sd/. From this page, you can download different versions of the product; which SharpDevelop version you choose depends on the .NET version you would like to use. You can also visit the official forum to interact with the community of developers.

Also, notice that all the source code included with this article was implemented by extending an existing open source project named dotnet. The dotnet project is available in the Alfresco Forge community, and it is downloadable from the following address: http://forge.alfresco.com/projects/dotnet/.

Testing the .NET sample client

Once you have set up the .NET solution in SharpDevelop, as explained in the previous section, you can execute all the tests to verify that the client is working correctly. We have provided a batch file named build.bat to allow you to build and run all the integration tests. You can find this batch file in the root folder of the sample code. Notice that you need to use a different version of msbuild for each different version of the .NET Framework. If you want to compile using the .NET Framework 3.5, you need to set the following path in your environment:

set PATH=%PATH%;%WinDir%\Microsoft.NET\Framework\v3.5

Otherwise, you have to set .NET Framework 2.0 using the following path:

set PATH=%PATH%;%WinDir%\Microsoft.NET\Framework\v2.0.50727

We are going to assume that Alfresco is running correctly and that it is listening on host localhost and on port 8080.
Once executed, the build.bat program should start compiling and executing all the integration tests included in this article. After a few seconds have elapsed, you should see the following output on the command line:

.........
****************** Running tests ******************
NUnit version 2.5.5.10112
Copyright (C) 2002-2009 Charlie Poole.
Copyright (C) 2002-2004 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov.
Copyright (C) 2000-2002 Philip Craig.
All Rights Reserved.

Runtime Environment -
   OS Version: Microsoft Windows NT 5.1.2600 Service Pack 2
  CLR Version: 2.0.50727.3053 ( Net 2.0 )
ProcessModel: Default    DomainUsage: Single
Execution Runtime: net-2.0
............
Tests run: 12, Errors: 0, Failures: 0, Inconclusive: 0, Time: 14.170376 seconds
  Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0
********* Done *********

As you can see from the project tree, you have the following packages:

- Search
- Crud
- Association

The Search package shows you how to perform queries against the repository. The Crud package contains samples related to all the CRUD operations, showing you how to perform basic operations; namely, how to create/get/update/remove nodes in the repository. The Association package shows you how to create and remove association instances among nodes.

Searching the repository

Once you have authenticated a user, you can start to execute queries against the repository. In the following sample code, we will see how to perform a query using the RepositoryService of Alfresco:

RepositoryService repositoryService = WebServiceFactory.getRepositoryService();

Then we need to create a store in which we would like to search contents:

Store spacesStore = new Store(StoreEnum.workspace, "SpacesStore");

Now we need to create a Lucene query. In this sample, we want to search the Company Home space, which means that we have to execute the following query:

String luceneQuery = "PATH:\"/app:company_home\"";

In the next step, we need to use the query method available from the RepositoryService. In this way, we can execute the Lucene query and get all the results from the repository:

Query query = new Query(Constants.QUERY_LANG_LUCENE, luceneQuery);
QueryResult queryResult = repositoryService.query(spacesStore, query, false);

You can retrieve all the results from the queryResult object, iterating over the ResultSetRow objects in the following way:

ResultSet resultSet = queryResult.resultSet;
ResultSetRow[] results = resultSet.rows;

// your custom list
IList<CustomResultVO> customResultList = new List<CustomResultVO>();

// retrieve results from the resultSet
foreach (ResultSetRow resultRow in results)
{
    ResultSetRowNode nodeResult = resultRow.node;

    // create your custom value object
    CustomResultVO customResultVo = new CustomResultVO();
    customResultVo.Id = nodeResult.id;
    customResultVo.Type = nodeResult.type;

    // retrieve properties from the current node
    foreach (NamedValue namedValue in resultRow.columns)
    {
        if (Constants.PROP_NAME.Equals(namedValue.name))
        {
            customResultVo.Name = namedValue.value;
        }
        else if (Constants.PROP_DESCRIPTION.Equals(namedValue.name))
        {
            customResultVo.Description = namedValue.value;
        }
    }

    // add the current result to your custom list
    customResultList.Add(customResultVo);
}

In the last sample, we iterated over all the results and created a new custom list with our custom value object, CustomResultVO. More information about how to build Lucene queries can be found at this URL: http://wiki.alfresco.com/wiki/Search.

Performing operations

We can perform various operations on the repository.
They are documented as follows:

Authentication

For each operation, you need to authenticate users before performing the required operations on nodes. The class that provides the authentication feature is named AuthenticationUtils, and it allows you to invoke the startSession and endSession methods:

String username = "johndoe";
String password = "secret";
AuthenticationUtils.startSession(username, password);
try
{
}
finally
{
    AuthenticationUtils.endSession();
}

Remember that the startSession method requires the user credentials: the username as the first argument and the password as the second. Notice that the default endpoint address of the Alfresco instance is as follows:

http://localhost:8080/alfresco

If you need to change the endpoint address, you can use the WebServiceFactory class, invoking the setEndpointAddress method to set the new location of the Alfresco repository.

Structure the Content on your Plone Site

Packt
15 Oct 2009
6 min read
Real world information architecture tips

Based on what your users need and/or want to see, you need to structure your content within topics, or high-level containers, which are typically content-specific sections. As an example, we will take a look at http://plone.org. When visitors enter a Plone site, no matter how deep they go, the navigation tends to stay the same. The following screenshot shows a visitor in the Documentation section of the site, with the opportunity to drill down within this section for additional documentation topics:

By default, Plone has a portlet that shows navigation aids on the left-hand side of the browser, which helps visitors navigate within the subject matter. In this example, there are several subsections below Development.

Structuring your content

When planning your site, you must first decide how you want to structure your content. The structure can be worked out through brainstorming sessions with other people involved with your site, in order to come up with a structure that suits your business objectives. Investigating other sites that share your organization's model can be a good starting point towards developing your final solution.

To really understand how Plone can be an effective solution for your content delivery needs, we will take a look at how to implement Plone for a high school website. In this type of structure, you will see how some content is targeted at all users, while other content is tailored to specific users. We will use the following high-level topics for demonstration purposes:

- Home
- News
- Events
- Academics
- Sports
- Clubs
- PTO (Parent-Teacher Organization)
- Alumni

In order to create these sections, we will first create folders for them, into which you will add content. Each of the above sections will be visible in your top-level navigation. Within each top-level folder, we will also create subfolders to help you structure your content.

To create a folder, go to your homepage, select Add new..., and choose the Folder option from the drop-down list, as shown in the following screenshot:

Specify the Title and the optional Description. In this case, we will create a folder for the Academics section. We're going to just keep the defaults here; we will cover the Settings tab shortly. Click on Save, and then make sure that your folder has been published. Now take a look at the overall navigation structure:

There is now a new tab in your navigation bar, which represents a container for holding all of the content that will be part of the Academics section of the site. You will follow the same process to create the rest of the top-level tabs.

First, we will need to make a change to the default tab behavior in Plone. Specifically, we want to remove Users as a top-level navigation item. Removing it from the tab navigation does not mean that it no longer exists; we're just making sure that the items that are more important to this specific site are shown to the visitors and users. To remove Users from the navigation bar, click on the Users tab, and then select Edit. Once you are in Edit mode, go to the Settings section and select the Exclude from navigation checkbox. After saving your changes, you can see that the Users tab is no longer part of your navigation:

Using the same process for adding new folders, we'll add Sports, Clubs, and PTO.
We end up with the following:

Now that we have the top-level structure in place, we can focus on what needs to go within each topic. The process is similar, with the difference being that you need to be within the given topic before creating the next level of folders. When you create folders in the Home section, you have the ability to create top-level tabs. Creating folders within the other top-level folders allows you to be more specific for the given topic.

We will use the example of the Sports top-level tab for creating an additional folder/site structure. We will need to create the following sub-folders:

- Football
- Basketball
- Soccer
- Track and Field
- Lacrosse
- Baseball
- Softball

To do so, we must drill down into the Sports folder and add new folders within it. Once you have added these folders under the Sports section, navigation to the new folders is available on the leftmost side of your browser window:

Note that the navigation shows only the contents of the current folder. This can be adjusted via the Manage portlets link, which is available on the home page, below the left and right columns. This link is also accessible via http://www.mysite.com/@@manage-portlets, where www.mysite.com is the name of your Plone site. Simply set the Start Level to 0 and save your changes.

Now that the structure for the Sports folder is in place, let's take a look at how you can change the display order of the folders. If the football season is over, it may make sense to move this category to the bottom of the navigation. To change the order of the Football folder, go to the Contents view under Sports, then click in the Order column for the Football row. The row will turn yellow, and the cursor will change to a four-headed arrow, which indicates that the content object can be moved. Drag the row up or down in the list to the desired location. Now, when you click on the top level of Sports, the navigation listing appears in the new location that you have just defined:

Now, let's take the new folder structure created under the Sports section and create some more folders that are specific to each subtopic. Select a folder, and then go to the Contents tabbed page. In this example, we will create the following folders under the Soccer folder, which is under the Sports folder:

- Varsity
  - Boys
  - Girls
- Junior Varsity
  - Boys
  - Girls
- Boosters

As identified in the preceding screenshot, the breadcrumb navigation shows the progression through the site. You can also see how the navigation within the Sports section can grow to fit specific content. By understanding these concepts that apply to creating folders for your navigation structure, you will be well on your way to having consistent navigation throughout your site.

Using Templates to Display Channel Content in ExpressionEngine

Packt
23 Sep 2010
7 min read
Creating templates

To start with, build two templates: one for the CSS stylesheet and one that contains the HTML that defines the page structure and brings in your channel content. Since the CSS template will be used all over your website, it makes sense to put it in a separate template group called includes (which you will create). For the page itself, use the index template in the site template group.

In the control panel, click on Design | Templates | Template Manager from the top menu. Then select the New Group button, located above the list of existing template groups. Call the new template group includes. Do not duplicate a group and do not make the index template your site's home page. Click Submit.

Back on the Template Management screen, make sure the includes template group is selected, and then click on New Template. Call the new template site_css and select a Template Type of CSS. Leave the radio button checked to create an empty template and click Create and Edit.

From the Ed & Eg site that you downloaded and extracted earlier, open style.css in a text editor such as Notepad. Copy and paste the entire file into the includes/site_css template and click on Update.

Within the stylesheet, there are several references to images in the image directory. For the style to render properly, you will also need to upload all the images in the /images sub-directory (including money.jpg) to the /images sub-directory on your website. After uploading all the images, you will also need to update the paths in the stylesheet to point to this sub-directory. Within the site_css template, wherever you see url(images/imgxx.jpg), change it so that it reads url(http://localhost/images/imgxx.jpg), replacing http://localhost/ with your website domain if you are not using a localhost environment. There should be 10 replacements in total (one for each image). When you are done, click on Update and then Finished.

Next, on the Template Management screen, highlight the site template group and then select the index template. If you do not have such a template group and template, then go ahead and create them now. Delete everything currently in the template. Open index.html of the static Ed & Eg website in a text editor such as Notepad. Copy and paste the entire source code into the template.

Since the stylesheet is no longer located in style.css, this path needs to be updated. To do this, use the ExpressionEngine stylesheet variable to indicate the includes template group, followed by the site_css template that the CSS stylesheet is in. Change the line:

<link href="style.css" rel="stylesheet" type="text/css" media="screen" />

to read:

<link href="{stylesheet=includes/site_css}" rel="stylesheet" type="text/css" media="screen" />

Finally, click Update to save the template and browse to http://localhost/site to view the output of the template as it stands right now.
It should look identical to the static index.html page (except that in ExpressionEngine, none of the links will work, because you have only created one page so far). If you did not hide your index.php file as part of installing ExpressionEngine, remember that your ExpressionEngine URLs will include the additional index.php (for example, http://localhost/site will become http://localhost/index.php/site for you).

Did you spot the deliberate mistake? Although, at this point, everything looks good, the content being displayed at this URL is not from your channel at all, but is what you copied and pasted from the index.html file into your site/index template. The next step is to replace this static content with the content from the website channel.

Pointing your template to your channel

Pointing your template to use your channel content is the step that links together everything you have done so far (creating custom fields, creating the channel, publishing content to the channel, and creating templates).

In the control panel, click on Design | Templates | Template Manager from the top menu. Then select the site template group and click to edit the index template. Delete all the code from after the <div id="content"> tag to the closing </div> tag (leave these two tags in place though).

Underneath the <div id="content"> line, add the following. This code says that you would like to display content from the website channel (but only one entry, and only the entry with a URL title of welcome_to_our_website).

{exp:channel:entries channel="website" limit="1" url_title="welcome_to_our_website"}

Next, add the following line. This says that you no longer want content from the website channel.

{/exp:channel:entries}

In between the opening {exp:channel:entries} and closing {/exp:channel:entries} tags, add the following code. This displays the title from your entry as an <h1> header.

<h1>{title}</h1>

Underneath the title, add the following code to place the image from your channel entry onto the page. The {if website_image} statement means that if there is no image defined in the channel entry, the img code is not displayed at all.

{if website_image}<img src="{website_image}" class="left" />{/if}

Finally, add the following tag to display the content of your content field:

{website_content}

The content section should now look like:

<div id="content">
  {exp:channel:entries channel="website" limit="1" url_title="welcome_to_our_website"}
  <h1>{title}</h1>
  {if website_image}<img src="{website_image}" class="left" />{/if}
  {website_content}
  {/exp:channel:entries}
</div> <!-- end #content -->

Finally, update the page title to reflect the entry title. To do this, replace the line <title>Ed &amp; Eg Financial Advisors</title> with the following code. Although it looks complicated, it's actually the same as the {exp:channel:entries} code in the steps above, except that all you are displaying is the {title} field and not any of the other custom fields you created. By default, the {exp:channel:entries} tag requests a lot of information from your database, which can increase the amount of time it takes to display your page. Since you are only displaying one field, the disable parameter tells ExpressionEngine not to request other information you know you do not need (including the data in your custom fields).
For more information on this parameter, you can visit http://expressionengine.com/user_guide/modules/channel/parameters.html#par_disable

<title>{exp:channel:entries channel="website" limit="1" url_title="welcome_to_our_website" disable="categories|category_fields|custom_fields|member_data|pagination"}{title}{/exp:channel:entries} - Ed &amp; Eg Financial Advisors</title>

Click Update to save your changes and then browse to http://localhost/site to view your updated website. If all is well, you should not notice much difference at all, but behind the scenes, your content is now coming from your channel entry rather than being part of your template.

Creating a quiz in Moodle

Packt
29 Mar 2011
16 min read
Getting started with Moodle tests

To start with, we need to select a topic or theme for our test. We are going to choose general science, since that subject matter makes it easy to incorporate each of the item types we have seen previously. Now that we have an idea of what our topic is going to be, we will get started on the creation of the test. We will be creating all new questions for this test, which will give us the added benefit of a bit more practice in item creation. So, let's get started and work on making our first real test!

Let's open our Moodle course, go to the Activity drop-down, and select Create a new Quiz. Once it has been selected, we will be taken to the Quiz creation page, where we'll be looking at the General section.

The General section

Here we need to give the test a name that describes what the test is going to cover. Let's call it 'General Science Final Exam', as that describes what we will be doing in the test. The introduction is also important; this is a test students will take, and an effective description of what they will be doing is an important point for them. It helps get their minds thinking about the topic at hand, which can help them prepare, and a person who is prepared can usually perform better. For our introduction, we will write the following: 'This test will see how much you learned in our science class this term. The test will cover all the topics we have studied, including geology, chemistry, biology, and physics. In this test, there are a variety of question types (True/False, Matching, and others). Please look carefully at the sample questions before you move on. If you have any questions during the test, raise your hand. You will have "x" attempts at the quiz.'

We have now given the test an effective name and we have given the students a description of what the test will cover. This will be shown in the Info tab to all the students before they take the test, and, if we want, in the days running up to the test. That's all we need to do in this section.

Timing

In this section, we need to make some decisions about when we are going to give the test to the students. We will also need to decide how long we will give the students to complete the test. These are important decisions, and we need to make sure we give our students enough time to complete the test. The default Timing section is shown in the next screenshot:

We probably know when our final exam will be. So, when we are creating the test, we can set the date that the test becomes available to the students and the date it stops being accessible to them. Because this is our final exam, we only want it to be available for one day, for a specified time period. We will start by clicking on the Disable checkboxes next to the Open the Quiz and Close the Quiz dates. This step will enable the date/time drop-down menus and allow us to set them for the test. For us, our test will start on March 20, 2010 at 16:55, and it will end the same day, one hour later. So we will change the appropriate menus to reflect our needs. If these dates are not set, a student in the course will be able to take the quiz any time after you finish creating it.

We will need to give the students time to get into class, settle down, and have their computers ready. However, we also need to make sure the students finish the test in our class, so we have decided to set a time limit of 45 minutes.
This means that the test will be open for one hour, and within that one-hour time frame, once students start the test, they will have 45 minutes to finish it. To set this, we need to click on the Enable checkbox next to the Time Limit (minutes) textbox. Clicking on this will enable the textbox, and in it we will enter 45. This value will limit the quiz time to 45 minutes and will show a floating, count-down timer in the test, causing it to auto-submit 45 minutes after it is started. It is good to note that many students get annoyed by the floating timer and its placement on the screen. The other alternative is to have the test proctor have the students submit the quiz at a specified time.

Now, we have decided to give a 45-minute time limit on the test, but without any open-ended questions, the test is highly unlikely to take that long. There is also going to be a big difference in the speed at which different students work. The test proctor should explain to the students how much time they should spend on each question and on reviewing their answers.

Under the Time Limit (minutes) textbox, we see the Time delay between first and second attempt and Time delay between later attempts menus. If we are going to offer the test more than once, we can set these, which would force the students to wait until they could try again. The time delays range from 30 minutes to 7 days, and the None setting does not require any waiting between attempts on the quiz. We are going to leave these set to None, because this is a final exam and we are only giving it once. Once all the information has been entered into the Timing section, this dialog box is what we have, as shown in the next screenshot:

Display

Here, we will make some decisions about the way the quiz will look to the students. We will be dividing questions over several pages, which we will use to create divisions in the test. We will also be making decisions about the Shuffle Questions and Shuffle within Questions settings here.

Firstly, as the test creators, we should already have a rough idea of how many questions we are going to have on the test. Looking at the Questions Per Page drop-down menu, we have the option of 1 to 50 questions per page. We have decided to display six questions per page on the test. Actually, there will be only five questions the students answer, but we also want to include a description and a sample question for the students to see how the questions look and how to answer them; thus we will have six on each page.

We have the option to shuffle questions within pages and to shuffle within questions. By default, Shuffle Questions is set to No and Shuffle within Questions is set to Yes. We have decided that we want to have our questions shuffled. But wait, we can't, because we are using Description questions to give examples, and if we chose shuffle, these examples would not be where they need to be. So, we will leave the Shuffle Questions setting at the default No. However, we do want to shuffle the responses within the questions, which will give each student a slightly different test using the same questions and answers. When the display settings are finished, we can see the output shown in the next screenshot:

Attempts

In this section, we will set the number of attempts possible and how further attempts are dealt with. We will also make a decision about Adaptive Mode. Looking at the Attempts allowed drop-down menu, we have the option to set the number from 1 to 10, or we can set it to Unlimited attempts.
For our test, we have already decided to allow 1 attempt, so we will select 1 from the drop-down menu. We have the option of setting the Each attempt builds on the last drop-down menu to Yes or No. This feature does nothing for us now, because we have set the test to allow only a single attempt. If we had decided to allow multiple attempts, a Yes setting would have shown the test taker all of his or her previous answers, as if the student were retaking the test, as well as indicating whether each answer was correct or not. If we were giving our students multiple attempts on the test but did not want them to see their previous answers, we would set this to No. We are also going to set Adaptive mode to No. We do not want our students to be able to immediately see or correct their responses during the test; we want the students to review their answers before submitting anything. However, if we did want the students to check their answers and correct any mistakes during the test, we would set Attempts allowed to a number above 1 and Adaptive mode to Yes, which would give us a small Submit button next to each question, where the students could check and correct any mistakes. If multiple attempts are not allowed, the Submit button is just that, a button to submit your answer. Here is what the Attempts section looks like after we have made our choices:

Grades

In this section, we will set the way Moodle will score the student. We see three choices in this section: Grading method, Apply penalties, and Decimal digits in grades; however, because we have allowed only a single attempt, two of these options will not be used. Grading method allows us to determine which score the student receives after multiple attempts. We have four options here: Highest grade, Average grade, First attempt, and Last attempt. Highest grade uses the highest grade achieved in any attempt. Average grade averages the grades from all attempts. First attempt uses the grade from the first attempt, and Last attempt uses the grade from the final attempt. Since we are only allowing one attempt on our test, this setting has no effect, and we will leave it at its default, Highest grade, because any option would give the same result. Apply penalties is similar to Grading method in that it has no effect here, because we have turned off Adaptive mode. If we had set Adaptive mode to Yes, then this feature would give us the option of applying the penalties that are set in the individual question setup pages. If we were using Adaptive mode and this feature were set to No, then no penalties would be applied for incorrect responses. If it were set to Yes, the penalty amount specified in the question would be subtracted from the points available for that question for each incorrect response. However, our test is not set to Adaptive mode, so we will leave this at the default setting, Yes. It is important to note here that no matter how often a student is penalized for incorrect responses, the grade will never go below zero. The Decimal digits in grades setting shows the final grade the student receives with the selected number of decimal places. There are four choices available: 0, 1, 2, and 3. If, for example, the number is set to 1, the student will receive a score calculated to one decimal place (for example, 8.5 out of 10), and the same follows for 2 and 3. If the number is set to 0, the final score will be rounded to a whole number.
We will set our Decimal digits in grades to 0. After we have finished, the Grades section appears as shown in the next screenshot:

Review options

This section is where we set when and what our students will see when they look back at the test. There are three categories: Immediately after the attempt; Later, while the quiz is still open; and After the quiz is closed. The first category, Immediately after the attempt, will allow students to see whatever feedback we have selected to display immediately after they click on the Submit all and finish button at the end of the test (or Submit, in the case of Adaptive mode). The second category, Later, while the quiz is still open, allows students to view the selected review options any time after the test is finished, that is, when no more attempts are left, but before the test closes. The After the quiz is closed setting will allow the student to see the review options after the test closes, meaning that students are no longer able to access the test because a close date was set. The After the quiz is closed option is only useful if a close time has been set for the test; otherwise, the review never happens because the test never closes. Each of these three categories contains the same review options: Responses, Answers, Feedback, General feedback, Scores, and Overall feedback. Here is what these options do:

Responses shows the student's response to each question and whether it was correct or incorrect.

Answers shows the correct response to each question.

Feedback is the feedback you enter based on the answer the student gives. This is different from the general quiz feedback they may receive.

General feedback is the comment all students receive for a question, regardless of their answers.

Scores are the scores the student received on the questions.

Overall feedback is the comment based on the overall grade on the test.

We want to give our students all of this information so they can look it over and find out where they made their mistakes, but we don't want someone who finishes early to have access to all the correct answers. So, we are going to withhold all feedback on the test until after it closes. That way, there is no possibility of students seeing the answers while other students might still be taking the test. To remove such feedback, we simply deselect all the options in the categories we don't want. Here is what we have when we are finished:

Regardless of the options and categories we select in the Review options, students will always be able to see their overall scores. Looking at our settings, the only thing a student will be able to view immediately after the test is complete is the score. Only after the test closes will the student be able to see the full range of review material we are providing. If we had allowed multiple attempts, we would want different settings. So, instead of After the quiz is closed, we would set our Review options under Immediately after the attempt, because this would let the student know where he or she had problems and which areas of the quiz need to be focused on. One final point here: even a single checkbox in any of the categories will allow the student to open and view the test, showing the selected review information. This may or may not be what you want. Be careful to ensure that you have selected only the options and categories you want to use.
Security

This section is where we can increase quiz security, but it is important to note that these settings will not eliminate the ability of tech-savvy students to cheat. What this section does is provide a few options that make cheating a bit more difficult. We have three options in this section: Browser security, Require password, and Require network address. The Browser security drop-down has two options: None and Full screen popup with some JavaScript security. The None option is the default setting and is appropriate for most quizzes. It makes no changes to browser security and is the setting you will most likely want for in-class quizzes, review quizzes, and the like. The fullscreen option will open a fullscreen browser window with limited navigation options, which restricts students' opportunities to tamper with the test. In addition to limiting the navigation options available, it also limits the keyboard and mouse commands available. This option is more appropriate for high-stakes tests and shouldn't be used unless there is a reason. It also requires that JavaScript be enabled. Browser security is more a safety measure against students pressing the wrong button than a way of preventing cheating, but it can help reduce it.

Require password does exactly what you would think: it requires the students to enter a password before taking the test. To keep all your material secure, I recommend using a password for every quiz you create. This setting is especially important if you are offering different versions of the quiz to different classes, or different tests in the same class, and you want to make sure that only those who should be accessing the quiz can do so. There is also an Unmask checkbox next to the password textbox. This option will show you the password, just in case you forget!

Finally, we have the Require network address option, which will only allow access to the test from certain IP addresses. This setting can be useful to ensure that only students in the lab or classroom are taking the test. It allows you to enter complete IP addresses (for example, 192.168.10.5), which require that specific address; partial IP addresses (for example, 192.168), which accept any address that begins with that prefix; or Classless Inter-Domain Routing (CIDR) notation (for example, 192.168.10.0/24), which allows only a specific subnet (a short illustrative sketch of how such a subnet check works follows this section). You may want to consult your network administrator if you plan to use this security option. By combining these settings, we can attempt to cut down on cheating and improper access to our test. In our case, we are only going to use the fullscreen option. We will be giving the test in our classroom, using our computers, so there is no need to turn on the Require network address option or require a password. When we have finished, the Security section appears as shown in the next screenshot:
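As a side note on the CIDR notation mentioned above, the following small C sketch shows how a subnet check of this kind works in principle. It is purely illustrative, with hypothetical function names, and is not Moodle's actual implementation: an address matches a rule such as 192.168.10.0/24 when its first 24 bits equal those of the network address.

#include <stdio.h>
#include <stdint.h>

/* Pack four octets into a single 32-bit value, e.g. 192.168.10.5 */
static uint32_t packIp(int a, int b, int c, int d) {
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
           ((uint32_t)c << 8) | (uint32_t)d;
}

/* Returns 1 if ip falls inside the subnet given in CIDR form
   (network address plus prefix length), 0 otherwise. */
static int inCidrRange(uint32_t ip, uint32_t network, int prefixLen) {
    uint32_t mask = (prefixLen == 0) ? 0 : 0xFFFFFFFFu << (32 - prefixLen);
    return (ip & mask) == (network & mask);
}

int main(void) {
    uint32_t student = packIp(192, 168, 10, 37);
    uint32_t labNet  = packIp(192, 168, 10, 0);

    /* 192.168.10.0/24 covers 192.168.10.0 through 192.168.10.255 */
    printf("allowed: %d\n", inCidrRange(student, labNet, 24));  /* prints 1 */

    uint32_t outsider = packIp(192, 168, 11, 5);
    printf("allowed: %d\n", inCidrRange(outsider, labNet, 24)); /* prints 0 */
    return 0;
}

A partial address such as 192.168 behaves like a /16 rule in this scheme: only the first 16 bits are compared.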

Microsoft Azure – Developing Web API for Mobile Apps

Packt
03 Jun 2015
9 min read
Azure Websites is an excellent platform for deploying and managing a Web API; however, Microsoft Azure provides another alternative in the form of Azure Mobile Services, which targets mobile application developers. In this article by Nikhil Sachdeva, coauthor of the book Building Web Services with Microsoft Azure, we delve into the capabilities of Azure Mobile Services and how it provides a quick and easy development ecosystem for developing Web APIs that support mobile apps.

Creating a Web API using Mobile Services

In this section, we will create a Mobile Services-enabled Web API using Visual Studio 2013. For our fictitious scenario, we will create an Uber-like service, but for medical emergencies. In the case of a medical emergency, users will have the option to send a request using their mobile device. Additionally, third-party applications and services can integrate with the Web API to display doctor availability. All requests sent to the Web API will follow this process flow:

The request is persisted to a data store.

An algorithm finds a doctor that matches the incoming request based on availability and proximity.

Push notifications are sent to update the physician and the patient.

Creating the project

Mobile Services provides two options to create a project:

Using the Management portal, we can create a new Mobile Service and download a preassembled package that contains the Web API as well as the targeted mobile platform project.

Using Visual Studio templates.

The Management portal approach is easier and gives us a jumpstart by creating and configuring the project. However, for the scope of this article, we will use the Visual Studio template approach. For more information on creating a Mobile Services Web API using the Azure Management Portal, please refer to http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-windows-store-dotnet-get-started/.

Azure Mobile Services provides a Visual Studio 2013 template to create a .NET Web API; we will use this template for our scenario. Note that the Azure Mobile Services template is only available in Visual Studio 2013 Update 2 and onward. Creating a Mobile Service in Visual Studio 2013 requires the following steps:

Create a new Azure Mobile Service project and assign it a Name, Location, and Solution. Click OK.

In the next tab, we see the familiar ASP.NET project type dialog. However, we notice a few differences from the traditional ASP.NET dialog: the Web API option is enabled by default and is the only choice available; the Authentication tab is disabled by default; the Test project option is disabled; and the Host in the cloud option automatically suggests Mobile Services and is currently the only choice. Select the default settings and click on OK.

Visual Studio 2013 prompts developers to enter their Azure credentials if they are not already logged in. For more information on Azure tools for Visual Studio, please visit https://msdn.microsoft.com/en-us/library/azure/ee405484.aspx.

Since we are building a new Mobile Service, the next screen gathers information about how to configure the service. We can specify existing Azure resources in our subscription or create new ones from within Visual Studio. Select the appropriate options and click on Create. The options are described here:
Subscription: The name of the Azure subscription where the service will be deployed. Select from the drop-down if multiple subscriptions are available.

Name: The name of the Mobile Services deployment. This will eventually become the root DNS URL for the mobile service unless a custom domain is specified (for example, contoso.azure-mobile.net).

Runtime: Allows selection of the runtime. Note that, as of writing this book, only the .NET framework was supported in Visual Studio, so this option is prepopulated and disabled.

Region: Select the Azure data center where the Web API will be deployed. As of writing this book, Mobile Services is available in the following regions: West US, East US, North Europe, East Asia, and West Japan. For details on the latest regional availability, please refer to http://azure.microsoft.com/en-us/regions/#services.

Database: By default, a SQL Azure database gets associated with every Mobile Services deployment. It comes in handy if SQL is being used as the data store. However, even in scenarios where a different data store such as table storage or MongoDB may be used, we still create this SQL database. We can select a free 20 MB SQL database or an existing paid standard SQL database. For more information about SQL tiers, please visit http://azure.microsoft.com/en-us/pricing/details/sql-database.

Server user name: Provide the server user name for the Azure SQL database.

Server password: Provide a password for the Azure SQL database.

This process creates the required entities in the configured Azure subscription. Once completed, we have a new Web API project in the Visual Studio solution. The following screenshot is the representation of a new Mobile Service project:

When we create a Mobile Service Web API project, the following NuGet packages are referenced in addition to the default ASP.NET Web API NuGet packages:

Windows Azure Mobile Services Backend: Enables developers to build a scalable and secure .NET mobile backend hosted in Microsoft Azure. We can also incorporate structured storage, user authentication, and push notifications. Assembly: Microsoft.WindowsAzure.Mobile.Service.

Microsoft Azure Mobile Services .NET Backend Tables: Contains the common infrastructure needed when exposing structured storage as part of the .NET mobile backend hosted in Microsoft Azure. Assembly: Microsoft.WindowsAzure.Mobile.Service.Tables.

Microsoft Azure Mobile Services .NET Backend Entity Framework Extension: Contains all the types necessary to surface structured storage (using Entity Framework) as part of the .NET mobile backend hosted in Microsoft Azure. Assembly: Microsoft.WindowsAzure.Mobile.Service.Entity.

Additionally, the following third-party packages are installed:

EntityFramework: Since Mobile Services provides a default SQL database, it leverages Entity Framework to provide an abstraction over the data entities.

AutoMapper: A convention-based object-to-object mapper. It is used to map legacy custom entities to DTO objects in Mobile Services.

OWIN Server and related assemblies: Mobile Services uses OWIN as the default hosting mechanism. The current template also adds the Microsoft OWIN Katana packages to run the solution in IIS, and OWIN security packages for Google, Azure AD, Twitter, and Facebook.

Autofac: A popular Inversion of Control (IoC) framework.

Azure Service Bus: Microsoft Azure Service Bus provides the Notification Hub functionality.

We now have our Mobile Services Web API project created.
The default project added by Visual Studio is not an empty project but a sample implementation of a Mobile Services-enabled Web API; in fact, a controller and an Entity Data Model are already defined in the project. If we hit F5 now, we can see a running sample in the local dev environment.

Note that Mobile Services modifies the WebApiConfig file under the App_Start folder to accommodate some initialization and configuration changes:

// Initialize the Mobile Services hosting pipeline with the default options.
ConfigOptions options = new ConfigOptions();
HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));

In the preceding code, the ServiceConfig.Initialize method defined in the Microsoft.WindowsAzure.Mobile.Service assembly is called to load the hosting provider for our mobile service. It loads all assemblies from the current application domain and searches for types with HostConfigProviderAttribute. If it finds one, the custom host provider is loaded; otherwise, the default host provider is used. Let's extend the project to develop our scenario.

Defining the data model

We now create the required entities and data model. Note that while the entities have been kept simple for this article, in a real-world application it is recommended to define a data architecture before creating any data entities. For our scenario, we create two entities that inherit from EntityData. These are described here.

Record

Record is an entity that represents the data for a medical emergency. We use the Record entity when invoking CRUD operations through our controller. We also use this entity to update the doctor allocation and the status of the request, as shown:

namespace Contoso.Hospital.Entities
{
    /// <summary>
    /// Emergency record for the hospital
    /// </summary>
    public class Record : EntityData
    {
        public string PatientId { get; set; }
        public string InsuranceId { get; set; }
        public string DoctorId { get; set; }
        public string Emergency { get; set; }
        public string Description { get; set; }
        public string Location { get; set; }
        public string Status { get; set; }
    }
}

Doctor

The Doctor entity represents the registered practitioners in the area; the service will search for the availability of a doctor based on the properties of this entity. We will also assign the primary DoctorId to the Record type when a doctor is assigned to an emergency. The schema for the Doctor entity is as follows:

namespace Contoso.Hospital.Entities
{
    /// <summary>
    /// A registered practitioner that can be matched to an emergency
    /// </summary>
    public class Doctor : EntityData
    {
        public string Speciality { get; set; }
        public string Location { get; set; }
        public bool Availability { get; set; }
    }
}

Summary

In this article, we looked at a solution for developing a Web API that targets mobile developers.

Resources for Article:

Further resources on this subject:

Security in Microsoft Azure [article]

Azure Storage [article]

High Availability, Protection, and Recovery using Microsoft Azure [article]

Introduction to Parallel Programming and CUDA with Sample Code

Packt
16 Sep 2010
3 min read
To give an example, let's say we have an array that contains thousands of floating-point values, and each value needs to be run through a lengthy algorithm. Instead of running each value through the algorithm consecutively (that is, one at a time), parallelism allows multiple values to be processed simultaneously (that is, running many values through the algorithm at the same time), reducing the overall processing time and producing fast and accurate results.

There are some restrictions on using parallelism, and not every program can be parallelized. For instance, let's say we have the same program as before, but this time, as we process a value, we want to check it against all the previously calculated values in the array before moving to the next one. In the sequential version, we can confidently say that all previous values in the array have been processed and are available for the check. If we tried to do this in parallel, we could get incorrect results, because multiple values are calculated at the same time and some may be ready for checking while others are not. Extra checks and steps are needed to prevent these kinds of concurrency issues; however, the results can still prove to be worth the extra steps.

One of the major breakthroughs in parallel programming technology today goes beyond the scope of multi-core CPUs. Although multi-core CPUs offer more power and potential than single-core units, another common computer component, the GPU, offers even more, and NVIDIA's flagship platform, CUDA, makes this technology available to all developers easily and for free. CUDA was developed by NVIDIA to provide simple access to GPGPU (General-Purpose computation on Graphics Processing Units) and parallel computing on its own GPUs. The logic behind the idea is that GPUs have much more processing power than CPUs and contain numerous cores that operate in parallel to run intensive graphics operations. By allowing developers to utilize this power for their own projects, CUDA can provide fast solutions to some heavy, time-consuming programs, specifically those that run the same process repeatedly and independently of other processes.

The learning curve is not very steep for most developers. CUDA makes GPGPU easily usable by adding functionality to the standard C and C++ programming languages. This allows for fast adoption by almost any programmer and helps with cross-platform integration. To get started with CUDA, you will need a recent NVIDIA GPU (GeForce 8 series and beyond; you can also check the NVIDIA website to see which GPUs are CUDA-enabled). CUDA works on Windows, Mac OS X, and certain Linux distributions. You will need to download and install the developer drivers, the CUDA toolkit, and the CUDA SDK from the NVIDIA website. NVIDIA provides an installation guide on its website with more details about the installation process, as well as a method of checking the installation to see whether it is working.
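To make the opening example concrete, here is a minimal sketch of that array-processing pattern in CUDA C. Note that lengthyAlgorithm is a hypothetical stand-in for whatever per-value computation your program performs; the surrounding allocation, copy, and launch code is the standard CUDA workflow.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical stand-in for the "lengthy algorithm" from the example above.
__device__ float lengthyAlgorithm(float x) {
    for (int i = 0; i < 1000; ++i) {
        x = x * 1.000001f + 0.5f;
    }
    return x;
}

// Each thread processes exactly one array element, so thousands of
// values run through the algorithm simultaneously.
__global__ void processArray(const float *in, float *out, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {  // guard: the last block may have spare threads
        out[idx] = lengthyAlgorithm(in[idx]);
    }
}

int main() {
    const int n = 100000;
    const size_t bytes = n * sizeof(float);

    // Host-side input and output buffers.
    float *hIn = (float *)malloc(bytes);
    float *hOut = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) hIn[i] = (float)i;

    // Device-side buffers, plus the copy to the GPU.
    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    processArray<<<blocks, threads>>>(dIn, dOut, n);

    // Copy the results back (this also waits for the kernel to finish).
    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f, out[%d] = %f\n", hOut[0], n - 1, hOut[n - 1]);

    cudaFree(dIn); cudaFree(dOut);
    free(hIn); free(hOut);
    return 0;
}

Because every thread writes only its own out[idx] and never reads another thread's result, there are no ordering problems here. The dependent variant described earlier, where each value is checked against all previously processed ones, is exactly the kind of program that would need extra synchronization before it could be parallelized safely.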