
How-To Tutorials - Web Development

1802 Articles

Moodle: History Teaching using Chats, Books and Plugins

Packt
29 Jun 2011
4 min read
The Chat Module

Students naturally gravitate towards the Chat module in Moodle. It is one of the modules that they effortlessly use whilst working on another task. I often find that they have logged in and are discussing work-related tasks in a way that enables them to move forward on a particular task.

Another use for the Chat module is to conduct a discussion outside the timetabled classroom lesson, when students know that you are available to help them with issues. This is especially relevant to students who embark on study leave in preparation for examinations, which can be a lonely and stressful period. Knowing that they can log in to a chat that has been planned in advance means that they can prepare the issues they wish to discuss about their workload and find out how their peers are tackling the same issues. The teacher can ensure that the chat stays on message and provide useful input at the same time.

Setting up a Chatroom

We want to set up a chat with students who are on holiday but have some examination preparation to do for a lesson that will take place straight after their return to school. Ideally, we would have informed the students before they started their holiday that this session would be available to anyone who wished to take part.

1. Log in to the Year 7 History course and turn on editing.
2. In the Introduction section, click the Add an activity dropdown.
3. Select Chat.
4. Enter an appropriate name for the chat.
5. Enter some relevant information in the Introduction text.
6. Select the date and time for the chat to begin.
7. Beside Repeat sessions, select No repeats – publish the specified time only.
8. Leave the other elements at their default settings.
9. Click Save changes.

The following screenshot is the result of clicking Add an activity from the drop-down menu:

If we wanted the chat to take place at the same time each day or each week, we could select the appropriate option from the Repeat sessions dropdown. The remaining options make it possible for students to go back and view sessions that they have taken part in.

Entering the chatroom

When a student or teacher logs in to the course for the appointed chat, they will see the chat symbol in the Introduction section. Clicking on the symbol lets them enter the chatroom via a simple chat window, or via a more accessible version where checking a box ensures that only new messages appear on the screen, as shown in the following screenshot:

As soon as another student or teacher has entered the chatroom, a chat can begin: users type a message and await a response. The Chat module is a useful way for students to collaborate with each other, and with their teacher if they need to. It comes into its own when students are logging in to discuss how to make progress with their collaborative wiki story about a murder in the monastery, or when students preparing for an examination share tips and advice to help each other through the experience. Collaboration is the key to effective use of the Chat module, and teachers need not fear its potential for time-wasting if this point is emphasized in the activities that students are working on.

Plugins

A brief visit to www.moodle.org and a search for 'plugins' reveals an extensive list of modules that are available for use with Moodle but stand outside the standard installation. If you have used a blogging tool such as WordPress, you will be familiar with the concept of plugins.
Over the last few years, developers have built up a library of plugins which can be used to enhance your Moodle experience. Every teacher has different ways of doing things, and it is well worth exploring the plugins database and related forums to find out what other teachers are using and how they are using it. There is, for example, a plugin for writing individual learning plans for students, and another called Quickmail which lets you send an email to everyone on your course even more quickly than the conventional way.

Installing plugins

Plugins need to be installed, and administrator rights are required to do so. The Book module, for example, requires a zip file to be downloaded from the plugins database onto your computer; the files then need to be extracted to a folder inside the mod folder of your Moodle software directory. Once the plugin is in the correct folder, the administrator then needs to run the installation. Installation has been successful if you are able to log in to the course and see the Book module as an option in the Add a resource dropdown.
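As a rough illustration, installing the Book module on a typical Linux server might look like the following. This is a minimal sketch: the Moodle path and the zip filename are assumptions, not taken from the article, so substitute your own.

```bash
# A minimal sketch, assuming Moodle lives in /var/www/moodle and the
# Book module zip was downloaded from the moodle.org plugins database.
cd /var/www/moodle/mod
unzip ~/downloads/book.zip     # extracts the module files into mod/book/
# Finally, log in to Moodle as an administrator and open the site
# administration Notifications page to run the installation step.
```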


Creating, Customizing, and Assigning Portlets Automatically for Plone 3.3

Packt
12 May 2010
5 min read
(For more resources on Plone, see here.)

Introduction

One of the major changes from Plone 2.5 to Plone 3.0 was the complete refactoring of its portlets engine. In Plone 2.5, left and right side portlets were managed from the Zope Management Interface (ZMI) by setting two special properties: left_slots and right_slots. This was not so hard, but it was cumbersome. Any TALES path expression—typically ZPT macros—could be manually inserted into these two properties and would be displayed as a bit of HTML to the final user. The major drawback of the old-style approach for site managers was not just the uncomfortable way of choosing which portlets to display, but the lack of any configuration options for them. For example, if we wanted to present the latest published news items, we had no means of saying how many of them to show, unless the portlet itself was intelligent enough to get that value from some other contextual property. And again, we were still stuck in the ZMI.

Fortunately, this has changed enormously in Plone 3.x:

- Portlets can now be configured, and their settings are maintained in the ZODB.
- Portlets are now managed via a user-friendly, Plone-like interface. Just click on the Manage portlets link below each of the portlet columns (see the following screenshot).

In the above screen, if we click on the News portlet link, we are presented with a special configuration form to choose the Number of items to display and the Workflow state(s) we want to consider when showing news items. If you have read previous chapters, you might be correctly guessing that Zope 3 components (mainly zope.formlib) are behind the portlet configuration forms.

In the next sections, we'll look again at the customer's requirements:

- Advertisement banners will be located in several areas of every page
- Advertisement banners may vary according to the section of the website

Creating a portlet package

Once again, paster comes to the rescue. As in Creating a product package structure and Creating an Archetypes product with paster, we will use the paster command here to create all the necessary boilerplate (and even more) to get a fully working portlet.

Getting ready

As we are still at the development stage, we should run the following commands in our buildout's src folder:

cd ./src

How to do it...

1. Run the paster command: We are going to create a new egg called pox.portlet.banner. The pox prefix is from PloneOpenX (the website we are working on) and it will be the namespace of our product. Portlet eggs usually have a nested portlet namespace, as in plone.portlet.collection or plone.portlet.static. We have chosen the pox main namespace as in the previous eggs we have created, and the banner suffix corresponds to the portlet name. For more information about egg and package names read http://www.martinaspeli.net/articles/the-naming-ofthings-package-names-and-namespaces. If you want to add portlets to an existing package instead of creating a new one, the steps covered in this chapter should tell you all you need to know (the paster addcontent portlet local command will be of great help).

In your src folder, run the following command:

paster create -t plone3_portlet

This paster command creates a product using the plone3_portlet template. When run, it will output some informative text, and then a short wizard will be started to select options for the package:

| Option | Value |
| --- | --- |
| Enter project name | pox.portlet.banner |
| Expert Mode? | easy |
| Version | 1.0 |
| Description | Portlet to show banners |
| Portlet Name | Banner portlet |
| Portlet Type | BannerPortlet |
After selecting the last option, you'll get an output like this (a little longer, actually):

Creating directory ./pox.portlet.banner
...
Recursing into +namespace_package+
Recursing into +namespace_package2+
Recursing into +package+
Recursing into profiles
Creating ./pox.portlet.banner/pox/portlet/banner/profiles/
Recursing into default
Creating ./pox.portlet.banner/pox/portlet/banner/profiles/default/
Copying metadata.xml_tmpl to ./pox.portlet.banner/pox/portlet/banner/profiles/default/metadata.xml
Copying portlets.xml_tmpl to ./pox.portlet.banner/pox/portlet/banner/profiles/default/portlets.xml

This tells us that the GenericSetup extension profile has also been created by paster, which means that we can install the new Banner portlet product (as entered in the portlet_name option above).

2. Install the product: To tell our Zope instance about the new product, we must update the buildout.cfg file as follows:

[buildout]
...
eggs =
    ...
    pox.portlet.banner
...
develop = src/pox.portlet.banner

We can automatically install the product during buildout. Add a pox.portlet.banner line inside the products parameter of the [plonesite] part:

[plonesite]
recipe = collective.recipe.plonesite
...
products =
    ...
    pox.portlet.banner

Build your instance and, if you want to, launch it to see the new empty Banner portlet:

./bin/buildout
./bin/instance fg

3. Check the new portlet: After implementing the changes above, if you click on the Manage portlets link on the site's home page (or anywhere in the Plone site), you will see a new Banner portlet option in the drop-down menu. A new box in the portlet column will then be shown. The Header/Body text/Footer box shown above matches the template definition in the bannerportlet.pt file the way paster created it.
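The paster output above shows that a profiles/default/portlets.xml file was generated; it is what registers the portlet type with the site on installation. The excerpt does not show its contents, but a minimal sketch of what such a file typically looks like, with attribute values inferred from the wizard answers rather than taken from the article, is:

```xml
<?xml version="1.0"?>
<!-- Assumed contents of profiles/default/portlets.xml; the addview and
     titles below are inferred from the wizard answers, not from the text. -->
<portlets>
  <portlet
      addview="pox.portlet.banner.BannerPortlet"
      title="Banner portlet"
      description="Portlet to show banners"
      />
</portlets>
```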


Getting Started with PrimeFaces

Packt
04 Apr 2013
14 min read
Setting up and configuring the PrimeFaces library

PrimeFaces is a lightweight JSF component library consisting of a single JAR file; it needs no configuration and has no required external dependencies. To start developing with the library, all we need is its artifact.

Getting ready

You can download the PrimeFaces library from http://primefaces.org/downloads.html; add the primefaces-{version}.jar file to your classpath. After that, all you need to do to get started is import the library's namespace, which is necessary to add the PrimeFaces components to your pages. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model (POM) file as follows:

<repository>
  <id>prime-repo</id>
  <name>PrimeFaces Maven Repository</name>
  <url>http://repository.primefaces.org</url>
</repository>

Add the dependency configuration as follows:

<dependency>
  <groupId>org.primefaces</groupId>
  <artifactId>primefaces</artifactId>
  <version>3.4</version>
</dependency>

At the time of writing this book, the latest stable version of PrimeFaces was 3.4. To check whether a newer release is available, please visit http://primefaces.org/downloads.html. The code in this book works with PrimeFaces 3.4; in earlier or later versions, some methods, attributes, or component behaviors may change.

How to do it...

In order to use PrimeFaces components, we need to add the namespace declarations to our pages. The namespace for PrimeFaces components is as follows:

xmlns:p="http://primefaces.org/ui"

For PrimeFaces Mobile, the namespace is as follows:

xmlns:pm="http://primefaces.org/mobile"

That is all there is to it. Note that the p prefix is just a symbolic name; any other prefix can be used to refer to the PrimeFaces components. Now you can create your first page with a PrimeFaces component, as shown in the following code snippet:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:p="http://primefaces.org/ui">
  <f:view contentType="text/html">
    <h:head />
    <h:body>
      <h:form>
        <p:spinner />
      </h:form>
    </h:body>
  </f:view>
</html>

This will render a spinner component with an empty value, as shown in the following screenshot. A link to the working example for this page is given at the end of this recipe.

How it works...

When the page is requested, the p:spinner component is rendered with the renderer implemented by the PrimeFaces library. Since the spinner component is a UI input component, the request-processing lifecycle is executed when the user inputs data and performs a postback on the page. For the first page, we also needed to provide the contentType parameter for f:view, since WebKit-based browsers, such as Google Chrome and Apple Safari, request the content type application/xhtml+xml by default. This overcomes unexpected layout and styling issues that might otherwise occur.

There's more...

PrimeFaces requires only a Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features.
| Dependency | Version | Type | Description |
| --- | --- | --- | --- |
| JSF runtime | 2.0 or 2.1 | Required | Apache MyFaces or Oracle Mojarra |
| iText | 2.1.7 | Optional | DataExporter (PDF) |
| Apache POI | 3.7 | Optional | DataExporter (Excel) |
| Rome | 1.0 | Optional | FeedReader |
| commons-fileupload | 1.2.1 | Optional | FileUpload |
| commons-io | 1.4 | Optional | FileUpload |

Please ensure that you have only one PrimeFaces JAR (or one specific PrimeFaces theme JAR) in your classpath, in order to avoid any issues with resource rendering. PrimeFaces currently supports the web browsers IE 7, 8, and 9, Safari, Firefox, Chrome, and Opera.

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. When the server is running, the showcase for the recipe is available at http://localhost:8080/primefaces-cookbook/views/chapter1/yourFirstPage.jsf

AJAX basics with Process and Update

PrimeFaces provides a partial page rendering (PPR) and view-processing feature based on standard JSF 2 APIs, enabling you to choose what to process in the JSF lifecycle and what to render at the end of an AJAX request. The PrimeFaces AJAX framework is based on the standard server-side APIs of JSF 2. On the client side, rather than using the client-side APIs of JSF implementations such as Mojarra and MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library.

How to do it...

We can create a simple page with a command button that updates a string property with the current time in milliseconds on the server side, and an output text that shows the value of that property:

<p:commandButton update="display" action="#{basicPPRController.updateValue}" value="Update" />
<h:outputText id="display" value="#{basicPPRController.value}"/>

If we would like to update multiple components with the same trigger, we can provide the IDs of the components to the update attribute separated by a space, a comma, or both:

<p:commandButton update="display1,display2" />
<p:commandButton update="display1 display2" />
<p:commandButton update="display1,display2 display3" />

In addition, there are reserved keywords for partial updates, which can be used alongside component IDs:

| Keyword | Description |
| --- | --- |
| @this | The component that triggers the PPR is updated |
| @parent | The parent of the PPR trigger is updated |
| @form | The encapsulating form of the PPR trigger is updated |
| @none | PPR does not change the DOM with the AJAX response |
| @all | The whole document is updated, as in non-AJAX requests |

We can also update a component that resides in a different naming container from the component that triggers the update. To achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example could be the following:

<h:form id="form1">
  <p:commandButton update=":form2:display" action="#{basicPPRController.updateValue}" value="Update" />
</h:form>
<h:form id="form2">
  <h:outputText id="display" value="#{basicPPRController.value}"/>
</h:form>

public String updateValue() {
  value = String.valueOf(System.currentTimeMillis());
  return null;
}
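The excerpt shows only the updateValue action of the backing bean behind these snippets. A minimal sketch of what the rest of the bean might look like, assuming JSF 2.x managed-bean annotations (the annotations and scope are assumptions; only updateValue is confirmed by the text):

```java
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

// A sketch of the bean assumed by the snippets above; only updateValue()
// appears in the source, the annotations and scope are assumptions.
@ManagedBean(name = "basicPPRController")
@ViewScoped
public class BasicPPRController {

    private String value;

    // Triggered by p:commandButton; the "display" output is re-rendered.
    public String updateValue() {
        value = String.valueOf(System.currentTimeMillis());
        return null; // stay on the current view
    }

    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
}
```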
PrimeFaces also provides partial processing, which executes the JSF lifecycle phases—Apply Request Values, Process Validations, Update Model, and Invoke Application—only for determined components, via the process attribute. This provides the ability to do group validation on JSF pages easily. Group-validation needs mostly arise when different values in the same form need to be validated depending on which action gets executed. By grouping components for validation, errors that would otherwise arise from other components when the page is submitted can be avoided. Components like commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process parts of the view instead of the whole view.

Partial processing comes in very handy when a drop-down list needs to be populated upon a selection in another drop-down while an input field on the page has its required attribute set to true. This approach also makes immediate subforms and regions obsolete, and it prevents submission of the whole page, resulting in lightweight requests. Without partially processing the view for the drop-downs, a selection in one of them would trigger a validation error on the required field. An example is shown in the following code snippet:

<h:outputText value="Country: " />
<h:selectOneMenu id="countries" value="#{partialProcessingController.country}">
  <f:selectItems value="#{partialProcessingController.countries}" />
  <p:ajax listener="#{partialProcessingController.handleCountryChange}" event="change" update="cities" process="@this"/>
</h:selectOneMenu>
<h:outputText value="City: " />
<h:selectOneMenu id="cities" value="#{partialProcessingController.city}">
  <f:selectItems value="#{partialProcessingController.cities}" />
</h:selectOneMenu>
<h:outputText value="Email: " />
<h:inputText value="#{partialProcessingController.email}" required="true" />

With this partial-processing mechanism, when a user changes the country, the cities of that country are populated in the drop-down regardless of whether any input exists for the email field.
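The controller behind this page is not shown in the excerpt. A hypothetical sketch of what it might look like is given below; only the property names and the handleCountryChange listener are implied by the source, and the country data is an illustrative assumption:

```java
import java.util.Arrays;
import java.util.List;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

// Hypothetical controller assumed by the page above; the data and the
// scope are illustrative assumptions, not taken from the article.
@ManagedBean(name = "partialProcessingController")
@ViewScoped
public class PartialProcessingController {

    private String country;
    private String city;
    private String email;
    private List<String> countries = Arrays.asList("Turkey", "Germany");
    private List<String> cities = Arrays.asList();

    // Invoked by p:ajax on the countries menu with process="@this";
    // repopulates the cities list for the newly selected country.
    public void handleCountryChange() {
        if ("Turkey".equals(country)) {
            cities = Arrays.asList("Istanbul", "Ankara", "Izmir");
        } else {
            cities = Arrays.asList("Berlin", "Munich", "Hamburg");
        }
    }

    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public List<String> getCountries() { return countries; }
    public List<String> getCities() { return cities; }
}
```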
How it works...

As seen in the example of updating a component in a different naming container, <p:commandButton> updates the <h:outputText> component that has the ID display and the absolute client ID :form2:display, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. <h:form>, <h:dataTable>, and composite JSF components, along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable>, are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/6/api/javax/faces/component/UIComponent.html, is used by both the JSF core implementation and PrimeFaces.

There's more...

JSF uses : (a colon) as the separator for the NamingContainer interface. The client IDs rendered in the page source will look like :id1:id2:id3. If needed, the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file, as follows:

<context-param>
  <param-name>javax.faces.SEPARATOR_CHAR</param-name>
  <param-value>_</param-value>
</context-param>

It's also possible to escape the : character in CSS files with a backslash, as \:. This can be necessary because the colon is a reserved character in the CSS and JavaScript frameworks, like jQuery.

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following:

- Basic Partial Page Rendering is available at http://localhost:8080/primefaces-cookbook/views/chapter1/basicPPR.jsf
- Updating a Component in a Different Naming Container is available at http://localhost:8080/primefaces-cookbook/views/chapter1/componentInDifferentNamingContainer.jsf
- A Partial Processing example is available at http://localhost:8080/primefaces-cookbook/views/chapter1/partialProcessing.jsf

Internationalization (i18n) and Localization (L10n)

Internationalization (i18n) and Localization (L10n) are two important features that a web application should provide to be accessible globally. With Internationalization, we are emphasizing that the web application should support multiple languages; with Localization, we are stating that texts, dates, and other fields should be presented in the form specific to a region. PrimeFaces provides only the English translations; translations for other languages must be provided explicitly. The following sections give the details on how to achieve this.

Getting ready

For Internationalization, we first need to specify the resource bundle definition under the application tag in faces-config.xml, as follows:

<application>
  <locale-config>
    <default-locale>en</default-locale>
    <supported-locale>tr_TR</supported-locale>
  </locale-config>
  <resource-bundle>
    <base-name>messages</base-name>
    <var>msg</var>
  </resource-bundle>
</application>

A resource bundle is a text file with the .properties suffix that contains the locale-specific messages. So, the preceding definition states that the resource bundle messages_{localekey}.properties will reside on the classpath; the default locale key is en (English), and the supported locale is tr_TR (Turkish). For projects structured by Maven, the messages_{localekey}.properties files can be created under the src/main/resources project path.

How to do it...

To showcase Internationalization, we will broadcast an information message via the FacesMessage mechanism, displayed in the PrimeFaces growl component. We need two components: the growl itself and a command button to broadcast the message.

<p:growl id="growl" />
<p:commandButton action="#{localizationController.addMessage}" value="Display Message" update="growl" />

The addMessage method of localizationController is as follows:

public String addMessage() {
  addInfoMessage("broadcast.message");
  return null;
}

It uses the addInfoMessage method, which is defined in the static MessageUtil class as follows:

public static void addInfoMessage(String str) {
  FacesContext context = FacesContext.getCurrentInstance();
  ResourceBundle bundle = context.getApplication().getResourceBundle(context, "msg");
  String message = bundle.getString(str);
  FacesContext.getCurrentInstance().addMessage(null,
      new FacesMessage(FacesMessage.SEVERITY_INFO, message, ""));
}
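The bundle entry behind the broadcast.message key is not shown in the excerpt. Under the configuration above, it would live in messages_en.properties on the classpath; the message text below is an illustrative assumption:

```properties
# messages_en.properties (the value is an assumed example)
broadcast.message=This is a broadcast information message.
```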
Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute. By default, locale information is retrieved from the view's locale, and it can be overridden by a string locale key or a java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. PrimeFaces provides only the English translations, so to localize the calendar we need to put the corresponding locales into a JavaScript file and include that script file in the page. The content of the German locale for the PrimeFaces.locales property for the calendar is shown in the following code snippet. For the sake of the recipe, only the German locale definition is given; the Turkish locale definition is omitted.

PrimeFaces.locales['de'] = {
  closeText: 'Schließen',
  prevText: 'Zurück',
  nextText: 'Weiter',
  monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli', 'August', 'September', 'Oktober', 'November', 'Dezember'],
  monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'],
  dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag', 'Freitag', 'Samstag'],
  dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'],
  dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'],
  weekHeader: 'Woche',
  firstDay: 1,
  isRTL: false,
  showMonthAfterYear: false,
  yearSuffix: '',
  timeOnlyTitle: 'Nur Zeit',
  timeText: 'Zeit',
  hourText: 'Stunde',
  minuteText: 'Minute',
  secondText: 'Sekunde',
  currentText: 'Aktuelles Datum',
  ampm: false,
  month: 'Monat',
  week: 'Woche',
  day: 'Tag',
  allDayText: 'Ganzer Tag'
};

The definition of the calendar components with the locale attribute would be as follows:

<p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal"/>
<p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal"/>
<p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/>

They will be rendered as follows:

How it works...

For Internationalization of the Faces message, the addInfoMessage method retrieves the message bundle via the defined variable msg. It then gets the string from the bundle for the given key by invoking bundle.getString(str). Finally, the message is added by creating a new Faces message with severity level FacesMessage.SEVERITY_INFO.

There's more...

For some components, Localization can be accomplished by providing labels via attributes, as with p:selectBooleanButton:

<p:selectBooleanButton value="#{localizationController.selectedValue}"
    onLabel="#{msg['booleanButton.onLabel']}"
    offLabel="#{msg['booleanButton.offLabel']}" />

The msg variable is the resource bundle variable defined in the resource bundle definition in the Faces configuration file. The English version of the bundle key definitions in the messages_en.properties file on the classpath would be as follows:

booleanButton.onLabel=Yes
booleanButton.offLabel=No

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following:

- Internationalization is available at http://localhost:8080/primefaces-cookbook/views/chapter1/internationalization.jsf
- Localization of the calendar component is available at http://localhost:8080/primefaces-cookbook/views/chapter1/localization.jsf
- Localization with resources is available at http://localhost:8080/primefaces-cookbook/views/chapter1/localizationWithResources.jsf

For already-translated locales of the calendar, see https://code.google.com/archive/p/primefaces/wikis/PrimeFacesLocales.wiki


Documentaries and Other Audio-Visual Projects with Celtx

Packt
10 Mar 2011
6 min read
What is an audio-visual production?

The term audio-visual production basically covers anything in the known universe that combines varying components of movement, sound, and light. Movies are nothing more than big, expensive (really expensive) audio-visual shows. Television programs; the fireworks, performed music, and laser lights of a major rock concert; a business presentation; Uncle Spud showing slides of his vacation in Idaho—all are audio-visual productions. A complex audio-visual production, such as the big rock concert, combines many types of content and is called a multimedia show; it combines sounds and music, projections of video and photos (often several at once), lights, spoken words, text on screens, and more.

Audio-visual shows, those of an educational nature as well as those with entertainment value, might be produced with equipment such as the following:

- Dioramas
- Magic lanterns
- Planetariums
- Film projectors
- Slide projectors
- Opaque projectors
- Overhead projectors
- Tape recorders
- Television
- Video
- Camcorders
- Video projectors
- Interactive whiteboards
- Digital video clips

Productions such as TV commercials, instructional videos, those moving displays you see in airports, and even the new digital billboards along our highways are all audio-visual productions (even the ones without sound). My favorite type of production, documentaries (I've done literally hundreds of them), are audio-visual shows. A documentary is a nonfiction movie and includes newsreels, travel, politics, docudramas, nature and animal films, music videos, and much more. In short, as we can see from the preceding discussion, you can throw just about everything into a production, including your kitchen sink. Turn the faucet on and off while blasting inspiring music and hitting it with colored spotlights, and plumbers will flock to buy tickets to the show!

Now, while just about every conceivable project falls into the audio-visual category, Celtx (as shown in the next screenshot) offers us specific categories that narrow the field down a little. The following screenshot from Celtx's splash page shows those categories. Film handles movies and television shows, Theatre (love that Canadian spelling, eh?) is for stage plays, Audio Play is designed for radio programs and podcasts, Storyboard is for visual planning, and Comic Book is for writing anything from comic strips to epic graphic novels. Text (not shown in the following screenshot) is the other project type that comes with Celtx; it is great for loglines, synopses, treatments, outlines, and anything else calling for a text editor rather than a script formatter. Just about everything else can be written in an Audio-Visual project container!

Let's think about that for a moment. This means that Audio-Visual is by far the most powerful project type provided by Celtx. In the script element drop-down box, there are only five script elements—Scene Heading, Shot, Character, Dialog, and Parenthetical—whereas Film has eight! Yet, thanks to Celtx magic, these five elements, as I will show you in this article, are a lot more flexible than in Film and the other projects. It's pretty amazing. So, time to start an audio-visual project of our own.

Starting an AV project in Celtx

What better example to use than a short documentary on... wait for it... Celtx. This is a film I actually plan on producing, both to promote Celtx (which certainly deserves letting people know about it) and to show that this article is great for learning all this marvelous power of Celtx.
The title: "Celtx Loves Indies." Indies is slang for independent producers. An independent producer is a company or quite often an individual who makes films outside Hollywood or Bollywood or any other studio system. Big studios have scores or even hundreds of people to do all those tasks needed in producing a film. Indies often have very few people, sometimes just one or two doing all the crewing and production work. Low budget (not spending too much money on making films) is our watchword. Celtx is perfect for indies—it is, as I point out in the documentary—like having a studio in a box! So, my example project for this chapter is how I set up "Celtx Loves Indies" in Celtx. Time for action — beginning our new AV project We start our project, as we did our spec script in the last chapter, by making a directory on our computer. Having a separate directory for our projects makes it a lot easier to organize and to find stuff when we need it. Therefore, I first create the new empty directory on my hard drive named Celtx Loves Indies, as shown in the following screenshot: Now, fire up Celtx. In a moment, we'll left click on Audio-Visual to open a project container that has an Audio-Visual script in it. However, first, since I have not mentioned it to date, look at the items outside the Project Templates and Recent Project boxes in the lower part of the splash page, as shown in the following screenshot: As Celtx is connected to the Internet, we get some information each time Celtx starts up from the servers at: . This information from online includes links to news, help features, ads for Celtx add-ons, and announcements. The big news here is that Celtx has added an app (application) to synchronize projects with iPhones and iPadsHowever, check these messages out each time you open Celtx. Next, we open an Audio-Visual project in Celtx. This gives us a chance to check out those five script elements we met earlier by left clicking on the downward arrow next to Scene Heading. In the next section, we'll examine each and use them. Time for action – setting up the container Continuing with our initial setup of the container for this project, rename the A/V Script in the Project Library. I renamed mine, naturally, Celtx Loves Indies. Also, remember we can have hundreds of files, directories, subdirectories, and so on in the Project Library—our research and more. This is why a Celtx project is really a container. Just right click on A/V Script, choose Rename... and type in the new title, as shown in the following screenshot: Left click on File at the top left of the Celtx screen, then on Save Project As... (or use the Ctrl+Shift+S key shortcut) to save the project into your new directory, all properly titled and ready for action, as shown in the following screenshot:

Packt
16 Dec 2010
15 min read

Page Management – Part Two in CMS

Packt
16 Dec 2010
15 min read
(For more resources on this subject, see here.)

Dates

Dates are annoying. The scheme I prefer is to enter dates the same way MySQL accepts them—yyyy-mm-dd hh:mm:ss. From left to right, each subsequent element is smaller than the previous one. It's logical, and it can be sorted sensibly using a simple numeric sorter. Unfortunately, most people don't read or write dates in that format. They'd prefer something like 08/07/06. Dates in that format do not make sense. Is it the 8th day of the 7th month of 2006, or the 7th day of the 8th month of 2006, or even the 6th day of the 7th month of 2008? Date formats are different all around the world, so you cannot trust human administrators to enter dates manually.

A very quick solution is to use the jQuery UI's datepicker plugin. Temporarily (we'll remove it in a minute), add the highlighted lines to /ww.admin/pages/pages.js:

  other_GET_params:currentpageid
  });
  $('.date-human').datepicker({
    'dateFormat':'yy-mm-dd'
  });
});

When the date field is clicked, this appears:

It's a great calendar, but there's still a flaw: before you click on the date field, and even after you select the date, the field is still in yyyy-mm-dd format. While MySQL will thank you for entering the date in a sane format, you will have people asking you why the date is not shown in a humanly readable way. We can't simply change the date format to accept something more reasonable such as "May 23rd, 2010", because we would then need to ensure that we can understand this on the server side, which might take more work than we really want to do. So we need to do something else. The datepicker plugin has an option which lets you update two fields at the same time. This is the solution—we will display a dummy field which is humanly readable, and when that's clicked, the calendar will appear and you will be able to choose a date, which will then be set in both the human-readable field and the real form field.

Don't forget to remove that temporary code from /ww.admin/pages/pages.js. Because this is a very useful feature, which we will use throughout the admin area whenever a date is needed, we will add a global JavaScript file which will run on all pages.
Edit /ww.admin/header.php and add the following highlighted line:

<script src="/j/jquery.remoteselectoptions/jquery.remoteselectoptions.js"></script>
<script src="/ww.admin/j/admin.js"></script>
<link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.0/themes/south-street/jquery-ui.css" type="text/css" />

And then we'll create the /ww.admin/j/ directory and a file named /ww.admin/j/admin.js:

function convert_date_to_human_readable(){
  var $this=$(this);
  var id='date-input-'+Math.random().toString().replace(/\./,'');
  var dparts=$this.val().split(/-/);
  $this.datepicker({
    dateFormat:'yy-mm-dd',
    modal:true,
    altField:'#'+id,
    altFormat:'DD, d MM, yy',
    onSelect:function(dateText,inst){
      this.value=dateText;
    }
  });
  var $wrapper=$this.wrap('<div style="position:relative" />');
  var $input=$('<input id="'+id+'" class="date-human-readable" value="'+date_m2h($this.val())+'" />');
  $input.insertAfter($this);
  $this.css({
    'position':'absolute',
    'opacity':0
  });
  $this.datepicker('setDate', new Date(dparts[0],dparts[1]-1,dparts[2]));
}
$(function(){
  $('input.date-human').each(convert_date_to_human_readable);
});

This takes the computer-readable date input and creates a copy of it, but in a human-readable format. The original date input box is then made invisible and laid across the new one. When it is clicked, the date is updated in both fields, but only the human-readable one is shown. Much better: easy for a human to read, and also usable by the server.
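The admin.js code above calls a date_m2h helper that the excerpt never defines; from its use, it converts a MySQL-style date into the human-readable format that altFormat produces. A plausible sketch, assuming jQuery UI is loaded (the body is an assumption; only the name and calling convention come from the code above):

```javascript
// Hypothetical helper assumed by admin.js above: converts a MySQL-style
// "yyyy-mm-dd" value into the same 'DD, d MM, yy' style the datepicker's
// altFormat produces, e.g. "2010-05-23" -> "Sunday, 23 May, 2010".
function date_m2h(mysqlDate) {
  var parts = mysqlDate.split(/[- :]/);
  var d = new Date(parts[0], parts[1] - 1, parts[2]);
  return $.datepicker.formatDate('DD, d MM, yy', d);
}
```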
Saving the page

We created the form, and except for making the body textarea more user-friendly, it's just about finished. Let's do that now. When you click on the Insert Page Details button (or Update Page Details, if an ID was provided), the form data is posted to the server. We need to perform these actions before the page menu is displayed, so that it is up to date. Edit /ww.admin/pages.php, and add the following highlighted lines before the load menu section:

echo '<h1>Pages</h1>';
// { perform any actions
if(isset($_REQUEST['action'])){
  if($_REQUEST['action']=='Update Page Details'
      || $_REQUEST['action']=='Insert Page Details'){
    require 'pages/action.edit.php';
  }
  else if($_REQUEST['action']=='delete'){
    require 'pages/action.delete.php';
  }
}
// }
// { load menu

If an action parameter is sent to the server, then the server uses this block to decide whether you want to edit or delete the page. We'll handle deletes later in the article. Notice that we are handling inserts and updates with the same file, action.edit.php—in the database, there is almost no difference between the two when using MySQL. So, let's create that file now. We'll do it a bit at a time, like we did the form, as it's a bit long. Create /ww.admin/pages/action.edit.php with this code:

<?php
function pages_setup_name($id,$pid){
  $name=trim($_REQUEST['name']);
  if(dbOne('select id from pages where name="'.addslashes($name).'" and parent='.$pid.' and id!='.$id,'id')){
    $i=2;
    while(dbOne('select id from pages where name="'.addslashes($name.$i).'" and parent='.$pid.' and id!='.$id,'id'))$i++;
    echo '<em>A page named "'.htmlspecialchars($name).'" already exists. Page name amended to "'.htmlspecialchars($name.$i).'".</em>';
    $name=$name.$i;
  }
  return $name;
}

The first piece is a function which tests the submitted page name. If that name is the same as the name of another page which has the same parent, then a number is added to the end and a message is shown explaining this. Here's an example, creating a page named "Home" in the top level (we already have a page named "Home"):

Next we'll create a function for handling the submitted special variable. Add this to the same file:

function pages_setup_specials($id=0){
  $special=0;
  $specials=isset($_REQUEST['special'])?$_REQUEST['special']:array();
  foreach($specials as $a=>$b)
    $special+=pow(2,$a);
  $homes=dbOne("SELECT COUNT(id) AS ids FROM pages WHERE (special&1) AND id!=$id",'ids');
  if($special&1){ // there can be only one homepage
    if($homes!=0){
      dbQuery("UPDATE pages SET special=special-1 WHERE special&1");
    }
  }
  else{
    if($homes==0){
      $special+=1;
      echo '<em>This page has been marked as the site\'s Home Page, because there must always be one.</em>';
    }
  }
  return $special;
}

In this function, we build up the special variable, which is a bit field. A bit field is a number which uses binary math to combine several "yes/no" answers into one single value. It's good for saving space and fields in the database. Each flag is assigned a value which is a power of two. The interesting thing to note about powers of two is that in binary, they're always represented as a 1 with some 0s after it. For example, 1 is represented as 00000001, 2 is 00000010, 4 is 00000100, and so on. When you have a bit field such as 00000011 (each digit here is a bit), it's easy to see that this is composed of the values 1 and 2 combined, which are 2^0 and 2^1 respectively. The & operator lets us check quickly whether a certain bit is turned on (is 1) or not. For example, 00010011 & 16 is true, and 00010011 & 32 is false, because the 16 bit is on and the 32 bit is off.

In the database, we set a bit for the homepage, which we say has a value of 1. In the previous function, we need to make sure that after inserting or updating a page, there is always exactly one homepage. The only other flag we've set so far is "does not appear in navigation menu", which we've given the value 2. If we added a third bit flag ("is a 404 handler", for example), it would have the value 4, the next one 8, and so on.
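To make the binary arithmetic concrete, here is a tiny standalone sketch of the same flag logic; the constant names are illustrative, not from the CMS code:

```php
<?php
// Standalone illustration of the bit-field logic described above.
$HOMEPAGE = 1; // 2^0: "is the site's homepage"
$HIDDEN   = 2; // 2^1: "does not appear in navigation menu"

$special = $HOMEPAGE + $HIDDEN;                  // 3, i.e. binary 00000011

var_dump(($special & $HOMEPAGE) ? true : false); // true: the 1 bit is on
var_dump(($special & 4) ? true : false);         // false: the 4 bit is off

$special -= $HOMEPAGE;                           // clear the homepage flag
var_dump(($special & $HOMEPAGE) ? true : false); // false
```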
Okay—now we will set up our variables. Add this to the same file:

// { set up common variables
$id             =(int)$_REQUEST['id'];
$pid            =(int)$_REQUEST['parent'];
$keywords       =$_REQUEST['keywords'];
$description    =$_REQUEST['description'];
$associated_date=$_REQUEST['associated_date'];
$title          =$_REQUEST['title'];
$name           =pages_setup_name($id,$pid);
$body           =$_REQUEST['body'];
$special        =pages_setup_specials($id);
if(isset($_REQUEST['page_vars']))
  $vars=json_encode($_REQUEST['page_vars']);
else $vars='[]';
// }

Then we will add the main body of the page-update SQL to the same file:

// { create SQL
$q='edate=now(),type="'.addslashes($_REQUEST['type']).'",
  associated_date="'.addslashes($associated_date).'",
  keywords="'.addslashes($keywords).'",
  description="'.addslashes($description).'",
  name="'.addslashes($name).'",
  title="'.addslashes($title).'",
  body="'.addslashes($body).'",parent='.$pid.',
  special='.$special.',vars="'.addslashes($vars).'"';
// }

This is SQL which is common to both creating and updating a page. Finally, we run the actual query and perform the action. Add this to the same file:

// { run the query
if($_REQUEST['action']=='Update Page Details'){
  $q="update pages set $q where id=$id";
  dbQuery($q);
}
else{
  $q="insert into pages set cdate=now(),$q";
  dbQuery($q);
  $_REQUEST['id']=dbLastInsertId();
}
// }
echo '<em>Page Saved</em>';

In the first case, we simply run an update. In the second, we run an insert, adding the creation date to the query, and then setting $_REQUEST['id'] to the ID of the entry that we just created.

Creating new top-level pages

If you've been trying all this, you'll have noticed that you can create a top-level page simply by clicking on the admin area's Pages link in the top menu; you're then shown an empty Insert Page Details form. It makes sense, though, to also have this available from the page list on the left-hand side. So, let's make that add main page button useful. If you remember, we created a pages_add_main_page function in the menu.js file, just as a placeholder until we got everything else done. Open up that file now, /ww.admin/pages/menu.js, and replace that function with the following two new functions:

function pages_add_main_page(){
  pages_new(0);
}
function pages_new(p){
  $('<form id="newpage_dialog" action="/ww.admin/pages.php" method="post">'
    +'<input type="hidden" name="action" value="Insert Page Details" />'
    +'<input type="hidden" name="special[1]" value="1" />'
    +'<input type="hidden" name="parent" value="'+p+'" />'
    +'<table>'
    +'<tr><th>Name</th><td><input name="name" /></td></tr>'
    +'<tr><th>Page Type</th><td><select name="type">'
    +'<option value="0">normal</option>'
    +'</select></td></tr>'
    +'<tr><th>Associated Date</th><td>'
    +'<input name="associated_date" class="date-human" id="newpage_date" /></td></tr>'
    +'</table>'
    +'</form>')
  .dialog({
    modal:true,
    buttons:{
      'Create Page': function() {
        $('#newpage_dialog').submit();
      },
      'Cancel': function() {
        $(this).dialog('destroy');
        $(this).remove();
      }
    }
  });
  $('#newpage_date').each(convert_date_to_human_readable);
  return false;
}

When the add main page button is clicked, a dialog box is created asking for some basic information about the page to create:

We include a few hidden inputs:

- action: tells the server this is an Insert Page Details action.
- special: when Create Page is clicked, the page will be saved in the database, but we should hide it initially so that front-end readers don't see a half-finished page. So, the special[1] flag is set (2^1 == 2, which is the value for hiding a page).
- parent: note that this is a variable. We can use the same dialog to create sub-pages.

When the dialog has been created, the date input box is converted to human-readable form.

Creating new sub-pages

We will add sub-pages by using context menus on the page list. Note that we have a message saying right-click for options under the list. First, add this function to the /ww.admin/pages/menu.js file:

function pages_add_subpage(node,tree){
  var p=node[0].id.replace(/.*_/,'');
  pages_new(p);
}

We now need to activate the context menu. This is done by adding a contextmenu plugin to the jstree plugin. Luckily, it comes with the download, so you've already installed it.
Add it to the page by editing /ww.admin/pages/menu.php and adding the highlighted line:

<script src="/j/jquery.jstree/jquery.tree.js"></script>
<script src="/j/jquery.jstree/plugins/jquery.tree.contextmenu.js"></script>
<script src="/ww.admin/pages/menu.js"></script>

And now, we edit the .tree() call in /ww.admin/menu.js to tell it what to do:

$('#pages-wrapper').tree({
  callback:{
    // SKIPPED FOR BREVITY - DO NOT DELETE THESE LINES
  },
  plugins:{
    'contextmenu':{
      'items':{
        'create' : {
          'label' : "Create Page",
          'icon' : "create",
          'visible' : function (NODE, TREE_OBJ) {
            if(NODE.length != 1) return 0;
            return TREE_OBJ.check("creatable", NODE);
          },
          'action' : pages_add_subpage,
          'separator_after' : true
        },
        'rename':false,
        'remove':false
      }
    }
  }
});

By default, the contextmenu has three links: create, rename, and remove. You need to turn off any you're not currently using by setting them to false. Now if you right-click on any page name in the page list, you will have the option to create a sub-page under it.

Deleting pages

We will add deletion in the same way, using the context menu. Edit the same file, and this time in the contextmenu code, replace the 'remove':false line with this:

'remove' : {
  'label' : "Delete Page",
  'icon' : "remove",
  'visible' : function (NODE, TREE_OBJ) {
    if(NODE.length != 1) return 0;
    return TREE_OBJ.check("deletable", NODE);
  },
  'action' : pages_delete,
  'separator_after' : true
}

And add the pages_delete function to the same file:

function pages_delete(node,tree){
  if(!confirm("Are you sure you want to delete this page?"))return;
  $.getJSON('/ww.admin/pages/delete.php?id='+node[0].id.replace(/.*_/,''),function(){
    document.location=document.location.toString();
  });
}

One thing to always keep in mind: whenever you write code that deletes something in your CMS, you must ask the administrators if they are sure, just in case the click was accidental. If the administrator confirms that the click was intentional, then it's not your fault if something important is deleted. So, the pages_delete function first checks for this, and then calls the server to remove the page. The page is then refreshed, because the deletion may significantly change the page-list tree, as we'll see now. Create the /ww.admin/pages/delete.php file:

<?php
require '../admin_libs.php';
$id=(int)$_REQUEST['id'];
if(!$id)exit;
$r=dbRow("SELECT COUNT(id) AS pagecount FROM pages");
if($r['pagecount']<2){
  die('cannot delete - there must always be one page');
}
else{
  $pid=dbOne("select parent from pages where id=$id",'parent');
  dbQuery("delete from pages where id=$id");
  dbQuery("update pages set parent=$pid where parent=$id");
}
echo 1;

First, we ensure that there is always at least one page in the database. If deleting this page would empty the table, then we refuse to do it. There is no need to add an alert explaining this, as it should be clear to anyone that deleting the last remaining page of a website leaves the website with no content at all; simply refusing the deletion should be enough. Next, we delete the page. Finally, any pages which were contained within that page (pages which had this one as their parent) are moved up in the tree, so they are contained in the deleted page's old parent. For example, if you had pages page1>page2>page3 (where > indicates the hierarchy), and you removed page2, then page3 would end up in the position page1>page3. This can make a large difference to the tree structure if quite a few pages were contained under the deleted one, which is why the page needs to be refreshed.


Oracle WebCenter 11g: Portlets

Packt
21 Sep 2010
6 min read
(For more resources on Oracle, see here.)

Portlets: the JSR-168 specification

Specification JSR-168, which defines the relevant Java technologies, gives us a precise definition of Java portlets:

"Portlets are web components—like Servlets—specifically designed to be aggregated in the context of a composite page. Usually, many Portlets are invoked in the single request of a Portal page. Each Portlet produces a fragment of markup that is combined with the markup of other Portlets, all within the Portal page markup."

You can see more detail of this specification on the following page: http://jcp.org/en/jsr/detail?id=168

While the definition makes a comparison with servlets, it is important to note that portlets cannot be accessed directly through a URL; instead, it is necessary to use a page-like container of portlets. Consequently, we might consider portlets as tiny web applications that return dynamic content (HTML, WML) into a region of a Portal page. Graphically, we could view a page with portlets as follows:

Additionally, we must emphasize that portlets are not isolated from the rest of the components on a page: they can share information and respond to events that occur in other components or portlets.

WSRP specification

The WSRP specification allows exposing portlets as web services. For this purpose, clients access portlets through an interface (*.wsdl) and get the associated graphic content. Optionally, the portlet might be able to interact directly with the user through events occurring on it. This way of invoking portlets offers the following advantages:

- Portals that share a portlet centralize its support in a single point.
- Portlet integration with the portal is simple and requires no programming.
- The use of portlets hosted on different sites helps to reduce the load on servers.

WebCenter portlets

Portlets can be built in different ways, and applications developed with Oracle WebCenter can consume any of these types of portlets:

- JSF Portlets: This type of portlet is based on a JSF application, which is turned into a portlet using a JSF Portlet Bridge.
- Web Clipping: Using this tool, we can build portlets declaratively using only a browser. These portlets show content from other sites.
- OmniPortlet: These portlets can retrieve information from different types of data sources (XML, CSV, database, and so on) and expose different ways of presenting it, such as tables, forms, charts, and so on.
- Content Presenter: This allows you to drop content from UCM onto the page and display it in any way you like, or using a template.
- Ensemble: This is a way to "mash up" or produce portlets or "pagelets" of information that can be displayed on the page.
- Programmatic Portlets: In addition to the previous technologies that facilitate the construction of portlets, it is also possible to build them programmatically. Building this way gives a high degree of personalization and control, but it requires specialized Java knowledge.

As we can see, there are several ways in which we can build a portlet; however, in order to use the rich components that the ADF Faces framework offers, we will focus on JSF portlets.
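For context only, and separate from the ADF example that follows: a bare programmatic JSR-168 portlet is a Java class extending GenericPortlet whose doView method writes the markup fragment that the portal aggregates. A minimal sketch (the class name and markup are illustrative):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// A minimal JSR-168 portlet sketch; the fragment it writes is combined
// with other portlets' fragments by the portal container.
public class HelloPortlet extends GenericPortlet {
    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<p>Hello from a portlet fragment</p>");
    }
}
```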
Developing a portlet using ADF

The portlet that we will build will contain a chart showing the status of the company's requests. To do this, we must create a model layer that represents our business logic and expose this information on a page. We are going to follow these steps:

1. Create an ADF application.
2. Develop business components.
3. Create a chart page.
4. Generate a portlet using the page.
5. Deploy the portlet.

In this example, we use a single page for the construction of a portlet; however, ADF also offers the ability to create portlets based on a flow of pages through the use of ADF TaskFlows. You can find more information at the following link: http://download.oracle.com/docs/cd/E15523_01/web.1111/b31974/taskflows.htm#BABDJEDD

Creating an ADF application

To create the application, follow these steps:

1. Go to JDeveloper.
2. In the menu, choose File | New to start the wizard for creating applications.
3. In the window displayed, choose the Application category, select the Fusion Web Application (ADF) option, and press the OK button.
4. Next, enter the following properties for the application: Name: ParacasPortlet; Directory: c:\ParacasPortlet; Application Package Prefix: com.paracasportlet.
5. Click Finish to create the application.

Developing business components

Before starting this activity, make sure you have created a connection to the database.

1. In the project palette, right-click on the Model project and choose New.
2. On the next page, select the category Business Tier | ADF Business Components, choose Business Components from Tables, and press the OK button.
3. On the following page, configure the connection to the database. At this point, select the connection db_paracas and press the OK button.
4. In order to build a page with a chart, we need to create a read-only view. For this reason, don't change anything; just press the Next button.
5. In this next step, we could create updateable views, but we don't need this type of component, so don't change anything. Click the Next button.
6. Now we need to allow the creation of read-only views. We will use this kind of component on our page, so select the table REQUEST, as shown next, and press Next.
7. The next step allows the creation of an application module. This component is necessary to expose the read-only view to the whole application. Keep the suggested values on this screen and click the Finish button.
8. Check the Application Navigator. Your components must be arranged in the same way as shown in the following screenshot:

Our query must determine the number of requests per status. Therefore, it is necessary to make some changes to the created component. To start, double-click on the view RequestView, select the Query category, and click on the Edit SQL Query option, as shown in the following screenshot:

In the window shown, modify the SQL as shown next and click the OK button:

SELECT Request.STATUS, COUNT(*) COUNT_STATUS
FROM REQUEST Request
GROUP BY Request.STATUS

We only use the attributes Status and CountStatus. For this reason, choose the Attributes category, select the attributes that are not used, and press Delete selected attribute(s), as shown in the following screenshot:

Save all changes and verify that the view is similar to that shown next:
Shipping, Taxes, and Processing Orders with WordPress 2.9 eCommerce
Packt, 18 Mar 2010, 4 min read
Locations and tax setup

There's no doubt that tax law is complicated, but it's a necessity. It's far beyond our scope to discuss tax law in all countries, so be sure to do your own research regarding taxes for your own state, country, and region. For businesses located in the USA, here are a couple of links to get you started:

Amazon Sales Tax Info – http://bit.ly/amazon-sales-tax
State Tax Info – http://www.business.gov/finance/taxes/state.html

Within the USA, most businesses charge state sales tax based on the shipping information that the customer provides. While nine times out of ten a customer's billing and shipping information are the same, the times when the billing and shipping information differ can lead to some interesting scenarios. For example, if your store is based in Kansas, a customer in Kansas who buys a product and ships it to Texas would not have to pay tax on that order (at least not to the state of Kansas). On the other hand, a customer in Georgia who buys your product and ships it to Kansas would have to pay tax on the order. Again, tax law is undoubtedly complicated, so be sure to look up any laws specific to your own state.

On to business: the first thing to do is to verify that your Base Country/Region is correct. You can find this under the General tab of the WP e-Commerce Plugin Settings, as seen in the following screenshot:

Next, verify that the tax percentages are correct for your state or region. This is just underneath the settings for Base Country/Region.

Finally, switch to the Checkout tab at the top to consider one more setting. Just under Checkout Options, there is an option to Lock Tax to Billing Country. This is shown in the following screenshot:

What this option does, once selected, is lock the billing country to the shipping country. For stores in the USA, it also has the effect of locking the billing state to the shipping state. This certainly simplifies matters, but it doesn't quite solve the tax scenarios outlined earlier in this section.

Shipping Options and Calculators

Configuring a store's shipping options is potentially one of the most complicated tasks that new store owners face, but it doesn't have to be. Much of it depends on the type of products that you are planning to sell. The absolute simplest scenario involves selling digital downloads only, in which case you don't need to worry about shipping at all.

When dealing with tangible goods, one's shipping needs grow increasingly complicated depending on the diversity of the products involved and the anticipated location of the customers. For instance, selling books as well as clothing will create different shipping needs than selling only books or clothing. Also, planning to sell to customers worldwide will necessitate more complicated shipping needs than limiting one's customer base to only one or two countries. Unlike creating and modifying a product catalog, configuring shipping settings is a task that only needs to be done once.

General Shipping Settings

To view and modify your shipping settings, switch to the Shipping tab at the top of the Settings page, as seen in the following screenshot:

Under the General Settings, you have the option to globally enable or disable shipping. If your shop is comprised of digital downloads only, then you can safely switch the Use Shipping option to No and rejoice! You no longer have to worry about any shipping configuration. The following screenshot shows the relevant panel:

Those who do sell tangible items should leave it set to Yes, as shown above.
If your store is located in the USA, add the Zipcode for the area from where you will be shipping items. This is really only necessary if you plan to use one of the external shipping calculators (UPS or USPS), but it doesn't hurt to add it anyway.

If you subscribe to the third-party Shipwire order fulfillment service (www.shipwire.com), set the Shipwire option to Yes and enter your relevant login information. Shipwire is a service that collects and stores your products for you, shipping them to customers when necessary. While convenient, Shipwire comes with a cost, currently starting at $30 per month.

One neat aspect of the e-Commerce plugin is that you can opt to allow for free shipping, provided that the order price is above a certain threshold. At the bottom of the general shipping options is a toggle to Enable Free Shipping Discount, as seen in the following screenshot:

Enter a threshold amount of your choice, such as $50, as seen above. Any orders that customers place with a value equal to or higher than that price will automatically qualify for free shipping.
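To see the logic behind these two settings in one place, here is a small, framework-agnostic PHP sketch. The function names and the flat tax rule are our own illustration, not the WP e-Commerce plugin's actual code.

<?php
// Hypothetical illustration only: not the WP e-Commerce plugin's API.
// Destination-based sales tax: tax is owed to the base state only
// when the order ships to that state.
function order_tax($base_state, $ship_to_state, $subtotal, $rate)
{
    if ($ship_to_state === $base_state) {
        return $subtotal * $rate;
    }
    return 0.0; // shipped out of state: no tax collected for the base state
}

// Free shipping discount: orders at or above the threshold ship free.
function shipping_cost($subtotal, $threshold, $normal_cost)
{
    return ($subtotal >= $threshold) ? 0.0 : $normal_cost;
}

// Example: a Kansas store shipping a $60 order to Texas.
echo order_tax('KS', 'TX', 60.00, 0.065); // 0, no Kansas tax owed
echo "\n";
echo shipping_cost(60.00, 50.00, 7.95);   // 0, qualifies for free shipping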
Organizing your Content Effectively using Joomla 1.5
Packt, 23 Mar 2010, 8 min read
Building on the example site

It's time to make room for growth. Your client has a big pile of information on ugly art that they want to present to the public. You are asked to design a site framework that makes it easy to add more content, while at the same time keeping it easy for visitors to quickly find their way through the site. Can you do that? You most certainly can! Joomla! allows you to build sites of all sorts and sizes, whether they consist of just a few pages or thousands of them. If you plan ahead and start with a sound basic structure, you'll be rewarded with a site that's easy to maintain and extend. In this article, we'll review the site you've just built and look at the different ways the content can be structured, and rearranged if need be.

Grouping content: A crash course in site organization

To lay the groundwork for your site, you won't use Joomla!. The back of a napkin will do fine. Draw up a site map to lay out the primary content chunks and their relationships. View your site from a user's perspective. What do you think your visitors will primarily look for, and how can you help them find things fast and easily?

Designing a site map

To create a site map, first collect all the information you plan on having on your website and organize it into a simple and logical format. As site maps come, this is a very basic one. For the most part, it's just one level deep. Introducing Ugly Paintings and Mission are basic web pages (articles). Activities is a section that allows the visitor to browse two other categories. Contact Us is a contact form page. This structure was good enough for a basic website, but it won't do if SRUP wants to expand their site.

Time for action – create a future-proof site map

Let's make some room for growth. Imagine your client is planning to add an indefinite amount of new content, so there's a need for additional content containers. They have come up with the following list of subjects they want to add to their site:

News items
A few pages to introduce the founding members of SRUP
Reviews on ugly art
Facts on ugly paintings (history, little-known facts, and so on)

What's the best way to organize things? Let's figure out which content fits which type of container.

Step 1: You'll probably want to create a separate News section. News should be a top-level item, a part of the site's main menu.
Step 2: The information on the SRUP founders fits in a new section 'About SRUP'.
Step 3: Both Reviews and Facts can be categories in a new general section on 'Ugly Paintings'. The existing article 'Introducing Ugly Paintings' could be moved here (or dropped).

What just happened?

You've laid a solid foundation for your site, on paper. Before you actually start using Joomla! to create sections and categories, create a structure for the content that you have in mind. Basically, no matter how big or small your website is, you'll organize it just like the example you've just seen. You'll work from top to bottom, from the primary level to the lower levels, defining content groups and their relations. Bear in mind, though, that there will certainly be more than one way to organize your information. Choose an organization that makes sense to you and your visitors, and try to keep things lean and clean. A complex structure will make it harder to maintain the content, and eventually, when building menus, it will make it harder to design clear and simple navigation paths for your visitors.
Tips on choosing sections

It can be useful to choose sections based on the main intentions people have when they come to the site. What are they here for? Is it to Browse Products or to Join a Workshop? Common choices for sections are: Products, Catalog, Company, Portfolio, About Us, Jobs, News, and Downloads. Try not to have more than five to seven sections. Once you have more than that, readers won't be able to hold them all in their heads at once when they have to choose which one to browse.

Transferring your site map to Joomla!

Let's have a closer look at our new site map and identify the Joomla! elements. This site, like any Joomla! site, is likely to consist of five types of content. The following are the content types in our SRUP site map:

Obviously, the top-level item will be the home page. The main content groups we can identify as sections and categories. This small site has four sections, three of which contain two categories. Each of the categories holds actual content: this is what will end up in Joomla! as articles. In this site map, there is one article that doesn't really belong in any category: the Mission Statement page. Every site will have one or two of those independent articles. In Joomla!, you can add these as uncategorized articles. Finally, there's one item that represents a very different type of content. In the site map above, a grey background indicates an item containing special functionality. In this case, this is a contact form. Other examples are guest books, order forms, and photo galleries. Basically, that's all there is to a Joomla! site. When you've got your site outlined like this, you won't meet any surprises while building it. You can transform any amount of content and functionality into a website, step by step.

How do you turn a site map into a website?

If you've got your site blueprint laid out, you probably want to start building! Now, what should be the first step? What's the best, and fastest, way to get from that site map on the back of your napkin to a real-life Joomla! site? In this book, we'll work in this order:

1. Organize: Create content containers. You've seen that much of the site map we just created consists of content containers: sections and categories. In this article, we'll focus on these containers. We'll create all the necessary containers for our example site.
2. Add content: Fill the containers with articles. Next, we'll add articles to the sections and categories. Articles are the "classic content" that most web pages are made of. We should also check for articles that do not belong in any category. Instead of assigning them to a section and a category, we'll add them as Uncategorized content.
3. Put your contents on display: Create the home page and content overview pages. Next, you'll want to guide and invite visitors. You can achieve this using two special types of pages in the site map: the home page and Joomla!'s section/category overview pages ("secondary home pages").
4. Make everything findable: Create menus. The top-level items in your site map will probably end up as menu items on the site. To open up your site to the world, you'll create and customize menus helping visitors to easily navigate your content.

And what about the special content stuff? You'll notice that in the above list we've summed up all sorts of "classic content", such as articles, home pages, overview pages, and menus linking it all. We haven't yet mentioned one essential part of the site map: the special goodies.
On a dynamic website you can have more than just plain old articles. You can add picture galleries, forms, product catalogues, site maps, and much, much more. It's important to identify those special pages from the beginning, but you'll add them later using Joomla!'s components and extensions. That's why we'll first concentrate on building a rock-solid foundation; later we'll add all of the desired extras. Let's start with step one now, and get our site organized!

Creating content containers: Sections and categories

To create a section, navigate to Content | Section Manager | New. To make a new category, you'll use the Category Manager instead. Just add a title for your new section or category and click on Save. You've created a perfectly workable section or category with the default settings (or parameters, as Joomla! likes to call them).

Time for action – create a new section and a category

Your client was happy with the initial site structure you designed, but now that their website is evolving, there's a need for more content containers. Let's add a news section first:

1. Navigate to Content | Section Manager and click on New.
2. In the Section: [New] screen, fill out the Title field. In this example, type News.
3. Leave the other values unchanged; click on Save. You're taken to the Section Manager. The News section is now shown in the section list.

Now, add a category to the new section. We'll call this category General News:

1. Navigate to Content | Category Manager and click on New.
2. In the Title field, type General News.
3. In the Section field, select News.
4. Click on Save. You're done!
Setting up an online shopping cart with Drupal and Ubercart
Packt, 19 Dec 2011, 7 min read
(For more resources on Drupal, see here.)

Setting up the e-commerce system

Ubercart has existed since Drupal 6, so those users who have used Ubercart with Drupal 6 will have some experience setting up and configuring this module for Drupal 7. In this section we'll install and set up the basic Ubercart configuration.

Goal: Set up the system that will allow visitors to purchase take-out from the website.

Additional modules needed: Ubercart, Token, Views, CTools, Entity API, Entity Token, and Rules (http://drupal.org/project/ubercart).

Configuring Ubercart

In order to utilize an e-commerce solution within a website, you must either build a solution from scratch or use a ready-made solution. Because building a professional e-commerce solution from scratch would take an entire book and weeks or months of work, we are fortunate to have Ubercart, which is a professional-quality e-commerce solution tailor-made for Drupal. To begin using Ubercart:

1. Download and install the module from its project page. The current stable release for Drupal 7 is 7.x-3.0-rc3.
2. To use Ubercart, you will also need to install some dependency modules in order to get the best functionality and performance from Ubercart. These dependencies are listed on the Ubercart website: Token, Views, Rules, CTools, Entity API, and Entity Token. Go ahead and install and enable the dependency modules and Ubercart.
3. Once installed, we will begin by enabling the following Ubercart modules: all of the modules in the Ubercart - Core section, including Cart, Order, Product, and Store.
4. Then enable the following Core (Optional) modules: Catalog, File downloads, Payment, Product Attributes, Reports, Roles, and Taxes.
5. Then, in the Extra section, enable the following: Product Kit and Stock.
6. For our payment method and module, enable the Credit card, PayPal, and Test Gateway modules. We will not be shipping any items, so we do not need to enable shipping modules at this point.

There are a variety of other modules that can be optionally enabled. We will explore some of these in future topics.

If you are planning to use the PayPal Express Checkout payment method with Ubercart, you should make sure you have enabled the CURL PHP extension in your PHP configuration. To do this, you can ask your hosting company to enable the CURL extension in your php.ini file. Otherwise, if you have access to the php.ini file, you can open the file and uncomment the CURL extension to enable it. Then save your php.ini file and restart the web server. When you check your PHP info via the Drupal status report, you should see a section showing you that the PHP CURL extension is enabled.
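As a rough illustration (the exact extension line differs by platform and PHP version, so treat this as a sketch rather than your server's actual file), uncommenting the extension in php.ini looks like this:

; php.ini (illustrative only; the extension file name differs by platform:
; extension=curl on most Linux builds, extension=php_curl.dll on Windows)

; Before: the leading semicolon comments the line out, so CURL is disabled.
;extension=curl

; After: remove the semicolon to enable the extension, then restart the server.
extension=curl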
Now, you should see the Ubercart modules that you have enabled on your main modules configuration page.

Ubercart installs a new menu item called Store, next to the menu item called Dashboard in the Administer menu bar, which includes the options shown in the following screenshot:

The first thing you'll notice is a warning message telling you that one or more problems exist with your Drupal install. Click on the Status report link. On the Status report page, you'll see a warning telling you to enable credit card encryption. In order to use Ubercart for payment processing for your online store transactions, you first need to set up an encryption location for your credit card data. To do this, click on the credit card security settings link. That will load the Credit card settings screen. Under Basic Settings, for now let's enable the Test Gateway. Later we'll discuss integrating PayPal, but for testing purposes we can use the Test Gateway configuration.

Click on the Security Settings tab. Here we need to set up our card number encryption settings. We can first create a folder outside of our document root (outside of our Drupal directory) but on our web server to hold our encrypted data. We'll create a folder at the root level of our MAMP install called "keys". Go ahead and do this, then type /Applications/MAMP/keys in your Card number encryption key file path field. Also make sure the folder allows write permissions.

Now click on the PayPal Website Payments Pro tab and disable the gateway for use. Click on the Test Gateway tab and leave this enabled for use. Click on Credit card settings and leave the defaults selected. Under Customer messages, you can specify any custom messages you want to show the user if their credit card fails during the transaction process. Click Save configuration. When you save your configuration, you should get the following success message: "Credit card encryption key file generated. Card data will now be encrypted. The configuration options have been saved."

We have now successfully set up our payment processing with the Test Gateway enabled, and we're ready to start using Ubercart:

1. Edit the Configuration settings, which are shown in the following screenshot. Clicking on the Store link allows you to configure several options that control how information on the site is displayed, as well as specify some basic contact information for the site. To edit the settings for a section, click on the tab for that section.
2. On the Store settings page, we simply need to update the Store name, Store owner, e-mail address, and phone and fax numbers for the store. You can also enter store address information.
3. If your store is not located in the United States, you will also need to modify the Currency Format settings. If you are shipping items, you may need to tweak the Weight and Length formats.
4. You can also specify a store help page where you'll provide help information. Simply create a new page in Drupal, alias it, and then type the alias in here.
5. Once you finish entering your Store settings, click the Save configuration button.

Configuring Ubercart user permissions

You should also create a role for store administration by clicking on User management and then Roles. We will call our role Store Administrator. Click Add role to finish building the role. After the role has been added, click the edit permissions link for the role to define the permissions for the new role. Ubercart includes several sections of permissions, including: Catalog, Credit Card, File Downloads, Store, Payment, Product, Product attributes, Product kit, Order, and Taxes.

Since this is an administrative role, you may just want to give this staff person all permissions for an admin-level role on the site. Go ahead and check off the permissions you need to grant, and then save your role's permissions. If your site has employees, you may also want to create a separate role (or several roles) that does not give full control over the store to your employees. For example, you may not want an employee to be able to create or delete products, but you would like them to be able to take new orders.

Ubercart provided blocks

The Ubercart module ships with a core block that you can enable to make it easier for visitors to your site to see what they have ordered and to check out.
Go to your blocks administrative page, where you can enable the Shopping cart block to show in one of your block regions. I'll put it in the Sidebar second region. A few basic configuration options should be set after you enable the block: click on the configure link, and then you can title your block, select whether to display the shopping cart icon, make the cart block collapsible, add cart help text, and more. I'll leave the defaults selected for now. You should see a block admin screen similar to the following:
Deploy Toshi Bitcoin Node with Docker on AWS
Alex Leishman, 05 Aug 2015, 8 min read
Toshi is an implementation of the Bitcoin protocol, written in Ruby and built by Coinbase in response to their fast growth and need to build Bitcoin infrastructure at scale. This post will cover:

How to deploy Toshi to an Amazon AWS instance with Redis and PostgreSQL using Docker
How to query the data to gain insights into the blockchain

To get the most out of this post, you will need some basic familiarity with Linux, SQL, and AWS.

Most Bitcoin nodes run "Bitcoin Core", which is written in C++ and serves as the de facto standard implementation of the Bitcoin protocol. Its advantages are that it is fast for light-to-medium use and efficiently stores the transaction history of the network (the blockchain) in LevelDB, a key-value datastore developed at Google. It has wallet management features and an easy-to-use JSON RPC interface for communicating with other applications.

However, Bitcoin Core has some shortcomings that make it difficult to use for wallet/address management in at-scale applications. Its database, although efficient, makes it impossible or very difficult to perform certain queries on the blockchain. For example, if you wanted to get the balance of any bitcoin address, you would have to write a script to parse the blockchain separately to find the answer. Additionally, Bitcoin Core starts to significantly slow down when it has to manage and monitor large amounts of addresses (> ~10^7). For a web app with hundreds of thousands of users, each regularly generating new addresses, Bitcoin Core is not ideal.

Toshi attempts to address the flexibility and scalability issues facing Bitcoin Core by parsing and storing the entire blockchain in an easily queried PostgreSQL database. Here is a list of tables in Toshi's DB: schema.txt

We will see the direct benefit of this structure when we start querying our data to gain insights from the blockchain. Since Toshi is written in Ruby, it has the added advantage of being developer-friendly and easy to customize. The main downside of Toshi is the need for roughly ten times more storage than Bitcoin Core, as storing and indexing the blockchain in a well-indexed relational DB requires significantly more disk space.

First we will create an instance on Amazon AWS. You will need at least 300GB of storage for the Postgres database. Be sure to auto-assign a public IP and allow TLS incoming connections on port 5000, as this is how we will access the Toshi web interface. Once you get your instance up and running, SSH into the instance using the commands given by Amazon.

First we will set up a user for Toshi:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi
Adding user `toshi' ...
Adding new group `toshi' (1001) ...
Adding new user `toshi' (1001) with group `toshi' ...
Creating home directory `/home/toshi' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for toshi
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

Then we will add the new user to the sudoers group and switch to that user:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi sudo
Adding user `toshi' to group `sudo' ...
Adding user toshi to group sudo
Done.
ubuntu@ip-172-31-62-77:~$ su - toshi
toshi@ip-172-31-62-77:~$

Next, we will install Docker and all of its dependencies through an automated script available on the Docker website. This will provision our instance with the necessary software packages.
toshi@ip-172-31-62-77:~$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir .....

Then we will clone the Toshi repo from GitHub and move into the new directory:

toshi@ip-172-31-62-77:~$ git clone https://github.com/coinbase/toshi.git
toshi@ip-172-31-62-77:~$ cd toshi/

Next, build the coinbase/toshi Docker image from the Dockerfile located in the /toshi directory. Don't forget the dot at the end of the command!

toshi@ip-172-31-62-77:~/toshi$ sudo docker build -t=coinbase/toshi .
Sending build context to Docker daemon 13.03 MB
...
Removing intermediate container c15dd6c961c2
Step 3 : ADD Gemfile /toshi/Gemfile
INFO[0120] Error getting container dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465 from driver devicemapper: Error mounting '/dev/mapper/docker-202:1-524950-dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465' on '/var/lib/docker/devicemapper/mnt/dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465': no such file or directory

Note: you might see an 'Error getting container' message when this runs. If so, don't worry about it at this point.

Next, we will build and run our Redis and Postgres containers:

toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_db -d postgres
toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_redis -d redis

This will build and run Docker containers named toshi_db and toshi_redis based on standard postgres and redis images pulled from Docker Hub. The '-d' flag indicates that the container will run in the background (daemonized). If you see an 'Error response from daemon: Cannot start container' error while running either of these commands, simply run 'sudo docker start toshi_redis' (or 'sudo docker start toshi_db') again.

To ensure that our containers are running properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4de43ccc8e80 redis:latest "/entrypoint.sh redi 7 minutes ago Up 3 minutes 6379/tcp toshi_redis
6de0418d4e91 postgres:latest "/docker-entrypoint. 8 minutes ago Up 2 minutes 5432/tcp toshi_db

You should see both containers running, along with their port numbers. When we run our Toshi container, we need to tell it where to find the Postgres and Redis containers, so we must find the toshi_db and toshi_redis IP addresses. Remember, we have not run a Toshi container yet; we only built the image from the Dockerfile. You can think of a container as a running version of an image. To learn more about Docker, see the docs.

toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_db | grep IPAddress
"IPAddress": "172.17.0.3",
toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_redis | grep IPAddress
"IPAddress": "172.17.0.2",

Now we have everything we need to get our Toshi container up and running. To do this, run:

sudo docker run --name toshi_main -d -p 5000:5000 -e REDIS_URL=redis://172.17.0.2:6379 -e DATABASE_URL=postgres://postgres:@172.17.0.3:5432 -e TOSHI_ENV=production coinbase/toshi sh -c 'bundle exec rake db:create db:migrate; foreman start'

Be sure to replace the IP addresses in the above command with your own. This creates a container named 'toshi_main', runs it as a daemon (-d), and sets three environment variables in the container (-e) which are required for Toshi to run. It also maps port 5000 inside the container to port 5000 of our host (-p).
Lastly, it runs a shell script in the container (sh -c) which creates and migrates the database, then starts the Toshi web server. To see that it has started properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
017c14cbf432 coinbase/toshi:latest "sh -c 'bundle exec 6 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp toshi_main
4de43ccc8e80 redis:latest "/entrypoint.sh redi 43 minutes ago Up 38 minutes 6379/tcp toshi_redis
6de0418d4e91 postgres:latest "/docker-entrypoint. 43 minutes ago Up 38 minutes 5432/tcp toshi_db

If you have set your AWS security settings properly, you should be able to see the syncing progress of Toshi in your browser. Find your instance's public IP address from the AWS console and then point your browser there using port 5000, for example: http://54.174.195.243:5000/. You can also see the logs of our Toshi container by running:

toshi@ip-172-31-62-77:~$ sudo docker logs -f toshi_main

That's it! We're all up and running. Be prepared to wait a long time for the blockchain to finish syncing. This could take more than a week or two, but you can start playing around with the data right away through the GUI to get a sense of the power you now have.
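Once some blocks have synced, you can also start exploring the relational structure directly in psql. The query below is only a sketch: the table and column names are assumptions for illustration, not taken from Toshi's documented schema, so check the schema.txt file mentioned above for the real definitions.

-- Illustrative only: table and column names are assumptions, not taken
-- from Toshi's documented schema. Consult schema.txt for the real names.
-- Idea: count how many transactions each of the most recent blocks holds,
-- the kind of query that is impractical against Bitcoin Core's LevelDB.
SELECT b.height,
       COUNT(t.id) AS tx_count
FROM   blocks b
JOIN   transactions t ON t.block_id = b.id
GROUP  BY b.height
ORDER  BY b.height DESC
LIMIT  10;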
About the Author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com, where he is helping to build the future of money.
Joomla! with Flash: Showing maps using YOS amMap
Packt, 18 Nov 2009, 12 min read
Showing maps using YOS amMap

Adding a map to your site may be a necessity in some cases. For example, you may want to show the population of countries, or you may want to show a world map to your students for teaching geography. Flash maps are always interesting, as you can interact with them and view them as you like. amMap provides tools for showing Flash maps. The amMap tool is ported as a Joomla! component by yOpensource, and the component is released under the name YOS amMap. This component has two versions: free and commercial. The commercial or pro version has some advanced features that are not available in the free version.

The YOS amMap component, together with its module, allows you to display a map of the world, a region, or a country. You can choose the map to be displayed, which areas or countries are to be highlighted, and the way in which the viewers can control the map. Generally, maps displayed through the YOS amMap component can be zoomed, centered, or scrolled to the left, right, top, or bottom. You can also specify a color in which a region or a country should be displayed.

Installing and configuring YOS amMap

To use YOS amMap with your Joomla! website, you must first download it from http://yopensource.com/en/component/remository/?func=fileinfo&id=3. After downloading and extracting the compressed package, you get the component and module packages. Install the component and module from the Extensions | Install/Uninstall screen. Once installed, you can administer the YOS amMap component from Components | YOS amMap. This shows the YOS amMap Control Panel, as shown in the following screenshot:

YOS amMap Control Panel displays several icons through which you can configure and publish maps. The first thing you should do is configure the global settings for amMap. In order to do this, click on the Parameters icon in the toolbar. Doing so brings up the dialog box, as shown in the following screenshot:

In the Global Configuration section, you can enter a license key if you have purchased the commercial or pro version of this component. For the free version, this is not needed. In this section, you can also configure the legal extensions of files that can be uploaded through this component, the maximum file size for uploads, the legal image extensions, and the allowed MIME types of all uploads. You can also specify whether the Flash uploader will be used or not. Once you have configured these fields, click on the Save button and return to YOS amMap Control Panel.

Adding map files

You can see the list of available maps by clicking on the Maps icon on the YOS amMap Control Panel screen or by clicking on Components | amMap | Maps. This shows the Maps Manager screen, as shown in the next screenshot. As you can see, the Maps Manager screen displays the list of available maps. By default, you find the world.swf, continents.swf, and world_with_antartica.swf map files. You will find some extra maps in the amMap bundle. You can also download the original amMap package from http://www.ammap.com/download. After downloading the ZIP package, extract it, and you will find many maps in the maps subfolder. Any map from this folder can be uploaded to the Joomla! site from the Maps Manager screen.

Creating a map

There are several steps for creating a map using YOS amMap. First we need to upload the package for the map.
For example, if we want to display the map of the United States of America, we need to upload the map template, the map data file, and the map settings file for the United States of America. To do this, first upload the map template from the Maps Manager screen. You will find the map template for the USA in the ammap/maps folder.

Then we need to upload the data and settings files. To do so, click on the Upload link on the YOS amMap Control Panel screen. Then, in the Upload amMap screen, which is shown in the next screenshot, type the map's title (United States) in the Title field. Before clicking on the Browse button beside the Package File field, first add the ammap_data.xml and ammap_settings.xml files to a single ZIP file, unitedstates.zip. Now, click on the Browse button and select this unitedstates.zip file. Then click on the Upload File & Install button.

Once uploaded successfully, you see this map listed in the YOS amMap Manager screen, as shown in the next screenshot. You get this screen by clicking on the amMaps link on the toolbar. As you can see, the map that we have added is now listed in the YOS amMap Manager screen. However, the map is still in an unpublished state, and we need to configure the map before publishing it. We need to configure its data and settings files, which are discussed in the following sections.

Map data file

The different regions of a map are identified by the map data file. This is an XML file, and it defines the areas to be displayed on the map. The typical structure of a map data file can be understood by examining ammap_data.xml. The file has many comments that explain its structure. This file looks like the following:

<?xml version="1.0" encoding="UTF-8"?>
<map map_file="maps/world.swf" tl_long="-168.49" tl_lat="83.63" br_long="190.3" br_lat="-55.58" zoom_x="0%" zoom_y="0%" zoom="100%">
  <areas>
    <area title="AFGHANISTAN" mc_name="AF"></area>
    <area title="ALAND ISLANDS" mc_name="AX"></area>
    <area title="BANGLADESH" mc_name="BD"></area>
    <area title="BHUTAN" mc_name="BT"></area>
    <area title="CANADA" mc_name="CA"></area>
    <area title="UNITED ARAB EMIRATES" mc_name="AE"></area>
    <area title="UNITED KINGDOM" mc_name="GB"></area>
    <area title="UNITED STATES" mc_name="US"></area>
    <area title="borders" mc_name="borders" color="#FFFFFF" balloon="false"></area>
  </areas>
  <movies>
    <movie lat="51.3025" long="-0.0739" file="target" width="10" height="10" color="#CC0000" fixed_size="true" title="build-in movie usage example"></movie>
    <movie x="59.6667%" y="77.5%" file="icons/pin.swf" title="loaded movie usage example" text_box_width="250" text_box_height="140">
      <description>
        <![CDATA[You can add description text here. This text will appear when the user clicks on the movie. This description text can be html-formatted (for a list of which html tags are supported, visit <u><a href="http://livedocs.adobe.com/flash/8/main/00001459.html">this page</a></u>). You can add descriptions to areas and labels too.]]>
      </description>
    </movie>
  </movies>
  <labels>
    <label x="0" y="50" width="100%" align="center" text_size="16" color="#FFFFFF">
      <text><![CDATA[<b>World Map]]></text>
      <description><![CDATA[]]></description>
    </label>
  </labels>
  <lines>
    <line long="-0.0739, -74" lat="51.3025, 40.43" arrow="end" width="1" alpha="40"></line>
  </lines>
</map>

This code is a stripped-down version of the default ammap_data.xml file. Let us examine its structure and try to understand the meaning of each markup:

<map> </map>: You define the map's structure using this markup.
First, using the map_file attribute, we declare the map file that should be used to display this map. This markup has some other attributes through which we declare the top and left offsets in longitude and latitude. We can also specify the zooming level using the zoom_x, zoom_y, and zoom attributes.

<areas> </areas>: Areas are the regions or countries on a map. These are defined in the map. We only need to define the areas that we want to display. For example, in the sample, we have defined eight countries to be displayed and one straight line. Each area element has several attributes, among which you need to mention mc_name and title. You specify the area's name in mc_name, which is predefined in the map template. The title element will be displayed as the title of that map area. For example, <area mc_name="BD" title="Bangladesh"></area> means the area marked as BD in the map template will be displayed with the title Bangladesh. In order to specify the mc_name element, you need to follow the map template designer's instructions.

<movies> </movies>: Movies are extra clips that can be displayed as a separate layer on the map. For example, to display the capital of each country, a movie clip could be displayed at the specified latitude and longitude. You can also display other animations or text using a movie definition.

<labels> </labels>: The <labels> markup contains the text to be displayed on the map. You can add any text on a map by defining a label element.

To view and edit the map data file, ammap_data.xml, click on the map name on the YOS amMap Manager screen. This opens up the amMap: [Edit] screen, as shown in the following screenshot:

The amMap: [Edit] screen displays several configurations for the map. From the Details section you can change the map name, publish the map, and enable security. From the Design section you can view and edit the data and settings files. Clicking on Data will show the data file, which you can edit from the online editor. As we want to display the map of the USA, we will make the following changes on this screen:

Select usa.swf in the Maps list.
Change the data file as follows:

<?xml version="1.0" encoding="UTF-8"?>
<map map_file="maps/usa.swf" zoom="100%" zoom_x="7.8%" zoom_y="0.18%">
  <areas>
    <area mc_name="AL" title="Alabama"/>
    <area mc_name="AK" title="Alaska"/>
    <area mc_name="AZ" title="Arizona"/>
    <area mc_name="AR" title="Arkansas"/>
    <area mc_name="CA" title="California"/>
    <area mc_name="CO" title="Colorado"/>
    <area mc_name="CT" title="Connecticut"/>
    <area mc_name="DE" title="Delaware"/>
    <area mc_name="DC" title="District of Columbia"/>
    <area mc_name="FL" title="Florida"/>
    <area mc_name="GA" title="Georgia"/>
    <area mc_name="HI" title="Hawaii"/>
    <area mc_name="ID" title="Idaho"/>
    <area mc_name="IL" title="Illinois"/>
    <area mc_name="IN" title="Indiana"/>
    <area mc_name="IA" title="Iowa"/>
    <area mc_name="KS" title="Kansas"/>
    <area mc_name="KY" title="Kentucky"/>
    <area mc_name="LA" title="Louisiana"/>
    <area mc_name="ME" title="Maine"/>
    <area mc_name="MD" title="Maryland"/>
    <area mc_name="MA" title="Massachusetts"/>
    <area mc_name="MI" title="Michigan"/>
    <area mc_name="MN" title="Minnesota"/>
    <area mc_name="MS" title="Mississippi"/>
    <area mc_name="MO" title="Missouri"/>
    <area mc_name="MT" title="Montana"/>
    <area mc_name="NE" title="Nebraska"/>
    <area mc_name="NV" title="Nevada"/>
    <area mc_name="NH" title="New Hampshire"/>
    <area mc_name="NJ" title="New Jersey"/>
    <area mc_name="NM" title="New Mexico"/>
    <area mc_name="NY" title="New York"/>
    <area mc_name="NC" title="North Carolina"/>
    <area mc_name="ND" title="North Dakota"/>
    <area mc_name="OH" title="Ohio"/>
    <area mc_name="OK" title="Oklahoma"/>
    <area mc_name="OR" title="Oregon"/>
    <area mc_name="PA" title="Pennsylvania"/>
    <area mc_name="RI" title="Rhode Island"/>
    <area mc_name="SC" title="South Carolina"/>
    <area mc_name="SD" title="South Dakota"/>
    <area mc_name="TN" title="Tennessee"/>
    <area mc_name="TX" title="Texas"/>
    <area mc_name="UT" title="Utah"/>
    <area mc_name="VT" title="Vermont"/>
    <area mc_name="VA" title="Virginia"/>
    <area mc_name="WA" title="Washington"/>
    <area mc_name="WV" title="West Virginia"/>
    <area mc_name="WI" title="Wisconsin"/>
    <area mc_name="WY" title="Wyoming"/>
  </areas>
  <labels>
    <label x="0" y="60" width="100%" color="#FFFFFF" text_size="18">
      <text>Map of the United States of America</text>
    </label>
  </labels>
</map>

As you can see, we have defined the regions (states) on the map of the USA, and towards the end of the file, we have added a label for the map.

Select Yes for the Published field in the Details section. When you are done making these changes, click on the Save button to save them. Now we will look into the map settings file.

Map data files for countries are available with the amMap package. Thus, if you download amMap 2.5.1, you will get the map data files for different countries. For example, the map data file for the USA will be in the amMap_2.5.1/examples/_countries/usa folder.
Users, Roles, and Pages in DotNetNuke 5 - An Extension
Packt, 14 Apr 2010, 8 min read
Understanding DotNetNuke roles and role groups

We just discussed how to add a user to your site. But are all users created equal? To understand how users are allowed to interact with the portal, we need to take a look at what a role is and how it factors into the portal. There are plenty of real-world examples of roles we can look at. A police station, for example, can have sergeants, patrol cops, and detectives, and with each position come different responsibilities and privileges. In a police station, there are multiple people filling those positions (roles), with each of those belonging to the same position (role) sharing the same set of responsibilities and privileges. Roles in our portal work the same way. Roles are set up to divide the responsibilities needed to run your portal, as well as to provide different content to different groups of users of your portal.

We want our portal to be easy for the administrators to manage. To do this, we will need to settle on the different user roles needed for our site. To determine this, we first need to decide on the different types of users who will access the portal. We detail these user types in the following list:

Administrator: The administrators will have very high security. They will be able to modify, delete, or move anything on the site. They will be able to add and delete users and control all security settings. (This role comes built into DotNetNuke.) The default administrator account is created during the creation of the portal.

Home Page Admin: The home page admins will have the ability to modify only the information on the home page. They will be responsible for changing what users see when they first access your site. (We will be adding this role.)

Forum Admin: The forum moderators will have the ability to monitor and modify posts in your forum. They will have the ability to approve or disapprove messages posted. (We will be adding this role.)

Registered User: The registered users will be able to post messages in the forum and access sections of the site set aside for registered users only. (This role comes built into DotNetNuke; however, we will need to grant the proper permissions to this role to perform the mentioned tasks.)

Unauthenticated User: The unauthenticated user is the most basic of all the user types. Any person browsing your site will fall under this category until they have successfully logged in to the site. This user type will be able to browse certain sections of your portal, but will be restricted from posting in the forum and will not be allowed in the Registered Users Only section. (This role comes built into DotNetNuke; however, we will need to grant the proper permissions to this role to perform the mentioned tasks.)

Once you formulate the different user roles that will access the site, you will need to restrict users' access. For example, we only want the Home Page Admin to be able to edit items on the home page. To accomplish this, DotNetNuke uses role-based security. Role-based security allows you to give access to portions of your website based on what role the user belongs to. User-based security is also available per page or content section of the portal. However, the benefit of using a role-based security method is that you only have to define the access privileges for a role once. Then you just need to add users to that role, and they will possess the privileges that the role defines.
The following diagram gives you an idea of how this works. Looking at the diagram, we notice two things:

Users can be assigned to more than one role
More than one user can be assigned to a single role

This gives us great flexibility when deciding on the authorization that users will possess in our portal. To create the roles we have detailed, sign in with an administrator account and select ADMIN | Security Roles on the main menu, or click the Roles link in the Common Tasks section of the control panel. This is available at the top of every DNN page for authorized users. You might need to click the double down arrows on the top-right of the page to maximize the panel. The Security Roles page appears as shown in the following screenshot:

Notice that DotNetNuke comes with three roles already built into the system: the Administrators role (which we have been using), the Registered Users role, and the Subscribers role. Before we jump right into creating the role, let us first discuss the role group feature of DotNetNuke, as we will be using this feature when we create our roles. A role group is a collection of roles used mainly in large sites with a large number of roles, as a way of managing the roles more effectively. For the site we are building, the benefit of using role groups will be minimal. However, in order to demonstrate how to use them, we will create one for our administrative roles.

While on the Security Roles page, click the Add New Role Group link located below the list of roles. This will present us with the Edit Role Group page containing two form fields. Once these fields are filled out, click the Update link at the bottom of the page. A Filter By Role Group drop-down list should now be displayed above the list of roles on the Security Roles page. If you select the Administrative Roles group that we just created, two noticeable things happen on this page. Firstly, the list of roles becomes empty, as no roles are currently assigned to this role group. Secondly, an edit (pencil) icon and a delete (red cross) icon now appear next to the drop-down list. These icons can be used to update or remove any custom role groups that exist on the site. These red 'X' icons are standard icons used throughout DotNetNuke to represent the ability to delete/remove items.

Now that we have created our role group, we want to create the role for Home Page Admin. To do this, you again have two choices: either select Add New Role from the dropdown in the upper left, or click on the Add New Role link. This will bring up the Edit Security Roles page, as shown in the next screenshot. We will use this page to create the Home Page Admin role that we need. The Basic Settings shown in the screenshot are as follows:

Role Name: Make the name of your role short but descriptive. The name should attempt to convey its purpose.
Description: Here you may detail the responsibilities of the role.
Role Group: Set this field to the Administrative Roles group that we created earlier.
Public Role: Checking this will give registered users of your site the ability to sign up for this role themselves. We will be creating a Newsletter role and will demonstrate how this works when it is created.
Auto Assignment: If this is checked, users will automatically be assigned to this role as soon as they register for your portal.

CAUTION: Do not check the Public Role and Auto Assignment checkboxes unless you are sure.
Checking these options would allow all users of the site to become members of the associated role, which would grant them the privileges assigned to the role. As we want to decide who will be able to modify our home page, we will leave both of these unchecked. To save the settings, click on the Update link.

The Advanced Settings section allows you to set up a fee for certain security roles. We will not be configuring any of these settings for this particular exercise. However, we will briefly discuss how these settings can be used in the Assigning security roles to users section.

Now, to complete the roles that we will require for Coffee Connections, we will add two more security roles. The first role will be called Newsletter. We will be using this role to allow users to sign up for the newsletter we will be hosting at the Coffee Connections site. Set up the security role with the following information:

Role Name: Newsletter
Description: Allows users to register for the Coffee Connections Newsletter
Role Group: Leave this as the default < Global Roles >, as it is not going to be used to grant administrative rights to its members
Public Role: Yes (checked)
Auto Assignment: No (unchecked)

Click on the Update link to save this role. The second role will be called Forum Admin. We will be using this role to administer the forums at the Coffee Connections site. Set up the security role with the following information:

Role Name: Forum Admin
Description: Allows user to administer Coffee Connections Forum
Role Group: Administrative Roles
Public Role: No (unchecked)
Auto Assignment: No (unchecked)
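If you later write a custom module that must honor these roles, the check typically reduces to asking whether the current user belongs to a role by name. The snippet below is a hedged sketch: it assumes DotNetNuke's UserController and UserInfo expose the members used here, so verify the exact API against your DNN 5 installation before relying on it.

// Hedged sketch of a role check in a custom DotNetNuke module.
// Assumes UserController.GetCurrentUserInfo() and UserInfo.IsInRole()
// exist as used below; verify against your DNN 5 API documentation.
using DotNetNuke.Entities.Users;

public class ForumModuleHelper
{
    public static bool CanModerate()
    {
        // Fetch the user for the current request.
        UserInfo user = UserController.GetCurrentUserInfo();

        // Grant moderation rights to Forum Admins and Administrators.
        return user.IsInRole("Forum Admin") || user.IsInRole("Administrators");
    }
}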
JSON with JSON.Net
Packt, 25 Jun 2015, 16 min read
In this article by Ray Rischpater, author of the book JavaScript JSON Cookbook, we show you how you can use strong typing in your applications with JSON using C#, Java, and TypeScript. You'll find the following recipes:

How to deserialize an object using Json.NET
How to handle date and time objects using Json.NET
How to deserialize an object using gson for Java
How to use TypeScript with Node.js
How to annotate simple types using TypeScript
How to declare interfaces using TypeScript
How to declare classes with interfaces using TypeScript
Using json2ts to generate TypeScript interfaces from your JSON

(For more resources related to this topic, see here.)

While some say that strong types are for weak minds, the truth is that strong typing in programming languages can help you avoid whole classes of errors in which you mistakenly assume that an object of one type is really of a different type. Languages such as C# and Java provide strong types for exactly this reason. Fortunately, the JSON serializers for C# and Java support strong typing, which is especially handy once you've figured out your object representation and simply want to map JSON to instances of classes you've already defined. We use Json.NET for C# and gson for Java to convert from JSON to instances of classes you define in your application.

Finally, we take a look at TypeScript, an extension of JavaScript that provides compile-time checking of types, compiling to plain JavaScript for use with Node.js and browsers. We'll look at how to install the TypeScript compiler for Node.js, how to use TypeScript to annotate types and interfaces, and how to use a web page by Timmy Kokke to automatically generate TypeScript interfaces from JSON objects.

How to deserialize an object using Json.NET

In this recipe, we show you how to use Newtonsoft's Json.NET to deserialize JSON to an object that's an instance of a class. We'll use Json.NET because although this works with the existing .NET JSON serializer, there are other things that I want you to know about Json.NET, which we'll discuss in the next two recipes.

Getting ready

To begin, you need to be sure you have a reference to Json.NET in your project. The easiest way to do this is to use NuGet; launch NuGet, search for Json.NET, and click on Install, as shown in the following screenshot:

You'll also need a reference to the Newtonsoft.Json namespace in any file that needs those classes, with a using directive at the top of your file:

using Newtonsoft.Json;

How to do it…

Here's an example that provides the implementation of a simple class, converts a JSON string to an instance of that class, and then converts the instance back into JSON:

using System;
using Newtonsoft.Json;

namespace JSONExample
{
    public class Record
    {
        public string call;
        public double lat;
        public double lng;
    }

    class Program
    {
        static void Main(string[] args)
        {
            String json = @"{ 'call': 'kf6gpe-9',
                'lat': 21.9749, 'lng': 159.3686 }";

            var result = JsonConvert.DeserializeObject<Record>(
                json, new JsonSerializerSettings
                {
                    MissingMemberHandling = MissingMemberHandling.Error
                });
            Console.Write(JsonConvert.SerializeObject(result));
            return;
        }
    }
}

How it works…

In order to deserialize the JSON in a type-safe manner, we need to have a class that has the same fields as our JSON. The Record class, defined in the first few lines, does this, defining fields for call, lat, and lng.
The Newtonsoft.Json namespace provides the JsonConvert class with the static methods SerializeObject and DeserializeObject. DeserializeObject is a generic method, taking the type of the object that should be returned as a type argument, and as arguments the JSON to parse and an optional argument indicating options for the JSON parsing. We pass the MissingMemberHandling property as a setting, indicating with the value of the enumeration Error that in the event that a field is missing, the parser should throw an exception. After parsing the class, we convert it again to JSON and write the resulting JSON to the console.

There's more…

If you skip passing the MissingMemberHandling option or pass Ignore (the default), you can have mismatches between field names in your JSON and your class, which probably isn't what you want for type-safe conversion. You can also pass the NullValueHandling field with a value of Include or Ignore. If Include, fields with null values are included; if Ignore, fields with null values are ignored.

See also

The full documentation for Json.NET is at http://www.newtonsoft.com/json/help/html/Introduction.htm. Type-safe deserialization is also possible with JSON support using the .NET serializer; the syntax is similar. For an example, see the documentation for the JavaScriptSerializer class at https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx.

How to handle date and time objects using Json.NET

Dates in JSON are problematic for people because JavaScript's dates are in milliseconds from the epoch, which are generally unreadable to people. Different JSON parsers handle this differently; Json.NET has a nice IsoDateTimeConverter that formats the date and time in ISO format, making it human-readable for debugging or for parsing on platforms other than JavaScript. You can extend this method to convert any kind of formatted data in JSON attributes, too, by creating new converter objects and using a converter object to convert from one value type to another.

How to do it…

Simply include a new IsoDateTimeConverter object when you call JsonConvert.SerializeObject, like this:

string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());

How it works…

This causes the serializer to invoke the IsoDateTimeConverter instance with any instance of date and time objects, returning ISO strings like this in your JSON:

2015-07-29T08:00:00

There's more…

Note that this can be parsed by Json.NET, but not by JavaScript; in JavaScript, you'll want to use a function like this:

function isoDateReviver(key, value) {
    if (typeof value === 'string') {
        var a = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)(?:([+-])(\d{2}):(\d{2}))?Z?$/.exec(value);
        if (a) {
            var utcMilliseconds = Date.UTC(+a[1],
                +a[2] - 1,
                +a[3],
                +a[4],
                +a[5],
                +a[6]);
            return new Date(utcMilliseconds);
        }
    }
    return value;
}

The rather hairy regular expression matches dates in the ISO format, extracting each of the fields. If the regular expression finds a match, it extracts each of the date fields, which are then used by the Date class's UTC method to create a new date. Note that the entire regular expression, everything between the / characters, should be on one line with no whitespace.

See also

For more information on how Json.NET handles dates and times, see the documentation and example at http://www.newtonsoft.com/json/help/html/SerializeDateFormatHandling.htm.
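To tie the two sides together, here is a small usage sketch of our own (not from the book) showing the reviver plugged into JSON.parse:

// Usage sketch: JSON.parse calls the reviver for every key/value pair,
// so ISO 8601 strings come back as real Date objects.
var json = '{ "call": "kf6gpe-9", "when": "2015-07-29T08:00:00Z" }';
var result = JSON.parse(json, isoDateReviver);
console.log(result.when instanceof Date); // true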
How to deserialize an object using gson for Java

Like Json.NET, gson provides a way to specify the destination class to which you're deserializing a JSON object.

Getting ready

You'll need to include the gson JAR file in your application, just as you would for any other external API.

How to do it…

You use the same fromJson method as you use for type-unsafe JSON parsing with gson, except you pass the class object as the second argument, like this:

// Assuming we have a class Record that looks like this:
/*
class Record {
    private String call;
    private float lat;
    private float lng;
    // public API would access these fields
}
*/

Gson gson = new com.google.gson.Gson();
String json = "{ \"call\": \"kf6gpe-9\", \"lat\": 21.9749, \"lng\": 159.3686 }";
Record result = gson.fromJson(json, Record.class);

How it works…

The fromJson method always takes a Java class. In the example in this recipe, we convert directly to a plain old Java object that our application can use, without needing to use the dereferencing and type-conversion interface of JsonElement that gson provides.

There's more…

The gson library can deal with nested types and arrays as well. You can also hide fields from being serialized or deserialized by declaring them transient, which makes sense because transient fields aren't serialized.

See also

The documentation for gson and its support for deserializing instances of classes is at https://sites.google.com/site/gson/gson-user-guide#TOC-Object-Examples.

How to use TypeScript with Node.js

Using TypeScript with Visual Studio is easy; it's just part of the installation of Visual Studio for any version after Visual Studio 2013 Update 2. Getting the TypeScript compiler for Node.js is almost as easy: it's an npm install away.

How to do it…

On a command line with npm in your path, run the following command:

npm install -g typescript

The npm option -g tells npm to install the TypeScript compiler globally, so it's available to every Node.js application you write. Once you run it, npm downloads and installs the TypeScript compiler binary for your platform.

There's more…

Once you run this command to install the compiler, you'll have the TypeScript compiler tsc available on the command line. Compiling a file with tsc is as easy as writing the source code, saving it in a file with the .ts extension, and running tsc on it. For example, given the following TypeScript saved in the file hello.ts:

function greeter(person: string) {
    return "Hello, " + person;
}

var user: string = "Ray";

console.log(greeter(user));

Running tsc hello.ts at the command line creates the following JavaScript:

function greeter(person) {
    return "Hello, " + person;
}

var user = "Ray";

console.log(greeter(user));

Try it! As we'll see in the next section, the function declaration for greeter contains a single TypeScript annotation; it declares the argument person to be a string. Add the following line to the bottom of hello.ts:

console.log(greeter(2));

Now, run the tsc hello.ts command again; you'll get an error like this one:

C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2082: Supplied parameters do not match any signature of call target:
    Could not apply type 'string' to argument 1 which is of type 'number'.
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2087: Could not select overload for 'call' expression.

This error indicates that I'm attempting to call greeter with a value of the wrong type, passing a number where greeter expects a string.
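As a quick illustration of one possible fix (my own, not from the recipe), convert the argument to a string before the call and the compiler is satisfied:

// Converting the number to a string satisfies greeter's signature.
console.log(greeter(String(2))); // compiles, and prints "Hello, 2"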
In the next recipe, we'll look at the kinds of type annotations TypeScript supports for simple types.

See also

The TypeScript home page, with tutorials and reference documentation, is at http://www.typescriptlang.org/.

How to annotate simple types using TypeScript

Type annotations with TypeScript are simple decorators appended to the variable or function after a colon. There's support for the same primitive types as in JavaScript, as well as for declaring interfaces and classes, which we will discuss next.

How to do it…

Here's a simple example of some variable declarations and two function declarations:

function greeter(person: string): string {
    return "Hello, " + person;
}

function circumference(radius: number): number {
    var pi: number = 3.141592654;
    return 2 * pi * radius;
}

var user: string = "Ray";

console.log(greeter(user));
console.log("You need " + circumference(2) + " meters of fence for your dog.");

This example shows how to annotate functions and variables.

How it works…

Variables, either standalone or as arguments to a function, are decorated using a colon and then the type. For example, the first function, greeter, takes a single argument, person, which must be a string. The second function, circumference, takes a radius, which must be a number, and declares a single variable in its scope, pi, which must be a number and has the value 3.141592654.

You declare functions in the normal way as in JavaScript, and then add the type annotation after the function name, again using a colon and the type. So, greeter returns a string, and circumference returns a number.

There's more…

TypeScript defines the following fundamental type decorators, which map to their underlying JavaScript types:

array: This is a composite type. For example, you can write a list of strings as follows:
var list: string[] = [ "one", "two", "three" ];

boolean: This type decorator can contain the values true and false.

number: Like numbers in JavaScript itself, this can be any floating-point number.

string: This type decorator is a character string.

enum: An enumeration, written with the enum keyword, like this:
enum Color { Red = 1, Green, Blue };
var c: Color = Color.Blue;

any: This type indicates that the variable may be of any type.

void: This type indicates that the value has no type. You'll use void to indicate a function that returns nothing.

See also

For a list of the TypeScript types, see the TypeScript handbook at http://www.typescriptlang.org/Handbook.

How to declare interfaces using TypeScript

An interface defines how something behaves, without defining the implementation. In TypeScript, an interface names a complex type by describing the fields it has. This is known as structural subtyping.

How to do it…

Declaring an interface is a little like declaring a structure or class; you define the fields in the interface, each with its own type, like this:

interface Record {
    call: string;
    lat: number;
    lng: number;
}

function printLocation(r: Record) {
    console.log(r.call + ': ' + r.lat + ', ' + r.lng);
}

var myObj = { call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686 };

printLocation(myObj);

How it works…

The interface keyword in TypeScript defines an interface; as I already noted, an interface consists of the fields it declares with their types. In this listing, I defined a plain JavaScript object, myObj, and then called the previously defined function printLocation, which takes a Record.
When calling printLocation with myObj, the TypeScript compiler checks that the object has fields with the names and types the interface requires, and only permits the call to printLocation if the object matches the interface.

There's more…

Beware! TypeScript can only provide compile-time checking. What do you think the following code does?

interface Record {
    call: string;
    lat: number;
    lng: number;
}

function printLocation(r: Record) {
    console.log(r.call + ': ' + r.lat + ', ' + r.lng);
}

var myObj = { call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686 };
printLocation(myObj);

var json = '{"call":"kf6gpe-7","lat":21.9749}';
var myOtherObj = JSON.parse(json);
printLocation(myOtherObj);

First, this compiles with tsc just fine. When you run it with node, you'll see the following:

kf6gpe-7: 21.9749, 159.3686
kf6gpe-7: 21.9749, undefined

What happened? The TypeScript compiler does not add run-time type checking to your code, so you can't impose an interface on a run-time created object that's not a literal. In this example, because the lng field is missing from the JSON, the function can't print it, and prints the value undefined instead.

This doesn't mean that you shouldn't use TypeScript with JSON, however. Type annotations serve a purpose for all readers of the code, be they compilers or people. You can use type annotations to indicate your intent as a developer, and readers of the code can better understand the design and limitations of the code you write.

See also

For more information about interfaces, see the TypeScript documentation at http://www.typescriptlang.org/Handbook#interfaces.

How to declare classes with interfaces using TypeScript

Interfaces let you specify behavior without specifying implementation; classes let you encapsulate implementation details behind an interface. TypeScript classes can encapsulate fields or methods, just as classes in other languages do.

How to do it…

Here's an example of our Record structure, this time as a class with an interface:

interface RecordInterface {
    call: string;
    lat: number;
    lng: number;

    printLocation();
}

class Record implements RecordInterface {
    call: string;
    lat: number;
    lng: number;

    constructor(c: string, la: number, lo: number) {
        this.call = c;
        this.lat = la;
        this.lng = lo;
    }

    printLocation() {
        console.log(this.call + ': ' + this.lat + ', ' + this.lng);
    }
}

var myObj: Record = new Record('kf6gpe-7', 21.9749, 159.3686);

myObj.printLocation();

How it works…

The interface keyword, again, defines an interface just as the previous section shows. The class keyword, which you haven't seen before, implements a class; the optional implements keyword indicates that this class implements the interface RecordInterface.

Note that the class implementing the interface must have all of the fields and methods that the interface prescribes; otherwise, it doesn't meet the requirements of the interface. As a result, our Record class includes fields for call, lat, and lng, with the same types as in the interface, as well as the printLocation method. It also defines a constructor, a special method called when you create a new instance of the class using new. Note that with classes, unlike regular objects, the correct way to create them is by using a constructor, rather than just building them up as a collection of fields and values. We do that on the second-to-last line of the listing, passing the constructor arguments as function arguments to the class constructor.
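Classes also give you a convenient place to bridge the run-time gap we saw in the previous recipe. The following is a minimal sketch of that idea; the recordFromJson helper is my own illustration, not part of the recipe, and it assumes the Record class from the listing above:

// Hypothetical helper (not from the recipe): build a typed Record from
// parsed JSON, checking fields at run time before calling the constructor.
function recordFromJson(json: string): Record {
    var o = JSON.parse(json);
    if (typeof o.call !== 'string' ||
        typeof o.lat !== 'number' ||
        typeof o.lng !== 'number') {
        throw new Error('JSON does not match RecordInterface');
    }
    return new Record(o.call, o.lat, o.lng);
}

var r: Record = recordFromJson('{"call":"kf6gpe-7","lat":21.9749,"lng":159.3686}');
r.printLocation(); // missing or mistyped fields would have thrown above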
See also

There's a lot more you can do with classes, including defining inheritance and creating public and private fields and methods. For more information about classes in TypeScript, see the documentation at http://www.typescriptlang.org/Handbook#classes.

Using json2ts to generate TypeScript interfaces from your JSON

This last recipe is more of a tip than a recipe; if you've got some JSON you developed using another programming language or by hand, you can easily create a TypeScript interface for objects to contain the JSON by using Timmy Kokke's json2ts website.

How to do it…

Simply go to http://json2ts.com, paste your JSON in the box that appears, and click on the generate TypeScript button. You'll be rewarded with a second text box that shows you the definition of the TypeScript interface, which you can save as its own file and include in your TypeScript applications.

How it works…

The following figure shows a simple example:

You can save this TypeScript as its own definition file, with the suffix .d.ts, and then include the module with your TypeScript using the import keyword, like this:

import module = require('module');

Summary

In this article, we looked at how you can adapt the type-free nature of JSON to the type safety provided by languages such as C#, Java, and TypeScript to reduce programming errors in your application.

Resources for Article:

Further resources on this subject:
Playing with Swift [article]
Getting Started with JSON [article]
Top two features of GSON [article]
Remotely Preview and Test Mobile Web Pages on Actual Devices with Adobe Edge Inspect

Packt
19 Mar 2013
8 min read
Creating a sample mobile web application page for our testing purpose

Let's first check out a very simple demo application that we will set up for our test. Our demo application will be a simply structured HTML page targeted at mobile browsers. Its main purpose is to showcase the various inspection and debugging capabilities of Adobe Edge Inspect.

Now, let's get started by creating a directory named adobe_inspect_test inside your local web server's webroot directory. Since I have WAMP server installed on my Windows computer, I have created the directory inside the www folder (which is the webroot for WAMP server). Create a new empty HTML file named index.html inside the adobe_inspect_test directory. Fill it with the following code snippet:

<html>
<head>
<title>Simple demo</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0"/>
<style type="text/css">
html, body, p, div, br, input {
    margin: 0;
    padding: 0;
}
html, body {
    font-family: Helvetica;
    font-size: 14px;
    font-weight: bold;
    color: #222;
    width: 100%;
    height: 100%;
}
#wrapper {
    width: 100%;
    height: 100%;
    overflow: hidden;
}
.divStack {
    padding: 10px;
    margin: 20px;
    text-align: center;
}
#div1 {
    background: #ABA73C;
}
#div2 {
    background: #606873;
}
input[type=button] {
    padding: 5px;
}
</style>
<script type="text/javascript">
window.addEventListener('load', init, false);
function init() {
    document.getElementById('btn1').addEventListener('click', button1Clicked, false);
    document.getElementById('btn2').addEventListener('click', button2Clicked, false);
}
function button1Clicked() {
    console.log('Button 1 Clicked');
}
function button2Clicked() {
    console.log('Button 2 Clicked');
}
</script>
</head>
<body>
<div id="wrapper">
    <div id="div1" class="divStack"><p>First Div</p></div>
    <div id="div2" class="divStack"><p>Second Div</p></div>
    <div class="divStack">
        <input id="btn1" type="button" value="Button1" />
        <input id="btn2" type="button" value="Button2" />
    </div>
</div>
</body>
</html>

As you can make out, we have two div elements (#div1 and #div2) and two buttons (#btn1 and #btn2). We will play around with the two div elements, making changes to their HTML markup and CSS styles when we start remote debugging. Then, with our two buttons, we will see how we can check JavaScript console log messages remotely.

Adobe Edge Inspect is compatible with Google Chrome only, so throughout our testing we will be running our demo application in Chrome. Let's run the index.html page from our web server. This is how it looks as of now:

Now, let's check if the two buttons are generating the console log messages. For that, right-click inside Chrome and select Inspect element. This will open up the Chrome web developer tools window. Click on the Console tab. Now click on the two buttons and you will see console messages based on the button clicked. The following screenshot shows it:

Everything seems to work fine, and with that our demo application is ready to be tested on a mobile device with Adobe Edge Inspect. With the demo application running in Chrome and your mobile devices paired to your computer, you will instantly see the same page opening in the Edge Inspect client app on all your mobile devices. The following image shows how the page looks on an iPhone paired to my computer.
Open the Edge Inspect web inspector window

Now that you can see our demo application on all your paired mobile devices, we are ready to remotely inspect and debug on a targeted mobile device. Click on the Edge Inspect extension icon in Chrome and select a device for inspection. I am selecting the iPhone from the list of paired devices. Now click on the Remote Inspection button to the right of the selected device name. The following image should help you:

This will open up the Edge Inspect web inspector window, also known as the weinre (WEb INspector REmote) web inspector. This looks very similar to the Chrome web inspector window, doesn't it? So if you have experience with the Chrome web debugging tools, then this should look familiar to you. The following screenshot shows it:

As you can see, by default the Remote tab opens up. The title bar says target not connected. So, although your mobile device is paired, it is not yet ready for remote inspection. Under Devices you can see your paired mobile device name with the URL of the page opened in it. Now, click on the device name to connect it. As soon as you do that, you will notice that it turns green. Congratulations! You are now ready for remote inspection, as your mobile device is connected. The following screenshot shows it:

Changing HTML markup and viewing the results

Your target mobile device (the iPhone in my case) and the debug client (the Edge Inspect web inspector) are now connected to the weinre debug server, and we are ready to make some changes to the HTML. Click on the Elements tab in the Edge Inspect web inspector on your computer and you will see the HTML markup of our demo application in it. It may take a few seconds to load, since the communication is via Ajax over HTTP. Now, hover your mouse over any element and you will instantly see it being highlighted on your mobile device.

Now let's make some changes to the HTML and see if they are reflected on the target mobile device. Let's change the text inside #div2 from Second Div to Second Div edited. The following screenshot shows the change made to #div2 in the Edge Inspect web inspector on my computer:

And magic! It is changed on the iPhone too. Cool, isn't it? The following screenshot shows the new text inside #div2.

This is what remote debugging is all about. We are utilizing the power of Adobe Edge Inspect to overcome the limitations of pure debugging tools in mobile browsers. Instead, make changes on your computer and see them directly on your handset. Similarly, you can make other markup changes such as removing an HTML element, changing element properties, and so on.

Changing CSS style rules

Now, let's make some CSS changes. Select an element in the Elements tab and the corresponding CSS styles are listed on the right. Now make changes to the style and the results will be reflected on the mobile device as well. I have selected the First Div (#div1) element. Let's remove its padding. Uncheck the checkbox against padding and the 10 px padding is removed. The following screenshot shows the CSS change made in the Edge Inspect web inspector on my computer:

And the following image shows the result being reflected on the iPhone instantly:

Similarly, you can change other style rules such as width, height, padding, and margin and see the changes directly on your device.

Viewing console log messages

Click on the Console tab in the Edge Inspect web inspector to open it. Now click/tap on the buttons one by one on your mobile device.
You will see that log messages are printed on the console for the corresponding button clicked/tapped on the mobile device. This way you can debug JavaScript code remotely. Edge Inspect lacks JavaScript debugging with breakpoints, which would have been really handy for watching local and global variables and function arguments by pausing execution and observing the state. Nevertheless, by using console messages you can at least know that your JavaScript code is executing correctly up to the point where the log is generated, so basic script debugging can be done. The following screenshot shows the console messages printed remotely from the paired iPhone:

Similarly, you can target another device for remote inspection and see the changes directly on that device. With that, we have covered how to remotely debug and test web applications running in a mobile device browser.

Summary

In this article, we have seen the most important feature of Adobe Edge Inspect, that is, how we can remotely preview, inspect, and debug a mobile web page. We first created a sample mobile web page, then edited it with Adobe Edge Inspect, changing its HTML markup and CSS styles through remote testing and seeing the results directly on the device.

Resources for Article:

Further resources on this subject:
Adapting to User Devices Using Mobile Web Technology [Article]
Getting Started on UDK with iOS [Article]
Getting Started with Internet Explorer Mobile [Article]

Personalize Your Own PBX Using FreePBX Features

Packt
26 Oct 2009
4 min read
Let's get started.

CallerID Lookup Sources

Caller ID lookup sources supplement the caller ID name information that is sent by most telephone companies. A caller ID lookup source contains a list of phone numbers matched with names. When FreePBX receives a call, it can query a lookup source with the number of the caller. If the caller is on the lookup source's list, a name is returned that will be sent along with the call wherever the call gets routed to. The name will be visible on a phone's caller ID display (if the phone supports caller ID), and is also visible in the FreePBX call detail records.

In order to set up a caller ID lookup source, click on the CallerID Lookup Sources link under the Inbound Call Control section of the navigation menu on the left side of the FreePBX interface, as shown in the following screenshot:

The Add Source screen has three common configuration options:

Source Description
Source type
Cache results

Source Description is used to identify this lookup source when it is being selected as a caller ID lookup source during the configuration of an inbound route.

Source type is used to select the method that this source will use to obtain caller ID name information. FreePBX allows a lookup source to use one of the following methods:

ENUM: FreePBX will use whichever ENUM servers are configured in /etc/asterisk/enum.conf to return caller ID name information. By default, this file contains the e164.arpa and e164.org zones for lookups. All ENUM servers in the enum.conf file will be queried.

HTTP: FreePBX will query a web service for caller ID name information using the HTTP protocol. A lookup source that uses HTTP to query for information can use services such as Google Phonebook or online versions of the white/yellow pages to return caller ID names. When HTTP is selected as the Source type, six additional options will appear for configuration. These options are discussed in the HTTP source type section.

MySQL: FreePBX will connect to a MySQL database to query for caller ID name information. Usually, this will be a database belonging to a Customer Relationship Management (CRM) software package in which all customer information is stored. When MySQL is selected as the Source type, five additional options will appear for configuration. These options are discussed later in the MySQL source type section.

SugarCRM: As of FreePBX version 2.5.1, this option is not yet implemented. In the future, this Source type option will allow FreePBX to connect to the database used by the SugarCRM software package to query for caller ID name information.

If the Cache results checkbox is selected, then when a lookup source returns results, they will be cached in the local AstDB database for quicker retrieval the next time the same number is looked up. Note that values cached in the AstDB will persist past a restart of Asterisk and a reboot of the PBX. Once a caller ID name has been cached, FreePBX will always return that name, even if the name in the lookup source changes. Caching must be disabled for a new caller ID name to be returned from the lookup source.

Once all configuration options have been filled out, click on the Submit Changes button followed by the orange-colored Apply Configuration Changes bar to make the new lookup source available to inbound routes.

Now that we have an available lookup source, we can configure an inbound route to use this source to set caller ID information.
Click on the Inbound Routes link under the Inbound Call Control section of the navigation menu on the left side of the FreePBX interface, as shown in the following screenshot:

Click the name of the inbound route that will use the new lookup source in the menu on the right side of the page (in this example, DID 5551234567), as shown in the following screenshot:

Scroll down the page to the CID Lookup Source section. Select the name of the new lookup source from the Source drop-down menu:

Click on the Submit button at the bottom of the page, followed by the orange-colored Apply Configuration Changes bar at the top of the page. Calls that are routed using this inbound route will now query our new lookup source for caller ID name information.