How-To Tutorials - Web Development


Professional Plone Development: Foreword by Alexander Limi

Packt
22 Oct 2009
9 min read
  Foreword by Alexander Limi, co-founder of Plone It's always fascinating how life throws you a loop now and then that changes your future in a profound way—and you don't realize it at the time. As I sit here almost six years after the Plone project started, it seems like a good time to reflect on how the last years changed everything, and some of the background of why you are holding this book in your hands—because the story about the Plone community is at least as remarkable as the software itself. It all started out in a very classic way—I had just discovered Zope and Python, and wanted to build a simple web application to teach myself how they worked. This was back in 1999, when Zope was still a new, unproven technology, and had more than a few rough spots. I have never been a programmer, but Python made it all seem so simple that I couldn't resist trying to build a simple web application with it. After reading what I could find of documentation at the time, I couldn't quite figure it out—so I ended up in the online Zope chat rooms to see if I could get any help with building my web application. Little did I know that what happened that evening would change my life in a significant way. I met Alan Runyan online, and after trying to assist me, we ended up talking about music instead. We also reached the conclusion that I should focus on what I was passionate about—instead of coding, I wanted to build great user interfaces and make things easy to use. Alan wanted to provide the plumbing to make the system work. For some reason, it just clicked at that point, and we collaborated online and obsessed over the details of the system for months. External factors were probably decisive here too: I was without a job, and my girlfriend had left me a few months prior; Alan had just given up his job as a Java programmer at a failed dot-com company and decided to start his own company doing Python instead—so we both ended up pouring every living hour into the project, and moving at a break-neck pace towards getting the initial version out. We ended up getting a release ready just before the EuroPython Conference in 2002, and this was actually the first time I met Alan in person. We had been working on Plone for the past year just using email and IRC chat—two technologies that are still cornerstones of Plone project communication. I still remember the delight in discovering that we had excellent communication in person as well. What happened next was somewhat surreal for people new to this whole thing: we were sitting in the audience in the "State of Zope" talk held by Paul Everitt. He got to the part of his talk where he called attention to people and projects that he was especially impressed with. When he called out our names and talked about how much he liked Plone—which at this point was still mostly the effort of a handful of people—it made us feel like we were really onto something. This was our defining moment. For those of you who don't know Paul, he is one of the founders of Zope Corporation, and would go on to become our most tireless and hard-working supporter. He got involved in all the important steps that would follow—he put a solid legal and marketing story in place and helped create the Plone Foundation—and did some great storytelling along the way. There is no way to properly express how much Paul has meant to us personally—and to Plone—five years later. His role was crucial in the story of Plone's success, and the project would not be where it is now without him. 
Looking back, it sounds a bit like the classic romanticized start-up stories of Silicon Valley, except that we didn't start a company together. We chose to start two separate companies—in hindsight a very good decision. It never ceases to amaze me how much of an impact the project has had since. We are now an open-source community of hundreds of companies doing Plone development, training, and support. In just the past month, large companies like Novell and Akamai—as well as government agencies like the CIA, and NGOs like Oxfam—have revealed that they are using Plone for their web content management, and more will follow. The Plone Network site, plone.net, lists over 150 companies that offer Plone services, and the entire ecosystem is estimated to have revenues in the hundreds of millions of US dollars annually. This year's Plone Conference in Naples, Italy is expected to draw over 300 developers and users from around the world. Not bad for a system that was conceived and created by a handful of people standing on the shoulders of the giants of the Zope and Python communities. But the real story here is about an amazing community of people—individuals and organizations, large and small—all coming together to create the best content management system on the planet. We meet in the most unlikely locations—from ancient castles and mountain-tops in Austria, to the archipelagos and fjords of Norway, the sandy beaches of Brazil, and the busy corporate offices of Google in Silicon Valley. These events are at the core of the Plone experience, and developers nurture deep friendships within the community. I can say without a doubt that these are the smartest, kindest, most amazing people I have ever had the pleasure to work with. One of those people is Martin Aspeli, whose book you are reading right now. Even though we're originally from the same country, we didn't meet that way. Martin was at the time—and still is—living in London. He had contributed some code to one of our community projects a few months prior, and suggested that we should meet up when he was visiting his parents in Oslo, Norway. It was a cold and dark winter evening when we met at the train station—and ended up talking about how to improve Plone and the community process at a nearby café. I knew there and then that Martin would become an important part of the Plone project. Fast-forward a few years, and Martin has risen to become one of Plone's most important and respected—not to mention prolific—developers. He has architected and built several core components of the Plone 3 release; he has been one of the leaders on the documentation team, as well as an active guide in Plone's help forums. He also manages to fit in a day job at one of the "big four" consulting companies in the world. On top of all this, he was secretly working on a book to coincide with the Plone 3.0 release—which you are now the lucky owner of. This brings me to why this book is so unique, and why we are lucky to have Martin as part of our community. In the fast-paced world of open-source development—and Plone in particular—we have never had the chance to have a book that was entirely up-to-date on all subjects. There have been several great books in the past, but Martin has raised the bar further—by using the writing of a book to inform the development of Plone. If something didn't make sense, or was deemed too complex for the problem it was trying to solve—he would update that part of Plone so that it could be explained in simpler terms. 
It made the book better, and it has certainly made Plone better. Another thing that sets Martin's book apart is his unparalleled ability to explain advanced and powerful concepts in a very accessible way. He has years of experience developing with Plone and answering questions on the support forums, and is one of the most patient and eloquent writers around. He doesn't give up until you know exactly what's going on. But maybe more than anything, this book is unique in its scope. Martin takes you through every step from installing Plone, through professional development practices, unit tests, how to think about your application, and even through some common, non-trivial tasks like setting up external caching proxies like Varnish and authentication mechanisms like LDAP. In sum, this book teaches you how to be an independent and skillful Plone developer, capable of running your own company—if that is your goal—or provide scalable, maintainable services for your existing organization. Five years ago, I certainly wouldn't have imagined sitting here, jet-lagged and happy in Barcelona this Sunday morning after wrapping up a workshop to improve the multilingual components in Plone. Nor would I have expected to live halfway across the world in San Francisco and work for Google, and still have time to lead Plone into the future. Speaking of which, how does the future of Plone look like in 2007? Web development is now in a state we could only have dreamt about five years ago—and the rise of numerous great Python web frameworks, and even non-Python solutions like Ruby on Rails has made it possible for the Plone community to focus on what it excels at: content and document management, multilingual content, and solving real problems for real companies—and having fun in the process. Before these frameworks existed, people would often try to do things with Plone that it was not built or designed to do—and we are very happy that solutions now exist that cater to these audiences, so we can focus on our core expertise. Choice is good, and you should use the right tool for the job at hand. We are lucky to have Martin, and so are you. Enjoy the book, and I look forward to seeing you in our help forums, chat rooms, or at one of the many Plone conferences and workshops around the world. — Alexander Limi, Barcelona, July 2007 http://limi.net Alexander Limi co-founded the Plone project with Alan Runyan, and continues to play a key role in the Plone community. He is Plone's main user interface developer, and currently works as a user interaction designer at Google in California.


XPath Support in Oracle JDeveloper - XDK 11g

Packt
15 Oct 2009
11 min read
With SAX and DOM APIs, node lists have to be iterated over to access a particular node. Another advantage of navigating an XML document with XPath is that an attribute node may be selected directly. With DOM and SAX APIs, an element node has to be selected before an element attribute can be selected. Here we will discuss XPath support in JDeveloper.

What is XPath?

XPath is a language for addressing an XML document's elements and attributes. As an example, say you receive an XML document that contains the details of a shipment and you want to retrieve the element/attribute values from the XML document. You don't just want to list the values of all the nodes, but also want to output the values of specific elements or attributes. In such a case, you would use XPath to retrieve the values of those elements and attributes. XPath constructs a hierarchical structure of an XML document, a tree of nodes, which is the XPath data model. The XPath data model consists of seven node types, described in the following table:

Root Node: The root node is the root of the DOM tree. The document element (the root element) is a child of the root node. The root node also has the processing instructions and comments as child nodes.

Element Node: It represents an element in an XML document. The character data, elements, processing instructions, and comments within an element are the child nodes of the element node.

Attribute Node: It represents an attribute of an element, other than the xmlns attributes used to declare namespaces.

Text Node: The character data within an element is a text node. A text node has at least one character of data. A whitespace is also considered as a character of data. By default, the ignorable whitespace after the end of an element and before the start of the following element is also a text node. The ignorable whitespace can be excluded from the DOM tree built by parsing an XML document. This can be done by setting the whitespace-preserving mode to false with the setPreserveWhitespace(boolean flag) method.

Comment Node: It represents a comment in an XML document, except the comments within the DOCTYPE declaration.

Processing Instruction Node: It represents a processing instruction in an XML document, except the processing instructions within the DOCTYPE declaration. The XML declaration is not considered as a processing instruction node.

Namespace Node: It represents a namespace mapping, which consists of a namespace declaration such as xmlns:xsd="http://www.w3.org/2001/XMLSchema". A namespace node consists of a namespace prefix (xsd in the example) and a namespace URI (http://www.w3.org/2001/XMLSchema in the example).

Specific nodes including element, attribute, and text nodes may be accessed with XPath. XPath supports nodes in a namespace. Nodes in XPath are selected with an XPath expression. An expression is evaluated to yield an object of one of the following four types: node set, Boolean, number, or string. For an introduction on XPath refer to the W3C Recommendation for XPath (http://www.w3.org/TR/xpath). As a brief review, expression evaluation in XPath is performed with respect to a context node. The most commonly used type of expression in XPath is a location path. XPath defines two types of location paths: relative location paths and absolute location paths. A relative location path is defined with respect to a context node and consists of a sequence of one or more location steps separated by "/". A location step consists of an axis, a node test, and predicates.
An example of a location step is:

child::journal[position()=2]

In the example, the child axis contains the child nodes of the context node. The node test is the journal node set, and the predicate selects the second node in the journal node set. An absolute location path is defined with respect to the root node, and starts with "/". The difference between a relative location path and an absolute location path is that a relative location path starts with a location step, and an absolute location path starts with "/".

XPath in Oracle XDK 11g

Oracle XML Developer's Kit 11g, which is included in JDeveloper, provides the DOMParser class to parse an XML document and construct a DOM structure of the XML document. An XMLDocument object represents the DOM structure of an XML document. An XMLDocument object may be retrieved from a DOMParser object after an XML document has been parsed. The XMLDocument class provides select methods to select nodes in an XML document with an XPath expression. In this article we shall parse an example XML document with the DOMParser class, obtain an XMLDocument object for the XML document, and select nodes from the document with the XMLDocument class select methods. The different select methods in the XMLDocument class are described in the following table:

selectSingleNode(String XPathExpression): Selects a single node that matches an XPath expression. If more than one node matches the specified expression, the first node is selected. Use this method if you want to select the first node that matches an XPath expression.

selectNodes(String XPathExpression): Selects a node list of nodes that match a specified XPath expression. Use this method if you want to select a collection of similar nodes.

selectSingleNode(String XPathExpression, NSResolver resolver): Selects a single namespace node that matches a specified XPath expression. Use this method if the XML document has nodes in namespaces and you want to select the first node that is in a namespace and matches an XPath expression.

selectNodes(String XPathExpression, NSResolver resolver): Selects a node list of nodes that match a specified XPath expression. Use this method if you want to select a collection of similar nodes that are in a namespace.

The example XML document that is parsed in this article has a namespace declaration for elements in the namespace with the prefix journal. For an introduction on namespaces in XML refer to the W3C Recommendation on Namespaces in XML 1.0 (http://www.w3.org/TR/REC-xml-names/). catalog.xml, the example XML document, is shown in the following listing:

<?xml version="1.0" encoding="UTF-8"?>
<catalog title="Oracle Magazine" publisher="Oracle Publishing">
  <journal:journal journal_date="November-December 2008">
    <journal:article journal_section="ORACLE DEVELOPER">
      <title>Instant ODP.NET Deployment</title>
      <author>Mark A. Williams</author>
    </journal:article>
    <journal:article journal_section="COMMENT">
      <title>Application Server Convergence</title>
      <author>David Baum</author>
    </journal:article>
  </journal:journal>
  <journal date="March-April 2008">
    <article section="TECHNOLOGY">
      <title>Oracle Database 11g Redux</title>
      <author>Tom Kyte</author>
    </article>
    <article section="ORACLE DEVELOPER">
      <title>Declarative Data Filtering</title>
      <author>Steve Muench</author>
    </article>
  </journal>
</catalog>

Setting the environment

Create an application (called XPath, for example) and a project (called XPath) in JDeveloper. The XPath API will be demonstrated in a Java application.
Therefore, create a Java class in the XPath project with File | New. In the New Gallery window select Categories | General and Items | Java Class. In the Create Java Class window, specify the class name (XPathParser, for example), the package name (xpath in the example application), and click on the OK button. To develop an application with XPath, add the required libraries to the project classpath. Select the project node in Application Navigator and select Tools | Project Properties. In the Project Properties window, select the Libraries and Classpath node. To add a library, select the Add Library button. Select the Oracle XML Parser v2 library. Click on the OK button in the Project Properties window. We also need to add an XML document that is to be parsed and navigated with XPath. To add an XML document, select File | New. In the New Gallery window, select Categories | General | XML and Items | XML Document. Click on the OK button. In the Create XML File window specify the file name catalog.xml in the File Name field, and click on the OK button. Copy the catalog.xml listing to the catalog.xml file in the Application Navigator. The directory structure of the XPath project is shown in the following illustration:

XPath Search

In this section, we shall select nodes from the example XML document, catalog.xml, with the XPath Search tool of JDeveloper 11g. The XPath Search tool consists of an Expression field for specifying an XPath expression. Specify an XPath expression and click on OK to select nodes matching the XPath expression. The XPath Search tool has the provision to search for nodes in a specific namespace. An XML namespace is a collection of element and attribute names that are identified by a URI reference. Namespaces are specified in an XML document using namespace declarations. A namespace declaration is an attribute of the form xmlns:prefix="namespaceURI". To navigate catalog.xml with XPath, select catalog.xml in the Application Navigator and select Search | XPath Search. In the following subsections, we shall select example nodes using absolute location paths and relative location paths. Use a relative location path if the XML document is large and a specific node is required. Also, use a relative path if the node from which subnodes are to be selected and the relative location path are known. Use an absolute location path if the XML document is small, or if the relative location path is not known. The objective is to use minimum XPath navigation: use the minimum number of nodes to navigate in order to select the required node.

Selecting nodes with absolute location paths

Next, we shall demonstrate various examples of selecting nodes using XPath. As an example, select all the title elements in catalog.xml. Specify the XPath expression for selecting the title elements in the Expression field of the Apply an XPath Expression on catalog.xml window. The XPath expression to select all title elements is /catalog/journal/article/title. Click on the OK button to select the title elements. The title elements get selected. Title elements from the journal:article elements in the journal namespace do not get selected because a namespace has not been applied to the XPath expression. As another example, select the title element in the first article element using the XPath expression /catalog/journal/article[1]/title. We are not using namespaces yet. The XPath expression is specified in the Expression field.
The title of the first article element gets selected as shown in the JDeveloper output: Attribute nodes may also be selected with XPath. Attributes are selected by using the "@" prefix. As an example, select the section attribute in the first article element in the journal element. The XPath expression for selecting the section attribute is /catalog/journal/article[1]/@section and is specified in the Expression field. Click on the OK button to select the section attribute. The attribute section gets outputted in JDeveloper. Selecting nodes with relative location paths In the previous examples, an absolute location is used to select nodes. Next, we shall demonstrate selecting an element with a relative location path. As an example, select the title of the first article element in the journal element. The relative location path for selecting the title element is child::catalog/journal/article[position()=1]/title. Specifying the axis as child and node test as catalog selects all the child nodes of the catalog node and is equivalent to an absolute location path that starts with /catalog. If the child nodes of the journal node were required to be selected, specify the node test as journal. Specify the XPath expression in the Expression field and click on the OK button. The title of the first article element in the journal element gets selected as shown here: Selecting namespace nodes XPath Search also has the provision to select elements and attributes in a namespace. To illustrate, select all the title elements in the journal element (that is, in the journal namespace) using the XPath expression /catalog/journal:journal/journal:article/title. First, add the namespaces of the elements and attributes to be selected in the Namespaces text area. Prefix and URI of namespaces are added with the Add button. Specify the prefix in the Prefix column, and the URI in the URI column. Multiple namespace mappings may be added. XPath expressions that select namespace nodes are similar to no-namespace expressions, except that the namespace prefixes are included in the expressions. Elements in the default namespace, which does not have a namespace prefix, are also considered to be in a namespace. Click on the OK button to select the nodes with XPath. The title elements in the journal element (in the journal namespace) get selected and outputted in JDeveloper. Attributes in a namespace may also be selected with XPath Search. As an example, select the section attributes in the journal namespace. Specify the XPath expression to select the section attributes in the Expression field and click on the OK button. Section attributes in the journal namespace get selected.
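The same selections can also be made programmatically with the XDK select methods described earlier. The following is a minimal sketch, not taken from the article's sample code, of how an XPathParser class might do this; the file path and the journal namespace URI are placeholder assumptions that must match your own catalog.xml.

package xpath;

import java.io.FileInputStream;
import oracle.xml.parser.v2.DOMParser;
import oracle.xml.parser.v2.NSResolver;
import oracle.xml.parser.v2.XMLDocument;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class XPathParser {
    public static void main(String[] args) throws Exception {
        // Parse catalog.xml and obtain its DOM representation as an XMLDocument.
        DOMParser parser = new DOMParser();
        parser.setPreserveWhitespace(false);               // drop ignorable whitespace text nodes
        parser.parse(new FileInputStream("catalog.xml"));  // file location is an assumption
        XMLDocument document = parser.getDocument();

        // Absolute location path: all no-namespace title elements.
        NodeList titles = document.selectNodes("/catalog/journal/article/title");
        for (int i = 0; i < titles.getLength(); i++) {
            System.out.println(titles.item(i).getFirstChild().getNodeValue());
        }

        // Namespace selection: the URI returned here is a placeholder and must
        // match the journal namespace declared in catalog.xml.
        NSResolver resolver = new NSResolver() {
            public String resolveNamespacePrefix(String prefix) {
                return "journal".equals(prefix) ? "http://www.example.com/journal" : null;
            }
        };
        Node firstTitle = document.selectSingleNode(
            "/catalog/journal:journal/journal:article[1]/title", resolver);
        System.out.println(firstTitle.getFirstChild().getNodeValue());
    }
}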


Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
Getting ready

To run this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command:

sudo su - jenkins

Then print the home directory to the console using the following command:

echo $HOME

We will need to have a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows will use the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our article uses version 1.552. The combination of PhantomJS and YSlow is in no way unique to Jenkins CI. The example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include:

Environment Injector Plugin
JUnit Attachments Plugin
TAP Plugin
xUnit Plugin

To run the demo site, we must have Node.js installed. In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command:

node app.js

How to do it…

To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows:

Select the New Item link in Jenkins CI.
Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK.
To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5.
In the Build block, click on Add build step and then click on Execute shell.
In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example:
phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http://localhost:3000/css-demo > yslow.xml
In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report.
In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml.
Lastly, click on Save to persist the changes to this job.

Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now.
After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots: The landing page for a performance analysis project in Jenkins CI Note the Test Result Trend graph with the successes and failures. The Test Result report page for a specific build Note that the failed tests in the overall analysis are called out and that we can expand specific items to view their details. The All Tests view of the Test Result report page for a specific build Note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details. How it works… The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold. The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job. Next, we configured when our job will run, adding Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project—chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control. With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following: phantomjs: This is for the PhantomJS executable binary ${HOME}/tmp/yslow.js: This is to refer to the copy of the YSlow library accessible to the Jenkins CI user -i grade: This is to indicate that we want the "Grade" level of report detail -threshold "B": This is to indicate that we want to fail builds with an overall grade of "B" or below -f junit: This is to indicate that we want the results output in the JUnit format http://localhost:3000/css-demo: This is typed in as our target URL > yslow.xml: This is to redirect the JUnit-formatted output to that file on the disk What if PhantomJS isn't on the PATH for the Jenkins CI user? A relatively common problem that we may experience is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. 
Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; in the Properties Content text area, apply the PhantomJS binary's path to the PATH variable, as we would in any other script as follows: PATH=/path/to/phantomjs/bin:${PATH} After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. As our shell command is redirecting the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here. Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses are run, Jenkins CI will accumulate the results on the filesystem, keeping them until they are either manually removed or until a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them. There's more… The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example: Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most will perform at about this level"), or the page that is most likely to be the "worst performing" page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen. Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page (for example, "it is critical to optimize performance for first-time visitors"), or a product demo page (for example, "this is where conversions happen, so this is where performance needs to be best"). Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one. Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs—one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need to only look for the telltale red ball of a failed build in Jenkins CI. The second point worth considering is—where do we point PhantomJS and YSlow for the performance analysis? And how does the target URL's environment affect our interpretation of the results? If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss—we are assessing exactly what needs to be assessed. 
But if we are analyzing performance in production, then it's already too late—the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of the customers. However, these environments are likely to be different from production despite our best efforts. For example, though we may be "doing everything else right", perhaps our staging server causes all traffic to come back from a single hostname, and thus, we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this—but don't be disheartened—better to have results that are slightly off than to have no results at all!

Using TAP format

If JUnit formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job:

In the Command text area of our Execute shell block, we would enter the following command:
phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap
In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field.

Everything else about using TAP instead of JUnit-formatted results here is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin adds an additional link in the job for us, TAP Extended Test Results, as shown in the following screenshot: One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute Shell block (separating them with the && operator) and then set our Test Results target to be *.tap. This will conveniently combine the results of all our performance analyses into one.

Summary

In this article, we saw how to set up an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library.
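To illustrate that multiple-URL point from the Using TAP format section, the Execute Shell block of such a combined job might look like the following sketch; the second demo page (/js-demo) is an assumption and not part of the article's sample application.

# Each run writes its own TAP file; && stops the chain if an earlier run fails.
phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > css-demo.tap && \
phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/js-demo > js-demo.tap

With Publish TAP Results pointed at *.tap, both reports are picked up by the same build.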


Trapping Errors by Using Built-In Objects in JavaScript Testing

Packt
25 Aug 2010
6 min read
The Error object

An Error is a generic exception, and it accepts an optional message that provides details of the exception. We can use the Error object by using the following syntax:

new Error(message); // message can be a string or an integer

Here's an example that shows the Error object in action. The source code for this example can be found in the file error-object.html.

<html>
<head>
<script type="text/javascript">
function factorial(x) {
  if(x == 0) {
    return 1;
  } else {
    return x * factorial(x-1);
  }
}
try {
  var a = prompt("Please enter a positive integer", "");
  if(a < 0){
    var error = new Error(1);
    alert(error.message);
    alert(error.name);
    throw error;
  } else if(isNaN(a)){
    var error = new Error("it must be a number");
    alert(error.message);
    alert(error.name);
    throw error;
  }
  var f = factorial(a);
  alert(a + "! = " + f);
} catch (error) {
  if(error.message == 1) {
    alert("value cannot be negative");
  } else if(error.message == "it must be a number") {
    alert("value must be a number");
  } else throw error;
} finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

You may have noticed that the structure of this code is similar to the previous examples, in which we demonstrated try, catch, finally, and throw. In this example, we have made use of what we have learned, and instead of throwing the error directly, we have used the Error object. I need you to focus on the code given above. Notice that we have used an integer and a string as the message argument for var error, namely new Error(1) and new Error("it must be a number"). Take note that we can make use of alert() to create a pop-up window to inform the user of the error that has occurred and the name of the error, which is Error, as it is an Error object. Similarly, we can make use of the message property to create program logic for the appropriate error message. It is important to see how the Error object works, as the following built-in objects, which we are going to learn about, work similarly to how we have seen for the Error object. (We might be able to show how we can use these errors in the console log.)

The RangeError object

A RangeError occurs when a number is out of its appropriate range. The syntax is similar to what we have seen for the Error object. Here's the syntax for RangeError:

new RangeError(message);

message can either be a string or an integer. We'll start with a simple example to show how this works. Check out the following code that can be found in the source code folder, in the file rangeerror.html:

<html>
<head>
<script type="text/javascript">
try {
  var anArray = new Array(-1); // an array length must be positive
} catch (error) {
  alert(error.message);
  alert(error.name);
} finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

When you run this example, you should see an alert window informing you that the array is of an invalid length. After this alert window, you should receive another alert window telling you that the error is RangeError, as this is a RangeError object. If you look at the code carefully, you will see that I have deliberately created this error by giving a negative value to the array's length (an array's length must be positive).

The ReferenceError object

A ReferenceError occurs when a variable, object, function, or array that you have referenced does not exist. The syntax is similar to what you have seen so far and it is as follows:

new ReferenceError(message);

message can either be a string or an integer.
As this is pretty straightforward, I'll dive right into the next example. The code for the following example can be found in the source code folder, in the file referenceerror.html.

<html>
<head>
<script type="text/javascript">
try {
  x = y; // notice that y is not defined
} catch (error) {
  alert(error);
  alert(error.message);
  alert(error.name);
} finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

Take note that y is not defined, and we are expecting to catch this error in the catch block. Now try the previous example in your Firefox browser. You should receive four alert windows regarding the errors, with each window giving you a different message. The messages are as follows:

ReferenceError: y is not defined
y is not defined
ReferenceError
ok, all is done

If you are using Internet Explorer, you will receive slightly different messages. You will see the following messages:

[object Error]
message y is undefined
TypeError
ok, all is done

The TypeError object

A TypeError is thrown when we try to access a value that is of the wrong type. The syntax is as follows:

new TypeError(message); // message can be a string or an integer and it is optional

An example of TypeError is as follows:

<html>
<head>
<script type="text/javascript">
try {
  y = 1
  var test = function weird() {
    var foo = "weird string";
  }
  y = test.foo(); // foo is not a function
} catch (error) {
  alert(error);
  alert(error.message);
  alert(error.name);
} finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

If you try running this code in Firefox, you should receive an alert box stating that it is a TypeError. This is because test.foo() is not a function, and this results in a TypeError. JavaScript is capable of finding out what kind of error has been caught. Similarly, you can use the traditional method of throwing your own TypeError(), by uncommenting the code. The following built-in objects are less used, so we'll just move through quickly with the syntax of the built-in objects.
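As hinted earlier, these errors can also be routed to the console log instead of alert() pop-ups. The following short sketch is not from the book's sample files; it simply shows one way to branch on the built-in error types with instanceof and report them via console.error:

<script type="text/javascript">
try {
  var anArray = new Array(-1); // deliberately triggers a RangeError
} catch (error) {
  // Branch on the error's constructor rather than on its message text.
  if (error instanceof RangeError) {
    console.error("Range problem: " + error.message);
  } else if (error instanceof TypeError) {
    console.error("Type problem: " + error.message);
  } else {
    console.error(error.name + ": " + error.message);
  }
} finally {
  console.log("ok, all is done!");
}
</script>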


Getting Started with Ext GWT

Packt
28 Dec 2010
8 min read
Ext GWT 2.0: Beginner's Guide
Take the user experience of your website to a new level with Ext GWT

Explore the full range of features of the Ext GWT library through practical, step-by-step examples
Discover how to combine simple building blocks into powerful components
Create powerful Rich Internet Applications with features normally only found in desktop applications
Learn how to structure applications using MVC for maximum reliability and maintainability

What is GWT missing?

GWT is a toolkit as opposed to a full development framework, and for most projects, it forms part of a solution rather than the whole solution. Out-of-the-box, GWT comes with only a basic set of widgets and lacks a framework to enable developers to structure larger applications. Fortunately, GWT is both open and extensible and, as a result, a range of complementary projects have grown up around it. Ext GWT is one of those projects.

What does Ext GWT offer?

Ext GWT sets out to build upon the strengths of GWT by enabling developers to give their users an experience more akin to that of a desktop application. Ext GWT provides the GWT developer with a comprehensive component library similar to that used when developing for desktop environments. In addition to being a component library, powerful features for working with local and remote data are provided. It also features a model view controller framework, which can be used to structure larger applications.

How is Ext GWT licensed?

Licensing is always an important consideration when choosing technology to use in a project. At the time of writing, Ext GWT is offered with a dual license. The first license is an open source license compatible with the GNU GPL license v3. If you wish to use this license, you do not have to pay a fee for using Ext GWT, but in return you have to make your source code available under an open source license. This means you have to contribute all the source code of your project to the open source community and give everyone the right to modify or redistribute it. If you cannot meet the obligations of the open source license, for example, you are producing a commercial product or simply do not want to share your source code, you have to purchase a commercial license for Ext GWT. It is a good idea to check the current licensing requirements on the Sencha website, http://www.sencha.com, and take that into account when planning your project.

Alternatives to Ext GWT

Ext GWT is one of the many products produced by the company Sencha. Sencha was previously named Ext JS and started off developing a JavaScript library by the same name. Ext GWT is closely related to the Ext JS product in terms of functionality. Both Ext GWT and Ext JS share the same look and feel as well as a similar API structure. However, Ext GWT is a native GWT implementation, written almost entirely in Java, rather than a wrapper around the JavaScript-based Ext JS.

GWT-Ext

Before Ext GWT, there was GWT-Ext: http://code.google.com/p/gwt-ext/. This library was developed by Sanjiv Jeevan as a GWT wrapper around an earlier, 2.0.2 version of Ext JS. Being based on Ext JS, it has a very similar look and feel to Ext GWT. However, after the license of Ext JS changed from LGPL to GPL in 2008, active development came to an end. Apart from no longer being developed or supported, developing with GWT-Ext is more difficult than with Ext GWT. This is because the library is a wrapper around JavaScript and the Java debugger cannot help when there is a problem in the JavaScript code.
Manual debugging is required.

Smart GWT

When development of GWT-Ext came to an end, Sanjiv Jeevan started a new project named Smart GWT: http://www.smartclient.com/smartgwt/. This is an LGPL framework that wraps the Smart Client JavaScript library in a similar way that GWT-Ext wraps Ext JS. Smart GWT has the advantage that it is still being actively developed. Being LGPL-licensed, it can also be used commercially without the need to pay the license fee that is required for Ext GWT. Smart GWT still has the debugging problems of GWT-Ext, and its components are often regarded as not as visually pleasing as Ext GWT's. This could be down to personal taste, of course.

Vaadin

Vaadin, http://vaadin.com, is a third alternative to Ext GWT. Vaadin is a server-side framework that uses a set of precompiled GWT components. Although you can write your own components if required, Vaadin is really designed so that you can build applications by combining the ready-made components. In Vaadin, the browser client is just a dumb view of the server components, and any user interaction is sent to the server for processing, much like traditional Java web frameworks. This can be slow depending on the speed of the connection between the client and the server. The main disadvantage of Vaadin is the dependency on the server. GWT or Ext GWT's JavaScript can run in a browser without needing to communicate with a server. This is not possible in Vaadin.

Ext GWT or GXT?

To avoid confusion with GWT-Ext and to make it easier to write, Ext GWT is commonly abbreviated to GXT. We will use GXT synonymously with Ext GWT throughout the rest of this article.

Working with GXT: A different type of web development

If you are a web developer coming to GXT or GWT for the first time, it is very important to realize that working with this toolset is not like traditional web development. In traditional web development, most of the work is done on the server and the part the browser plays is little more than a view, making requests and receiving responses. When using GWT, especially GXT, at times it is easier if you suspend your web development thinking and think more like a desktop rich-client developer. Java Swing developers, for example, may find themselves at home.

How GXT fits into GWT

GXT is simply a library that plugs into any GWT project. If we have an existing GWT project setup, all we need to do to use it is:

Download the GXT SDK from the Sencha website
Add the library to the project and reference it in the GWT configuration
Copy a set of resource files to the project

If you haven't got a GWT project setup, don't worry. We will now work through getting GXT running from the beginning.

Downloading what you need

Before we can start working with GXT, we first need to download the toolkit and set up our development environment. Here is the list of what you need to download for running the examples:

Sun JDK 6: The Java development kit. Download from http://java.sun.com/javase/downloads/widget/jdk6.jsp
Eclipse IDE for Java EE Developers 3.6: The Eclipse IDE for Java developers, which also includes some useful web development tools. Download from http://www.eclipse.org/downloads/
Ext GWT 2.2.0 SDK for GWT 2.0: The GXT SDK itself. Download from http://www.sencha.com/products/gwt/download.php

Google supplies a useful plugin that integrates GWT into Eclipse. However, there is no reason that you cannot use an alternative development environment, if you prefer.
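Referring back to the "How GXT fits into GWT" steps above, the second step (referencing the library in the GWT configuration) usually amounts to an inherits entry in the project's module descriptor, the .gwt.xml file. The sketch below assumes the module name used by Ext GWT 2.x and a placeholder entry point class; check the module name against the version of the SDK you downloaded:

<?xml version="1.0" encoding="UTF-8"?>
<module>
  <!-- Core GWT widgets -->
  <inherits name="com.google.gwt.user.User"/>
  <!-- Ext GWT (GXT) library; module name assumed for GXT 2.x -->
  <inherits name="com.extjs.gxt.ui.GXT"/>
  <!-- Placeholder entry point class for this sketch -->
  <entry-point class="com.example.client.HelloGxt"/>
</module>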
Eclipse setup

There are different versions of Eclipse, and although Eclipse for Java EE developers is not strictly required, it contains some useful tools for editing web-specific files such as CSS. These tools will be useful for GXT development, so it is strongly recommended. We will not cover the details of installing Eclipse here, as this is covered more than adequately on the Eclipse website. For that reason, we make the assumption that you already have a fresh installation of Eclipse ready to go.

GWT setup

You may have noticed that GWT is not included in the list of downloads. This is because, since version 2.0.0, GWT has been available within an Eclipse plugin, which we will now set up.

Time for action – setting up GWT

In Eclipse, select Help | Install New Software. The installation dialog will appear.
Click on the Add button to add a new site. Enter the name and location in the respective fields, as shown in the following screenshot, and click on the OK button.
Select Google Plugin for Eclipse from the plugin section and Google Web Toolkit SDK from the SDKs section. Click on Next.
The following dialog will appear. Click on Next to proceed.
Click on the radio button to accept the license. Click on Finish.
Eclipse will now download the Google Web Toolkit and configure the plugin. Restart when prompted.

On restarting, if GWT and the Google Eclipse Plugin are installed successfully, you will notice three new icons in your toolbar.

What just happened?

You have now set up GWT in your Eclipse IDE. You are now ready to create GWT applications. However, before we can create GXT applications, there is a bit more work to do.
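Once the GXT SDK and its resource files are in place, a first entry point class might look like the following sketch; the package, class name, and widget choice are placeholder assumptions rather than code from this article:

package com.example.client;

import com.extjs.gxt.ui.client.widget.button.Button;
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.ui.RootPanel;

public class HelloGxt implements EntryPoint {
    // Called by GWT when the module is loaded in the browser.
    public void onModuleLoad() {
        Button button = new Button("Hello GXT");
        RootPanel.get().add(button);
    }
}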


Handling Long-running Requests in Play

Packt
22 Sep 2014
18 min read
In this article by Julien Richard-Foy, author of Play Framework Essentials, we will dive into the framework internals and explain how to leverage its reactive programming model to manipulate data streams. Firstly, I would like to mention that the code called by controllers must be thread-safe. We also noticed that the result of calling an action has type Future[Result] rather than just Result. This article explains these subtleties and gives answers to questions such as "How are concurrent requests processed by Play applications?" More precisely, this article presents the challenges of stream processing and the way the Play framework solves them. You will learn how to consume, produce, and transform data streams in a non-blocking way using the Iteratee library. Then, you will leverage these skills to stream results and push real-time notifications to your clients. By the end of the article, you will be able to do the following:

Produce, consume, and transform streams of data
Process a large request body chunk by chunk
Serve HTTP chunked responses
Push real-time notifications using WebSockets or server-sent events
Manage the execution context of your code

Play application's execution model

The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do? For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system runs on the same machine as the web application but this is, however, not a requirement. In fact, there are chances that in real-world projects, your database system is decoupled from your HTTP layer and that both run on different machines. It means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service, or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives a finer control on how to scale the system (more details about that are given further in this article). Anyway, the point is that the HTTP layer may essentially spend time waiting. With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model. That is, a model where each request is processed in its own thread.

Threaded execution model

Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time waiting for some remote data. Each action's execution is distinguished by a particular color.
In this fictive example, the action handling the first request may execute a query to a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a distant web service and then a second one, after the response of the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a distant web service that streams a response of an infinite size, hence, the multiple lines between the purple rectangles. The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation, because they consume memory (essentially because each thread has its own stack), and during execution, when the scheduler switches contexts. However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads, and thus save resources. This is exactly what the execution model used by Play—the evented execution model—does, as depicted in the following diagram: Evented execution model Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use, that's why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure, same for the third green rectangle, and so on). A comparison between the threaded and evented models can be found in the master's thesis of Benjamin Erb, Concurrent Programming for Scalable Web Architectures, 2012. An online version is available at http://berb.github.io/diploma-thesis/. An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalent in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than your number of cores, you necessarily have threads in an idle state (that is, waiting). This means, if we suppose that the machine executing the application has only two cores, in the first figure, there is even time spent waiting in the rectangles! Scaling up your server The previous section raises the question of how to handle a higher number of concurrent requests, as depicted in the following diagram: A server under an increasing load The previous section explained how to avoid wasting resources to leverage the computing power of your server. But actually, there is no magic; if you want to compute even more things per unit of time, you need more computing power, as depicted in the following diagram: Scaling using more powerful hardware One solution could be to have a more powerful server. 
But you could be smarter than that and avoid buying expensive hardware by studying the shape of the workload and making appropriate decisions at the software level. Indeed, chances are that your workload varies a lot over time, with peaks and troughs of activity. This suggests that if you wanted to buy more powerful hardware, its performance characteristics would be dictated by your highest activity peak, even if that peak occurs only occasionally. Obviously, this solution is not optimal because you would buy expensive hardware even if you actually needed it only one percent of the time (and more powerful hardware often also means more power-consuming hardware). A better way to handle workload elasticity consists of adding or removing server instances according to the activity level, as depicted in the following diagram:

Scaling using several server instances

This architecture design allows you to finely (and dynamically) tune your server capacity according to your workload. That's actually the cloud computing model. Nevertheless, this architecture has a major implication for your code; you cannot assume that subsequent requests issued by the same client will be handled by the same server instance. In practice, it means that you must treat each request independently of the others; you cannot, for instance, store a counter on a server instance to count the number of requests issued by a client (your server would miss some requests if one is routed to another server instance). In a nutshell, your server has to be stateless. Fortunately, Play is stateless, so as long as you don't explicitly have a mutable state in your code, your application is stateless. Note that the first implementation I gave of the shop was not stateless; indeed, the state of the application was stored in the server's memory.

Embracing non-blocking APIs

In the first section of this article, I claimed the superiority of the evented execution model over the threaded execution model in the context of web servers. That being said, to be fair, the threaded model has an advantage over the evented model: it is simpler to program with. Indeed, in such a case, the framework is responsible for creating the threads and the JVM is responsible for scheduling the threads, so that you don't even have to think about this at all, yet your code is concurrently executed.

On the other hand, with the evented model, concurrency control is explicit and you have to take care of it. Indeed, the fact that the same execution thread is used to run several concurrent actions has an important implication for your code: it should not block the thread. Indeed, while the code of an action is executed, no other action code can be concurrently executed on the same thread. What does blocking mean? It means holding a thread for too long, which typically happens when you perform a heavy computation or wait for a remote response. However, we saw that these cases, especially waiting for remote responses, are very common in web servers, so how should you handle them? You have to wait in a non-blocking way or implement your heavy computations as incremental computations. In all cases, you have to break down your code into computation fragments, whose execution is managed by the execution context. In the diagram illustrating the evented execution model, computation fragments are materialized by the rectangles.
You can see that rectangles of different colors are interleaved; you can find rectangles of another color between two rectangles of the same color. However, by default, the code you write forms a single block of execution instead of several computation fragments. This means that, by default, your code is executed sequentially; the rectangles are not interleaved! This is depicted in the following diagram:

Evented execution model running blocking code

The previous figure still shows both execution threads. The second one handles the blue action and then the purple infinite action, so that all the other actions can only be handled by the first thread. This figure illustrates the fact that while the evented model can potentially be more efficient than the threaded model, it can also have negative consequences on the performance of your application: infinite actions block an execution thread forever, and the sequential execution of actions can lead to much longer response times.

So, how can you break down your code into blocks that can be managed by an execution context? In Scala, you can do so by wrapping your code in a Future block:

Future {
  // This is a computation fragment
}

The Future API comes from the standard Scala library. For Java users, Play provides a convenient wrapper named play.libs.F.Promise:

Promise.promise(() -> {
  // This is a computation fragment
});

Such a block is a value of type Future[A] or, in Java, Promise<A> (where A is the type of the value computed by the block). We say that these blocks are asynchronous because they break the execution flow; you have no guarantee that the block will be sequentially executed before the following statement. When the block is actually evaluated depends on the execution context implementation that manages it. The role of an execution context is to schedule the execution of computation fragments. In the figure showing the evented model, the execution context consists of a thread pool containing two threads (represented by the two lines under the rectangles).

Actually, each time you create an asynchronous value, you have to supply the execution context that will manage its evaluation. In Scala, this is usually achieved using an implicit parameter of type ExecutionContext. You can, for instance, use an execution context provided by Play that consists, by default, of a thread pool with one thread per processor:

import play.api.libs.concurrent.Execution.Implicits.defaultContext

In Java, this execution context is automatically used by default, but you can explicitly supply another one:

Promise.promise(() -> { ... }, myExecutionContext);

Now that you know how to create asynchronous values, you need to know how to manipulate them. For instance, a sequence of several Future blocks is executed concurrently; how do we define an asynchronous computation that depends on another one?
You can eventually schedule a computation after an asynchronous value has been resolved using the foreach method:

val futureX = Future { 42 }
futureX.foreach(x => println(x))

In Java, you can perform the same operation using the onRedeem method:

Promise<Integer> futureX = Promise.promise(() -> 42);
futureX.onRedeem((x) -> System.out.println(x));

More interestingly, you can eventually transform an asynchronous value using the map method:

val futureIsEven = futureX.map(x => x % 2 == 0)

The map method exists in Java too:

Promise<Boolean> futureIsEven = futureX.map((x) -> x % 2 == 0);

If the function you use to transform an asynchronous value returned an asynchronous value too, you would end up with an inconvenient Future[Future[A]] value (or a Promise<Promise<A>> value, in Java). So, use the flatMap method in that case:

val futureIsEven = futureX.flatMap(x => Future { x % 2 == 0 })

The flatMap method is also available in Java:

Promise<Boolean> futureIsEven = futureX.flatMap((x) -> Promise.promise(() -> x % 2 == 0));

The foreach, map, and flatMap methods (or their Java equivalents) all set a dependency between two asynchronous values; the computation they take as a parameter is always evaluated after the asynchronous computation they are applied to.

Another method that is worth mentioning is zip:

val futureXY: Future[(Int, Int)] = futureX.zip(futureY)

The zip method is also available in Java:

Promise<Tuple<Integer, Integer>> futureXY = futureX.zip(futureY);

The zip method returns an asynchronous value eventually resolved to a tuple containing the two resolved asynchronous values. It can be thought of as a way to join two asynchronous values without specifying any execution order between them. If you want to join more than two asynchronous values, you can use the zip method several times (for example, futureX.zip(futureY).zip(futureZ).zip(…)), but an alternative is to use the Future.sequence function:

val futureXs: Future[Seq[Int]] = Future.sequence(Seq(futureX, futureY, futureZ, …))

This function transforms a sequence of future values into a future sequence value. In Java, this function is named Promise.sequence.

In the preceding descriptions, I always used the word eventually, and for a good reason. Indeed, if we use an asynchronous value to manipulate a result sent by a remote machine (such as a database system or a web service), the communication may fail due to some technical issue (for example, if the network is down). For this reason, asynchronous values have error recovery methods; for example, the recover method:

futureX.recover { case NonFatal(e) => y }

The recover method is also available in Java:

futureX.recover((throwable) -> y);

The previous code yields an asynchronous value that resolves to y if futureX fails. Libraries performing remote calls (such as an HTTP client or a database client) return such asynchronous values when they are implemented in a non-blocking way. You should always check whether the libraries you use are blocking or not, and keep in mind that, by default, Play is tuned to be efficient with non-blocking APIs. It is worth noting that JDBC is blocking, which means that the majority of Java-based libraries for database communication are blocking.

Obviously, once you get a value of type Future[A] (or Promise<A>, in Java), there is no way to get the A value unless you wait (and block) for the value to be resolved.
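Putting these combinators together, the following is a short, hedged sketch (not taken from the book's sample application; the value names are invented) showing how two independent asynchronous values can be combined and given a fallback. In Scala, a for-comprehension desugars to the flatMap and map calls described above:

import scala.concurrent.Future
import scala.util.control.NonFatal
import play.api.libs.concurrent.Execution.Implicits.defaultContext

// Two independent asynchronous computations (hypothetical values)
val futurePrice: Future[Double] = Future { 10.0 }
val futureQuantity: Future[Int] = Future { 3 }

// Combine them once both are resolved; this desugars to flatMap and map
val futureTotal: Future[Double] =
  for {
    price    <- futurePrice
    quantity <- futureQuantity
  } yield price * quantity

// Fall back to a default value if either computation failed
val safeTotal: Future[Double] =
  futureTotal.recover { case NonFatal(e) => 0.0 }

Note that futurePrice and futureQuantity start executing independently as soon as they are defined; the for-comprehension only expresses that futureTotal depends on both of them, much like futureX.zip(futureY) followed by a map would.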
We saw that the map and flatMap methods make it possible to manipulate the future A value, but you still end up with a Future[SomethingElse] value (or a Promise<SomethingElse>, in Java). This means that if your action's code calls an asynchronous API, it will end up with a Future[Result] value rather than a Result value. In that case, you have to use Action.async instead of Action, as illustrated in this typical code example:

val asynchronousAction = Action.async { implicit request =>
  service.asynchronousComputation().map(result => Ok(result))
}

In Java, there is nothing special to do; simply make your method return a Promise<Result> object:

public static Promise<Result> asynchronousAction() {
  return service.asynchronousComputation().map((result) -> ok(result));
}

Managing execution contexts

Because Play uses explicit concurrency control, controllers are also responsible for using the right execution context to run their action's code. Generally, as long as your actions do not invoke heavy computations or blocking APIs, the default execution context should work fine. However, if your code is blocking, it is recommended to use a distinct execution context to run it.

An application with two execution contexts (represented by the black and grey arrows). You can specify in which execution context each action should be executed, as explained in this section.

Unfortunately, there is no non-blocking standard API for relational database communication (JDBC is blocking). This means that all our actions that invoke code executing database queries should be run in a distinct execution context so that the default execution context is not blocked. This distinct execution context has to be configured according to your needs. In the case of JDBC communication, your execution context should be a thread pool with as many threads as your maximum number of connections. The following diagram illustrates such a configuration: it shows two execution contexts, each with two threads. The execution context at the top of the figure runs database code, while the default execution context (at the bottom) handles the remaining (non-blocking) actions.

In practice, it is convenient to use Akka to define your execution contexts, as they are easily configurable. Akka is a library used for building concurrent, distributed, and resilient event-driven applications. This article assumes that you have some knowledge of Akka; if that is not the case, do some research on it. Play integrates Akka and manages an actor system that follows your application's life cycle (that is, it is started and shut down with the application). For more information on Akka, visit http://akka.io.

Here is how you can create an execution context with a thread pool of 10 threads, in your application.conf file:

jdbc-execution-context {
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 10
  }
}

You can use it as follows in your code:

import play.api.libs.concurrent.Akka
import play.api.Play.current

implicit val jdbc =
  Akka.system.dispatchers.lookup("jdbc-execution-context")

The Akka.system expression retrieves the actor system managed by Play. Then, the execution context is retrieved using Akka's API.
The equivalent Java code is the following:

import play.libs.Akka;
import akka.dispatch.MessageDispatcher;
import play.core.j.HttpExecutionContext;

MessageDispatcher jdbc =
  Akka.system().dispatchers().lookup("jdbc-execution-context");

Note that controllers retrieve the current request's information from a thread-local static variable, so you have to attach it to the execution context's thread before using it from a controller's action:

play.core.j.HttpExecutionContext.fromThread(jdbc)

Finally, forcing the use of a specific execution context for a given action can be achieved as follows (provided that my.execution.context is an implicit execution context):

import my.execution.context

val myAction = Action.async {
  Future { … }
}

The Java equivalent code is as follows:

public static Promise<Result> myAction() {
  return Promise.promise(
    () -> { … },
    HttpExecutionContext.fromThread(myExecutionContext)
  );
}

Does this feel like clumsy code? Buy the book to learn how to reduce the boilerplate!

Summary

This article detailed a lot about the internals of the framework. You now know that Play uses an evented execution model to process requests and serve responses, which implies that your code should not block the execution thread. You know how to use future blocks and promises to define computation fragments that can be concurrently managed by Play's execution context, and how to define your own execution context with a different threading policy, for example, if you are constrained to use a blocking API.

Resources for Article:

Further resources on this subject:

Play! Framework 2 – Dealing with Content [article]
So, what is Play? [article]
Play Framework: Introduction to Writing Modules [article]
So, what is Apache Wicket?

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.) Wicket is a component-based Java web framework that uses just Java and HTML. Here, you will see the main advantages of using Apache Wicket in your projects. Using Wicket, you will not have mutant HTML pages. Most of the Java web frameworks require the insertion of special syntax to the HTML code, making it more difficult for Web designers. On the other hand, Wicket adopts HTML templates by using a namespace that follows the XHTML standard. It consists of an id attribute in the Wicket namespace (wicket:id). You won't need scripts to generate messy HTML code. Using Wicket, the code will be clearer, and refactoring and navigating within the code will be easier. Moreover, you can utilize any HTML editor to edit the HTML files, and web designers can work with little knowledge of Wicket in the presentation layer without worrying about business rules and other developer concerns. The advantages for developers are as follows: All code is written in Java No XML configuration files POJO-centric programming No Back-button problems (that is, unexpected and undesirable results on clicking on the browser's Back button) Ease of creating bookmarkable pages Great compile-time and runtime problem diagnosis Easy testability of components Another interesting thing is that concepts such as generics and anonymous subclasses are widely used in Wicket, leveraging the Java programming language to the max. Wicket is based on components. A component is an object that interacts with other components and encapsulates a set of functionalities. Each component should be reusable, replaceable, extensible, encapsulated, and independent, and it does not have a specific context. Wicket provides all these principles to developers because it has been designed taking into account all of them. In particular, the most remarkable principle is reusability. Developers can create custom reusable components in a straightforward way. For instance, you could create a custom component called SearchPanel (by extending the Panel class, which is also a component) and use it in all your other Wicket projects. Wicket has many other interesting features. Wicket also aims to make the interaction of the stateful server-side Java programming language with the stateless HTTP protocol more natural. Wicket's code is safe by default. For instance, it does not encode state in URLs. Wicket is also efficient (for example, it is possible to do a tuning of page-state replication) and scalable (Wicket applications can easily work on a cluster). Last, but not least, Wicket has support for frameworks like EJB and Spring. Installation In seven easy steps, you can build a Wicket "Hello World" application. Step 1 – what do I need? Before you start to use Apache Wicket 6, you will need to check if you have all of the required elements, listed as follows: Wicket is a Java framework, so you need to have Java virtual machine (at least Version 6) installed on your machine. Apache Maven is required. Maven is a tool that can be used for building and managing Java projects. Its main purpose is to make the development process easier and more structured. More information on how to install and configure Maven can be found at http://maven.apache.org. The examples of this book use the Eclipse IDE Juno version, but you can also use other versions or other IDEs, such as NetBeans. In case you are using other versions, check the link for installing the plugins to the version you have; the remaining steps will be the same. 
In case you use another IDE, you will need to follow a tutorial to install equivalent plugins, or simply work without them.

Step 2 – installing the m2eclipse plugin

The steps for installing the m2eclipse plugin are as follows:

Go to Help | Install New Software.
Click on Add and type m2eclipse in the Name field; copy and paste the link https://repository.sonatype.org/content/repositories/forge-sites/m2e/1.3.0/N/LATEST into the Location field.
Check all options and click on Next.
Conclude the installation of the m2eclipse plugin by accepting all agreements and clicking on Finish.

Step 3 – creating a new Maven application

The steps for creating a new Maven application are as follows:

Go to File | New | Project. Then go to Maven | Maven Project.
Click on Next and type wicket in the next form.
Choose the wicket-archetype-quickstart Maven archetype and click on Next.
Fill the next form according to the following screenshot and click on Finish:

Step 4 – coding the "Hello World" program

In this step, we will build the famous "Hello World" program. The separation of concerns between HTML and Java code will be clear. In this example, and in most cases, each HTML file has a corresponding Java class (with the same name). First, we will analyse the HTML template code. The content of the HomePage.html file must be replaced by the following code:

<!DOCTYPE html>
<html>
<body>
<span wicket:id="helloWorldMessage">Test</span>
</body>
</html>

It is simple HTML code with the Wicket template wicket:id="helloWorldMessage". It indicates that in the Java code related to this page, a method will replace the message Test with another message. Now, let's edit the corresponding Java class; that is, HomePage.

package com.packtpub.wicket.hello_world;

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

public class HomePage extends WebPage
{
   public HomePage()
   {
      add(new Label("helloWorldMessage", "Hello world!!!"));
   }
}

The class HomePage extends WebPage; that is, it inherits some of the WebPage class's methods and attributes, and it becomes a WebPage subtype. One of these inherited methods is add(), to which a Label object can be passed as a parameter. A Label object can be built by passing two parameters: an identifier and a string. The add() method is called in the HomePage class's constructor and will replace the message in the element marked with wicket:id="helloWorldMessage" with Hello world!!!. The resulting HTML code will be as shown in the following code snippet:

<!DOCTYPE html>
<html>
<body>
<span>Hello world!!!</span>
</body>
</html>

Step 5 – compile and run!

The steps to compile and run the project are as follows:

To compile, right-click on the project and go to Run As | Maven install.
Verify that the compilation was successful. If not, Wicket provides good error messages, so you can try to fix what is wrong.
To run the project, right-click on the class Start and go to Run As | Java application. The class Start will run an embedded Jetty instance that will run the application.
Verify that the server has started without any problems.
Open a web browser and enter this in the address field: http://localhost:8080. In case you have changed the port, enter http://localhost:<port>.
The browser should show Hello world!!!.

The most common problem that can occur is that port 8080 is already in use. In this case, you can go into the Java Start class (found at src/test/java) and set another port by replacing 8080 in connector.setPort(8080) (line 21) with another number (for example, 9999).
To stop the server, you can either click on Console and press any key or click on the red square on the console, which indicates termination. And that's it! By this point, you should have a working Wicket "Hello World" application and are free to play around and discover more about it. Summary This article describes how to create a simple "Hello World" application using Apache Wicket 6. Resources for Article : Further resources on this subject: Tips for Deploying Sakai [Article] OSGi life cycle [Article] Apache Wicket: Displaying Data Using DataTable [Article]
Touch Events

Packt
26 Nov 2013
7 min read
(For more resources related to this topic, see here.) The why and when of touch events These touch events come in a few different flavors. There are taps, swipes, rotations, pinches, and more complicated gestures available for you to make your users' experience more enjoyable. Each type of touch event has an established convention which is important to know, so your user has minimal friction when using your website for the first time: The tap is analogous to the mouse click in a traditional desktop environment, whereas, the rotate, pinch, and related gestures have no equivalence. This presents an interesting problem—what can those events be used for? Pinch events are typically used to scale views (have a look at the following screenshot of Google Maps accessed from an iPad, which uses pinch events to control the map's zoom level), while rotate events generally rotate elements on screen. Swipes can be used to move content from left to right or top to bottom. Multifigure gestures can be different as per your website's use cases, but it's still worthwhile examining if a similar interaction already exists, and thus an established convention. Web applications can benefit greatly from touch events. For example, imagine a paint-style web application. It could use pinches to control zoom, rotations to rotate the canvas, and swipes to move between color options. There is almost always an intuitive way of using touch events to enhance a web application. In order to get comfortable with touch events, it's best to get out there and start using them practically. To this end, we will be building a paint tool with a difference, that is, instead of using the traditional mouse pointer as the selection and drawing tool, we will exclusively use touch events. Together, we can learn just how awesome touch events are and the pleasure they can afford your users. Touch events are the way of the future, and adding them to your developer tool belt will mean that you can stay at the cutting edge. While MC Hammer may not agree with these events, the rest of the world will appreciate touching your applications. Testing while keeping your sanity (Simple) Touch events are awesome on touch-enabled devices. Unfortunately, your development environment is generally not touch-enabled, which means testing these events can be trickier than normal testing. Thankfully, there are a bunch of methods to test these events on non-touch enabled devices, so you don't have to constantly switch between environments. In saying that though, it's still important to do the final testing on actual devices, as there can be subtle differences. How to do it... Learn how to do initial testing with Google Chrome. Debug remotely with Mobile Safari and Safari. Debug further with simulators and emulators. Use general tools to debug your web application. How it works… As mentioned earlier, testing touch events generally isn't as straightforward as the normal update-reload-test flow in web development. Luckily though, the traditional desktop browser can still get you some of the way when it comes to testing these events. While you generally are targeting Mobile Safari on iOS and Chrome, or Browser on Android, it's best to do development and initial testing in Chrome on the desktop. Chrome has a handy feature in its debugging tools called Emulate touch events. This allows a lot of your mouse-related interactions to trigger their touch equivalents. 
You can find this setting by opening the Web Inspector, clicking on the Settings cog, switching to the Overrides tab, and enabling Emulate touch events (if the option is grayed out, simply click on Enable at the top of the screen). Using this emulation, we are able to test many touch interactions; however, there are certain patterns (such as scaling and rotating) that aren't really possible just using emulated events and the mouse pointer. To correctly test these, we need to find the middle ground between testing on a desktop browser and testing on actual devices. There's more… Remember the following about testing with simulators and browsers: Device simulators While testing with Chrome will get you some of the way, eventually you will want to use a simulator or emulator for the target device, be it a phone, tablet, or other touch-enabled device. Testing in Mobile Safari The iOS Simulator, for testing Mobile Safari on iPhone, iPod, and iPad, is available only on OS X. You must first download Xcode from the App Store. Once you have Xcode, go to Xcode | Preferences | Downloads | Components, where you can download the latest iOS Simulator. You can also refer to the following screenshot: You can then open the iOS Simulator from Xcode by accessing Xcode | Open Developer Tool | iOS Simulator. Alternatively, you can make a direct alias by accessing Xcode's package contents by accessing the Xcode icon's context menu in the Applications folder and selecting Show Package Contents. You can then access it at Xcode.app/Contents/Applications. Here you can create an alias, and drag it anywhere using Finder. Once you have the simulator (or an actual device); you can then attach Safari's Web Inspector to it, which makes debugging very easy. Firstly, open Safari and then launch Mobile Safari in either the iOS Simulator or on your actual device (ensure it's plugged in via its USB cable). Ensure that Web Inspector is enabled in Mobile Safari's settings via Settings | Safari | Advanced | Web Inspector. You can refer to the following screenshot: Then, in Safari, go to Develop | <your device name> | <tab name>. You can now use the Web Inspector to inspect your remote Mobile Safari instance as shown in the following screenshot: Testing in Mobile Safari in the iOS Simulator allows you to use the mouse to trigger all the relevant touch events. In addition, you can hold down the Alt key to test pinch gestures. Testing on Android's Chrome Testing on the Android on desktop is a little more complicated, due to the nature of the Android emulator. Being an emulator and not a simulator, its performance suffers (in the name of accuracy). It also isn't so straightforward to attach a debugger. You can download the Android SDK from http://developer.android.com/sdk. Inside the tools directory, you can launch the Android Emulator by setting up a new Android Virtual Device (AVD). You can then launch the browser to test the touch events. Keep in mind to use 10.0.2.2 instead of localhost to access local running servers. To debug with a real device and Google Chrome, official documentation can be followed at https://developers.google.com/chrome-developer-tools/docs/remote-debugging. When there isn't a built-in debugger or inspector, you can use generic tools such as Web Inspector Remote (Weinre). This tool is platform-agnostic, so you can use it for Mobile Safari, Android's Browser, Chrome for Android, Windows Phone, Blackberry, and so on. It can be downloaded from http://people.apache.org/~pmuellr/weinre. 
To start using Weinre, you need to include a JavaScript on the page you want to test, and then run the testing server. The URL of this JavaScript file is given to you when you run the weinre command from its downloaded directory. When you have Weinre running on your desktop, you can use its Web Inspector-like interface to do remote debugging and inspecting your touch events, you can send messages to console.log() and use a familiar interface for other debugging. Summary In this article the author has discussed about the touch events and has highlighted the limitations faced by such events when an unfavorable environment is encountered. He has also discussed the methods to test these events on non-touch enabled devices. Resources for Article: Further resources on this subject: Web Design Principles in Inkscape [Article] Top Features You Need to Know About – Responsive Web Design [Article] Sencha Touch: Layouts Revisited [Article]
Data binding from Expression Blend 4 in Silverlight 4

Packt
13 May 2010
7 min read
Using the different modes of data binding to allow persisting data

Until now, the data has flowed from the source to the target (the UI controls). However, it can also flow in the opposite direction, that is, from the target towards the source. This way, data binding can help us not only in displaying data, but also in persisting data. The direction of the flow of data in a data binding scenario is controlled by the Mode property of the Binding. In this recipe, we'll look at an example that uses all the Mode options and, in one go, we'll push the data that we enter ourselves to the source.

Getting ready

This recipe builds on the code that was created in the previous recipes, so if you're following along, you can keep using that codebase. You can also follow this recipe from the provided start solution. It can be found in the Chapter02/SilverlightBanking_Binding_Modes_Starter folder in the code bundle that is available on the Packt website. The Chapter02/SilverlightBanking_Binding_Modes_Completed folder contains the finished application of this recipe.

How to do it...

In this recipe, we'll build the "edit details" window of the Owner class. On this window, part of the data is editable, while some isn't. The editable data will be bound using a TwoWay binding, whereas the non-editable data is bound using a OneTime binding. The account's Current balance is also shown, which uses the automatic synchronization based on the INotifyPropertyChanged interface implementation. This is achieved using a OneWay binding. The following is a screenshot of the details screen:

Let's go through the required steps to work with the different binding modes:

Add a new Silverlight child window called OwnerDetailsEdit.xaml to the Silverlight project.
In the code-behind of this window, change the default constructor—so that it accepts an instance of the Owner class—as shown in the following code:

private Owner owner;
public OwnerDetailsEdit(Owner owner)
{
   InitializeComponent();
   this.owner = owner;
}

In MainPage.xaml, add a Click event on the OwnerDetailsEditButton:

<Button x:Name="OwnerDetailsEditButton" Click="OwnerDetailsEditButton_Click" >

In the event handler, add the following code, which will create a new instance of the OwnerDetailsEdit window, passing in the created Owner instance:

private void OwnerDetailsEditButton_Click(object sender, RoutedEventArgs e)
{
   OwnerDetailsEdit ownerDetailsEdit = new OwnerDetailsEdit(owner);
   ownerDetailsEdit.Show();
}

The XAML of the OwnerDetailsEdit is pretty simple. Take a look at the completed solution (Chapter02/SilverlightBanking_Binding_Modes_Completed) for a complete listing. Don't forget to set the passed Owner instance as the DataContext for the OwnerDetailsGrid. This is shown in the following code:

OwnerDetailsGrid.DataContext = owner;

For the OneWay and TwoWay bindings to work, the object to which we are binding should be an instance of a class that implements the INotifyPropertyChanged interface. In our case, we are binding an Owner instance. This instance implements the interface correctly. The following code illustrates this:

public class Owner : INotifyPropertyChanged
{
   public event PropertyChangedEventHandler PropertyChanged;
   ...
}

Some of the data may not be updated on this screen and will never change. For this type of binding, the Mode can be set to OneTime. This is the case for the OwnerId field. The users should neither be able to change their ID nor should the value of this field change in the background, thereby requiring an update in the UI. The following is the XAML code for this binding:

<TextBlock x:Name="OwnerIdValueTextBlock"
   Text="{Binding OwnerId, Mode=OneTime}" >
</TextBlock>

The CurrentBalance TextBlock at the bottom does not need to be editable by the user (allowing a user to change his or her account balance might not be beneficial for the bank), but it does need to change when the source changes. This is the automatic synchronization working for us, and it is achieved by setting the Binding to Mode=OneWay. This is shown in the following code:

<TextBlock x:Name="CurrentBalanceValueTextBlock"
   Text="{Binding CurrentBalance, Mode=OneWay}" >
</TextBlock>

The final option for the Mode property is TwoWay. TwoWay bindings allow us to persist data by pushing data from the UI control to the source object. In this case, all other fields can be updated by the user. When we enter a new value, the bound Owner instance is changed. TwoWay bindings are illustrated using the following code:

<TextBox x:Name="FirstNameValueTextBlock"
   Text="{Binding FirstName, Mode=TwoWay}" >
</TextBox>

We've applied all the different binding modes at this point. Notice that when you change the values in the pop-up window, the details on the left of the screen are also updated. This is because, in the background, all controls are bound to the same source object, as shown in the following screenshot:

How it works...

When we looked at the basics of data binding, we saw that a binding always occurs between a source and a target. The first one is normally an in-memory object, but it can also be a UI control. The second one will always be a UI control. Normally, data flows from source to target. However, using the Mode property, we have the option to control this.

A OneTime binding should be the default for data that does not change when displayed to the user. When using this mode, the data flows from source to target. The target receives the value initially during loading, and the data displayed in the target will never change. Quite logically, even if a OneTime binding is used for a TextBox, changes done to the data by the user will not flow back to the source. IDs are a good example of using OneTime bindings. Also, when building a catalogue application, OneTime bindings can be used, as we won't change the price of the items that are displayed to the user (or should we...?).

We should use a OneWay binding for binding scenarios in which we want an up-to-date display of data. Data will flow from source to target here also, but every change in the values of the source properties will propagate to a change of the displayed values. Think of a stock market application where updates are happening every second. We need to push the updates to the UI of the application.

The TwoWay bindings can help in persisting data. The data can now flow from source to target, and vice versa. Initially, the values of the source properties will be loaded in the properties of the controls. When we interact with these values (type in a textbox, drag a slider, and so on), these updates are pushed back to the source object. If needed, conversions can be done in both directions.

There is one important requirement for the OneWay and TwoWay bindings. If we want to display up-to-date values, then the INotifyPropertyChanged interface should be implemented. The OneTime and OneWay bindings would have the same effect, even if this interface is not implemented on the source.
The TwoWay bindings would still send the updated values if the interface was not implemented; however, they wouldn't notify about the changed values. It can be considered as a good practice to implement the interface, unless there is no chance that the updates of the data would be displayed somewhere in the application. The overhead created by the implementation is minimal. There's more... Another option in the binding is the UpdateSourceTrigger. It allows us to specify when a TwoWay binding will push the data to the source. By default, this is determined by the control. For a TextBox, this is done on the LostFocus event; and for most other controls, it's done on the PropertyChanged event. The value can also be set to Explicit. This means that we can manually trigger the update of the source. BindingExpression expression = this.FirstNameValueTextBlock. GetBindingExpression(TextBox.TextProperty); expression.UpdateSource(); See also Changing the values that flow between source and target can be done using converters. Data binding from Expression Blend 4 While creating data bindings is probably a task mainly reserved for the developer(s) in the team, Blend 4—the design tool for Silverlight applications—also has strong support for creating and using bindings. In this recipe, we'll build a small data-driven application that uses data binding. We won't manually create the data binding expressions; we'll use Blend 4 for this task.
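To make the INotifyPropertyChanged requirement from the How it works... section more concrete, here is a minimal, hedged C# sketch of how a bindable property on the Owner class might raise change notifications so that OneWay and TwoWay bindings stay in sync. The CurrentBalance property name comes from the recipe, but its type and the exact implementation in the book's sample solution are assumptions:

using System.ComponentModel;

public class Owner : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private double currentBalance;

    public double CurrentBalance
    {
        get { return currentBalance; }
        set
        {
            if (currentBalance != value)
            {
                currentBalance = value;
                // Notify any OneWay or TwoWay bindings that the value has changed
                OnPropertyChanged("CurrentBalance");
            }
        }
    }

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Properties bound with Mode=OneTime do not need this plumbing, which is another reason to reserve OneWay and TwoWay bindings for values that actually change.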
Customizing the Backend Editing in TYPO3 Templates

Packt
22 Nov 2010
11 min read
TYPO3 Templates Create and modify templates with TypoScript and TemplaVoila Build dynamic and powerful TYPO3 templates using TypoScript, TemplaVoila, and other core technologies. Customize dynamic menus, logos, and headers using tricks you won’t find in the official documentation. Build content elements and template extensions to overhaul and improve TYPO3’s default back-end editing experience. Follow along with the step-by-step instructions to build a site from scratch using all the lessons in the book in a practical example. Updating the rich text editor While we are making life easier for the editors, we can work on one of the areas they have to spend the most time in: the Rich Text Editor (RTE). The rich text editor allows anybody editing text or content in the TYPO3 backend to add formatting and styling such as bold or italics without using special syntax or HTML, and see the results in the text area immediately. This is also known as a WYSIWYG (What You See Is What You Get) editor and we've used it in the previous chapters to edit our own content: Out of the box, TYPO3 comes with the htmlArea RTE (extension key: rtehtmlarea), and it really is a good editor to work with. The problem is that it's configured by default to fit everybody, so it doesn't really fit anybody particularly well without a little bit of modification: it has it's own classes that we probably don't actually use, toolbars that may give too many options, and it blocks some handy tags such as the embed and object tags that we need for embedded video. Luckily, the htmlArea RTE is very configurable like everything in TYPO3 and allows us to override or add to almost all of its rules. There's an entire TSref document (http://typo3.org/documentation/documentlibrary/extension-manuals/rtehtmlarea/current), so we're not going to try to cover everything that is possible. The TSref is being updated with the new releases, so I recommend checking it out if you have any questions or want to get more information on configurations that we have only glossed over or skipped. As a final note, some of the configuration properties in this section will work on other RTE options, but many are special for the htmlArea RTE. We are going to talk about the htmlArea RTE because it is already installed by default and has a lot of powerful options, but you may choose to install a different editor as an extension in the future. TYPO3 allows us to replace the RTE in the backend with TYPO3 extensions, and we might want to replace the default htmlArea RTE if we need to support older browsers or use special features like plugins for editing images. If you are using a different editor, you may need to look at its documentation for any differences in configuration. Editing the TSconfig Configuration of the RTE is done completely in the TSconfig for the user or the page. We've used the TypoScript template for frontend configuration and layout, but the TSconfig is mainly used for configuring backend modules in TYPO3 like the RTE. The rich text editor is a backend module, so we need to use the TSconfig to configure it for our editors. The TSconfig can be modified through the Page properties view on any page in the Options tab (shown in the following screenshot): Above you can see an example of the TSconfig with some of the modifications that we are going to look at for the RTE. Of course, you can edit the TSconfig on any page in the page tree that you would like, but we are going to be working on the Root page. 
Like templates, the TSconfig is inherited from the parent pages in the page tree, so we can modify all of the instances of the RTE in the entire site by modifying the Root page. CSS properties The first thing that we want to update in our editor is the CSS. Without special configuration, the htmlArea RTE uses its own default CSS to decide how the different options such as headings, bold, italics, and so on, look in the text area preview. We can update the TSconfig, to load our own external stylesheet for the RTE that more closely resembles our working frontend site; the editors will see the same basic styling for paragraphs and headings as the visitors and have a better idea of what their content will look like in the frontend. According to the TSref on htmlArea, the following stylesheets are applied to the contents of the editing area by default in ascending order (we'll talk about each one in a moment): The htmlarea-edited-content.css file from the current backend TYPO3 skin (contains selectors for use in the editor but not intended to be applied in the frontend) A CSS file generated from the mainStyleOverride and inlineStyle assignments Any CSS file specified by contentCSS property in the page TSConfig We don't need to worry about the first file, htmlarea-edited-content.css. The TYPO3 skin that styles everything in the backend controls it. By default, there is an extension named TYPO3 skin, and we can simply override any of the styles by using the contentCSS property that we will see in a minute. The mainStyleOverride and inlineStyle properties are controlled by the htmlArea RTE, and TYPO3 generates a CSS file based on their settings in the extension. The mainStyleOverride contains all of the default text settings including sizes and font choices. If we are using our own CSS, we will want to ignore the original styles, so we are going to use the ignoreMainStyleOverride property in our TSconfig. To ignore set this property for our RTE, we can add the following line to the TSconfig on our main page: RTE.default.ignoreMainStyleOverride = 1 The inlineStyle assignments in TYPO3 create default classes for use in the RTE. You've probably already noticed some of them as options in the style drop-downs in the editor (Frame with yellow background, Justify Center, Justify Right, and so on). If we override the inlineStyle assignments, the default classes are removed from the RTE. Of course, we don't need to eliminate these classes just to use our own, but we can do this if we are trying to clean up the RTE for other editors or want to make our own alignment styles. Finally, we can use the contentCSS property to load our own external stylesheet into the RTE. To use our own stylesheet, we will use the following code in the TSconfig to use a new stylesheet named rte.css in our editor: RTE.default.contentCSS = fileadmin/templates/css/rte.css At this point, we're all pretty used to the syntax of TypoScript, but a review might be in order. In the previous line of TypoScript, we are updating the contentCSS property in the default instances of the RTE object with the new value fileadmin/templates/css/rte.css. If you understand that line, then everything else we're about to cover will be easy to pick up. As we want to use our own CSS file and override the defaults at once, this will be the TypoScript for our TSconfig: RTE.default { contentCSS = fileadmin/templates/css/rte.css ignoreMainStyleOverride = 1 } Of course, the next thing we need to do is create our new stylesheet. 
We don't want to use the complete stylesheet that we are using for the frontend because it could be thousands of lines long and will have a lot of formatting and page layout rules that we don't need for simple text editing. Instead, we can just copy the most important text and paragraph styles from our style.css file in the fileadmin/templates/css/ directory into a new file called rte.css. In addition, we're going to add two new classes to style.css and copy them over; we'll create a class named blue to set the font color to pure blue and a class named red to set the font color to pure red. This should be the content of our new file, rte.css:

p, ul, div {
  color: #666;
  font-size: 12px;
  line-height: 18px;
}
h1 {
  font-size: 24px;
  line-height: 36px;
  margin-bottom: 18px;
  font-weight: 200;
  font-variant: small-caps;
}
h2 {
  margin-bottom: 18px;
  line-height: 18px;
  font-size: 18px;
}
h3 {
  font-size: 15px;
  line-height: 18px;
}
h4, h5, h6 {
  font-size: 12px;
  line-height: 18px;
}
ul, ol {
  margin: 0px 0px 18px 18px;
}
ul {
  list-style-type: circle;
}
ol {
  list-style: decimal;
}
td {
  padding: 5px;
}
:link, :visited {
  font-weight: bold;
  text-decoration: none;
  color: #036;
}
.blue {
  color: #0000ff;
}
.red {
  color: #ff0000;
}

With our new CSS, we will notice at least a subtle difference in our RTE. In the following screenshot, we can see the original RTE on the left and the updated RTE on the right. As you can see, our text is now spaced out better and lightened to match the frontend:

As we have overridden the default classes without creating any new ones, the Block style and Text style drop-downs are both empty now. In the RTE, block styles are used for the "blocks" of our content, such as paragraphs and headings, while text styles are applied directly to words or pieces of text within a larger block. Put simply, if we wanted a whole paragraph to be blue, we would use the Block style drop-down, and we would use the Text style drop-down if we only wanted a single word to be blue. Both of these drop-downs use our CSS classes for styling, so we'll go ahead and create our new classes in the TSconfig.

Classes properties

All of the classes in our external CSS file (rte.css) are made available as soon as we declare the file with the contentCSS property. If we want to use them, we just need to associate them with a style type (paragraph, character, image, and so on) for the default RTE. If we wanted to associate the blue and red classes with text styles, for example, we would add the following to our page's TSconfig:

RTE.default.classesCharacter = blue, red

Before we assign the classes to a type, we should declare them in the TSconfig so that they show up properly in the RTE. By declaring the classes, we can set the titles we want to show in the RTE and how we want the titles to be styled in the drop-downs. Without declarations, the styles will still show up, but the titles will just be CSS class names. We want to declare them in the TSconfig with specific titles to make our editors' lives easier. We can add the following code to the TSconfig to declare the classes in our RTE and set more descriptive titles that will be shown in blue or red in the drop-down menus:

RTE.classes {
  blue {
    name = Blue Text
    value = color: blue;
  }
  red {
    name = Red Text
    value = color: red;
  }
}

We can use CSS classes for more than just the text style, of course. The table below shows all of the main ways that we can associate and use classes in the htmlArea RTE, but you can read more in the htmlArea TSref.
Go ahead and try some of them out with the example TypoScript lines that are shown.

RTE class properties

Toolbar properties

Along with what shows up inside the content of the RTE, we can modify the toolbar itself to control what editors are able to see and use. By controlling the toolbar, we can make the RTE easier for editors and make sure that the branding and styling are consistent. The table below shows some of the most useful properties we can use to alter the toolbar available to editors. We can actually go much further in editing the individual buttons of the toolbar, but this is changing enough between major releases that I recommend using the TSref as a reference for your version of the htmlArea RTE.

HTML editor properties

Finally, we can change the way that the RTE works with HTML. For example, the RTE uses paragraph tags (<p></p>) for all blocks in our text area, but we can replace this behavior with break tags (<br />) if we want. The RTE also strips certain tags, and we can change that with RTE properties as well. The following table shows the most common properties that we can use in the Page TSconfig to modify the HTML in the htmlArea RTE. Even if we don't need anything else in this section, using allowTags to allow embedded video can make our editors' lives easier if we want to embed YouTube or Vimeo players in our website. All of this information is also available in the TSref for the htmlArea if you need an updated reference.
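To tie the recipes in this section together, here is what the combined Page TSconfig from this article might look like once the external stylesheet, the class declarations, and the character-style assignment are all in place. This simply gathers the snippets shown above into a single block; it is a sketch rather than the book's exact listing, so adapt the class names and file path to your own site:

RTE.default {
  contentCSS = fileadmin/templates/css/rte.css
  ignoreMainStyleOverride = 1
  classesCharacter = blue, red
}

RTE.classes {
  blue {
    name = Blue Text
    value = color: blue;
  }
  red {
    name = Red Text
    value = color: red;
  }
}

As before, this goes into the Options tab of the Root page's properties so that it is inherited by every page in the site.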
Moodle Plugins

Packt
25 Oct 2011
8 min read
  (For more resources on Moodle, see here.) There are a number of additional plugin types; namely, Enrolments, Authentication, Message outputs, Licences, and Web services. Plugins—an overview Moodle plugins are modules that provide some specific, usually ring-fenced, functionality. You can access the plugins area via the Plugins menu that is shown in the following screenshot:   Plugins overview displays a list of all installed plugins. The information shown for each plugin includes the Plugin name, an internal Identifier, its Source (Standard or Extension), its Version (in date format), its Availability (enabled or disabled), a link to the plugin Settings, and an option to Uninstall the plugin. The table is useful to get a quick overview of what has been installed on your system and what functionality is available. Some areas contain a significant number of plugins, for instance, Authentication and Portfolios. Other categories only contain one or two plugins. The expectation is that more plugins will be developed in the future, either as part of the Moodle's core or by third-party developers. This guarantees the extensibility of Moodle without the need to change the system itself. Be careful when modifying settings in any of the plugins. Inappropriate values can cause problems throughout the system. The last plugin type in the preceding screenshot is labeled Local plugins. This is the recommended place for any local customizations. These customizations can be changes to existing functionality or the introduction of new features. For more information about local plugins, check out the readme.txt file in the local directory in your dirroot. Module plugins Moodle distinguishes between three types of module plugins that are used in the courses—the front page (which is treated as a course), the My Moodle page, and the user profile pages: Activities modules (which also covers resources) Blocks Filters   Activities modules Navigating to Plugins | Activity modules | Manage activities displays the following screen: The table displays the following information: Column Description Activity module Icon and name of the activity/resource as they appear in courses and elsewhere. Activities The number of times the activity module is used in Moodle. When you click on the number, a table, which displays the courses in which the activity module has been used, is shown. Version Version of the activity module (format YYYYMMDDHH). Hide/Show The opened eye indicates that the activity module is available for use, while the closed eye indicates that it is hidden (unavailable). Delete Performs delete action. All activities, except the Forum activity, can be deleted. Settings Link to activity module settings (not available for all items). Clicking on the Show/Hide icon toggles its state; if it is hidden it will be changed to be shown and vice versa. If an activity module is hidden, it will not appear in the Add an activity or Add a resource drop-down menu in any Moodle course. Hidden activities and resources that are already present in courses are hidden but are still in the system. It means that, once the activity module is visible again, the items will also re-appear in courses. You can delete any Moodle Activity module (except the Forum activity). If you delete an activity or resource that has been used anywhere in Moodle, all the already-created activity modules will also be deleted and so will any associated user data! Deleting an activity module cannot be undone; it has to be installed from scratch. 
It is highly recommended not to delete any activity modules unless you are 100 percent sure that you will never need them again! If you wish to prevent usage of an activity or resource type, it is better to hide it instead of deleting it. The Feedback activity has been around for some time as a third-party add-on. It is hidden by default because it has been newly introduced in the core of Moodle 2, due to its popularity. You might probably want to make this available for your teachers. The settings are different for each activity module. For example, the settings for the Assignment module only contain three parameters, whereas the settings for the Quiz module allow the modification of a wide range of parameters. The settings for Moodle Activity modules are not covered here, as they are mostly self-explanatory and also dealt with in great detail in the Moodle Docs of the respective modules. It is further expected that the activity modules will undergo a major overhaul in the 2.x versions to come, making any current explanations obsolete. Configuration of blocks Navigating to Plugins | Blocks | Manage blocks displays a table, as shown in the screenshot that follows. It displays the same type of information as for Activity modules. Some blocks allow multiple instances, that is, the block can be used more than once on a page. For example, you can only have one calendar, whereas you can have as many Remote RSS Feeds as you wish. You cannot control this behavior, as it is controlled by the block itself. You can delete any Moodle block. If you delete a block that is used anywhere in Moodle, all the already-created content will also be deleted. Deleting a block cannot be undone; it has to be installed from scratch. Do not delete or hide the Settings block, as you will not be able to access any system settings anymore! Also, do not delete or hide the Navigation block, as users will not be able to access a variety of pages. Most blocks are shown by default (except the Feedback and Global search blocks). Some blocks require additional settings to be set elsewhere for the block to function. For example, RSS feeds and tags have to be enabled in Advanced features, the Feedback activity module has to be shown, or global search has to be enabled (via Development | Experimental | Experimental settings).   The parameters of all standard Moodle blocks are explained in the respective Moodle Docs pages. Configuration of filters Filters scan any text that has been entered via the Moodle HTML editor and automatically transform it into different, often more complex, forms. For example, entries or concepts in glossaries are automatically hyperlinked in text, URLs pointing to MP3 or other audio files become embedded, flash-based controls (that offer pause and rewind functionality) appear, uploaded videos are given play controls, and so on. Moodle ships with 12 filters, which are accessed via Plugins | Filters | Manage filters:   By default, all filters are disabled. You can enable them by changing the Active? status to On or Off, but available. If the status is set to On, it means that the filter is activated throughout the system, but can be de-activated locally. If the status is set to Off, but available, it means that the filter is not activated, but can be enabled locally. In the preceding screenshot, the Multimedia plugins and Display emoticons as images (smileys) filters have been turned On and will be used throughout the system, as they are very popular. 
The TeX notation and Glossary auto-linking filters are available, but have to be activated locally. The former is only of use to users who deal with mathematical or scientific notation and will trigger the Insert equation button in the Moodle editor. The Glossary auto-linking filter might be used in some courses; it can then be switched off temporarily at activity module level when learners have to appear for an exam.

Additionally, you can change the order in which the filters are applied to text, using the up and down arrows. The filtering mechanism operates on a first-come, first-served basis, that is, if a filter detects a text element that has to be transformed, it will do so before the next filter is applied. Each filter can be configured to be applied to Content and headings or to Content only; in the latter case, filters are ignored in headings such as the names of activity modules.

The settings of some filters are described in detail in the Moodle Docs. As with activities and blocks, it is recommended to hide filters if you don't require them on your site.

In addition to the filter-specific settings, Moodle provides a number of settings that are shared among all filters. These settings are accessed via the Filters | Common filters menu and are shown in the following screenshot:

- Text cache lifetime: The time for which Moodle keeps text to be filtered in a dedicated cache.
- Filter uploaded files: By default, only text entered via the Moodle editor is filtered. If you wish to include uploaded files, you can choose between the HTML files only and All files options.
- Filter match once per page: Enable this setting if the filter should stop analyzing text after it finds a match, that is, only the first occurrence will be transformed.
- Filter match once per text: Enable this setting if the filter should only generate a single link for the first matching text instance found in each item of text on a page. This setting is ignored if the Filter match once per page parameter is enabled.
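Returning to the Local plugins mentioned at the beginning of this article, the following is a minimal sketch of the version.php file such a plugin could contain. The plugin name local_greetings and the version numbers are purely illustrative assumptions; the structure, however, follows the usual Moodle 2 conventions for plugins placed under dirroot/local.

<?php
// local/greetings/version.php - hypothetical local plugin called "local_greetings".
// A local plugin lives in its own folder below dirroot/local and declares, at the very
// least, its component name and version so that it shows up in the Plugins overview.

defined('MOODLE_INTERNAL') || die();

$plugin->component = 'local_greetings'; // must match the folder name under /local
$plugin->version   = 2011050400;        // version of this plugin (date-based format)
$plugin->requires  = 2010112400;        // minimum Moodle version this plugin needs

Once the folder is in place, visiting the Notifications page as an administrator triggers the installation, after which the plugin is listed in the Plugins overview with its Source shown as Extension.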
Enhancing Page Elements with Moodle and JavaScript

Packt
04 May 2011
7 min read
  Moodle JavaScript Cookbook Over 50 recipes for making your Moodle system more dynamic and responsive with JavaScript Introduction The Yahoo! UI Library (YUI) offers a range of widgets and utilities to bring modern enhancements to your traditional page elements. In this chapter, we will look at a selection of these enhancements, including features often seen on modern interactive interfaces, such as: Auto-complete: This feature suggests possible values to the user by searching against a list of suggestions as they start typing. We will look at two different ways of using this. First, by providing suggestions as the user types into a text box, and second, by providing a list of possible values for them to select from a combo list box. Auto-update: This technique will allow us to update an area of the page based on a timed interval, which has many uses as we'll see. In this example, we will look at how to create a clock by updating the time on the page at one second intervals. This technique could also be used, for example, to update a news feed every minute, or update stock information every hour. Resizable elements: A simple enhancement which allows users to dynamically resize elements to suit their needs. This could be applied to elements containing a significant amount of text which would allow the user to control the width of the text to suit their personal preference for ideal readability. Custom tooltips: Tooltips appear when an element is hovered, displaying the associated title or alternative text (that is, a description of an image or the title of a hyperlink). This enhancement allows us to have more control over the look of the tooltips making them more visually appealing and more consistent with the overall look and feel of our page. Custom buttons: This enhancement allows us to completely restyle button elements, modifying their look and feel to be consistent with the rest of our page. This also allows us to have a consistent button style across different platforms and web browsers. We will once again be using mostly YUI Version 2 widgets and utilities within the YUI Version 3 framework. At the time of writing, few YUI2 widgets have been ported to YUI3. This method allows us the convenience of the improvements afforded by the YUI3 environment combined with the features of the widgets from YUI2. Adding a text box with auto-complete A common feature of many web forms is the ability to provide suggestions as the user types, based on a list of possible values. In this example, we enhance a standard HTML input text element with this feature. This technique is useful in situations where we simply wish to offer suggestions to the user that they may or may not choose to select, that is, suggesting existing tags to be applied to a new blog post. They can either select a suggested value that matches one they have started typing, or simply continue typing a new, unused tag. How to do it... First, set up a basic Moodle page in the usual way. In this example, we create autocomplete.php with the following content: <?php require_once(dirname(__FILE__) . 
'/../config.php'); $PAGE->set_context(get_context_instance(CONTEXT_SYSTEM)); $PAGE->set_url('/cook/autocomplete.php'); $PAGE->requires->js('/cook/autocomplete.js', true); ?> <?php echo $OUTPUT->header(); ?> <div style="width:15em;height:10em;"> <input id="txtInput" type="text"> <div id="txtInput_container"></div> </div> <?php echo $OUTPUT->footer(); ?> Secondly, we need to create our associated JavaScript file, autocomplete.js, with the following code: YUI().use("yui2-autocomplete", "yui2-datasource", function(Y) { var YAHOO = Y.YUI2; var dataSource = new YAHOO.util.LocalDataSource ( ["Alpha","Bravo","Beta","Gamma","Golf"] ); var autoCompleteText = new YAHOO.widget.AutoComplete("txtInput", "txtInput_container", dataSource); }); How it works... Our HTML consists of three elements, a parent div element, and the other two elements contained within it: an input text box, and a placeholder div element to use to display the auto-complete suggestions. Our JavaScript file then defines a DataSource to be used to provide suggestions, and then creates a new AutoComplete widget based on the HTML elements we have already defined. In this example, we used a LocalDataSource for simplicity, but this may be substituted for any valid DataSource object. Once we have a DataSource object available, we instantiate a new YUI2 AutoComplete widget, passing the following arguments: The name of the HTML input text element for which to provide auto-complete suggestions The name of the container element to use to display suggestions A valid data source object to use to find suggestions Now when the user starts typing into the text box, any matching auto-complete suggestions are displayed and can be selected, as shown in the following screenshot: Adding a combo box with auto-complete In this example, we will use auto-complete in conjunction with a combo box (drop-down list). This differs from the previous example in one significant way—it includes a drop-down arrow button that allows the user to see the complete list of values without typing first. This is useful in situations where the user may be unsure of a suitable value. In this case, they can click the drop-down button to see suggestions without having to start guessing as they type. Additionally, this method also supports the same match-as-you-type style auto-complete as that of the previous recipe. How to do it... 
Open the autocomplete.php file from the previous recipe for editing, and add the following HTML below the text box based auto-complete control: <div style="width:15em;height:10em;"> <input id="txtCombo" type="text" style="vertical-align:top; position:static;width:11em;"><span id="toggle"></span> <div id="txtCombo_container"></div> </div> Next, open the JavaScript file autocomplete.js, and modify it to match the following code: YUI().use("yui2-autocomplete", "yui2-datasource", "yui2-element", "yui2-button", "yui2-yahoo-dom-event", function(Y) { var YAHOO = Y.YUI2; var dataSource = new YAHOO.util.LocalDataSource ( ["Alpha","Bravo","Beta","Gamma","Golf"] ); var autoCompleteText = new YAHOO.widget.AutoComplete("txtInput", "txtInput_container", dataSource); var autoCompleteCombo = new YAHOO.widget.AutoComplete("txtCombo", "txtCombo_container", dataSource, {minQueryLength: 0, queryDelay: 0}); var toggler = YAHOO.util.Dom.get("toggle"); var tButton = new YAHOO.widget.Button({container:toggler, label:"↓"}); var toggle = function(e) { if(autoCompleteCombo.isContainerOpen()) { autoCompleteCombo.collapseContainer(); } else { autoCompleteCombo.getInputEl().focus(); setTimeout(function() { autoCompleteCombo.sendQuery(""); },0); } } tButton.on("click", toggle); }); You will notice that the HTML we added in this recipe is very similar to the last, with the exception that we included a span element just after the text box. This is used as a placeholder to insert a YUI2 button control. This recipe is somewhat more complicated than the previous one, so we included some extra YUI2 modules: element, button, and yahoo-dom-event. We define the AutoComplete widget in the same way as before, except we need to add two configuration options in an object passed as the fourth argument. Next, we retrieve a reference to the button placeholder, and instantiate a new Button widget to use as the combo box 'drop-down' button. Finally, we define a click handler for the button, and register it. We now see the list of suggestions, which can be displayed by clicking on the drop-down button, as shown in the following screenshot: How it works... The user can type into the box to receive auto-complete suggestions as before, but may now use the combo box style drop-down button instead to see a list of suggestions. When the user clicks the drop-down button, the click event is fired. This click event does the following: Hides the drop-down menu if it is displayed, which allows the user to toggle this list display on/off. If it is not displayed, it sets the focus to the text box (to allow the user to continue typing), and execute a blank query on the auto-complete widget, which will display the list of suggestions. Note that we explicitly enabled this blank query earlier when we defined the AutoComplete widget with the "minQueryLength: 0" option, which allowed queries of length 0 and above.  
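A closing note on data sources: both recipes above use a LocalDataSource with a hard-coded array of suggestions. As mentioned in the first recipe, any valid DataSource object can be substituted, so the suggestions could just as well come from the server. The following is a minimal sketch of a server-side script that returns matching suggestions as JSON; the file name suggestions.php, the query parameter, and the fixed list of values are assumptions made for illustration only, and a YUI remote data source (for example, an XHRDataSource) would still have to be configured on the JavaScript side to consume it.

<?php
// suggestions.php - hypothetical endpoint returning auto-complete suggestions as JSON.
require_once(dirname(__FILE__) . '/../config.php');

require_login();                                  // only serve suggestions to logged-in users
$query = optional_param('query', '', PARAM_TEXT); // the text typed so far

// In a real script these values would come from a database lookup;
// here they are filtered from a fixed list purely for illustration.
$all     = array('Alpha', 'Bravo', 'Beta', 'Gamma', 'Golf');
$matches = array();
foreach ($all as $suggestion) {
    if ($query === '' || stripos($suggestion, $query) === 0) {
        $matches[] = $suggestion;
    }
}

header('Content-Type: application/json');
echo json_encode(array('results' => $matches));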
Getting started with Modernizr using PHP IDE

Packt
30 Apr 2013
5 min read
From the Modernizr website:

Modernizr is a small JavaScript library that detects the availability of native implementations for next-generation web technologies, i.e. features that stem from the HTML5 and CSS3 specifications. Many of these features are already implemented in at least one major browser (most of them in two or more), and what Modernizr does is, very simply, tell you whether the current browser has this feature natively implemented or not.

Basically, with this library we can see whether the user's browser supports certain features you wish to use on your site. This is important to do, as unfortunately not every browser is created the same. Each one has its own implementation of the HTML5 standard, so some features may be available on Google Chrome but not on Internet Explorer. Using Modernizr is a better alternative to the standard, but unreliable, practice of user agent (UA) string checking. Let's begin.

Getting ready

Create a new Web Project in Aptana Studio. Once it is set up, add a new folder to the project named js. The next thing we need to do is download the development version of Modernizr from the Modernizr download page (http://modernizr.com/download/). You will see options to build your own package; the development version will do until you are ready for production use. As of this writing, the latest version is 2.6.2, and that is the version we will use. Place the downloaded file into the js folder.

How to do it...

Follow these steps:

For this exercise, we will simply do a browser test to see if your browser currently supports the HTML5 Canvas element. Create a JavaScript file named canvas.js and add the following code:

if (Modernizr.canvas) {
  var c=document.getElementById("canvastest");
  var ctx=c.getContext("2d");
  // Create gradient
  var grd=ctx.createRadialGradient(75,50,5,90,60,100);
  grd.addColorStop(0,"black");
  grd.addColorStop(1,"white");
  // Fill with gradient
  ctx.fillStyle=grd;
  ctx.fillRect(10,10,150,80);
  alert("We can use the Canvas element!");
} else {
  alert("Canvas Element Not Supported");
}

Now add the following to index.html:

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Canvas Support Test</title>
<script src="js/modernizr-latest.js" type="text/javascript"></script>
</head>
<body>
<canvas id="canvastest" width="200" height="100" style="border:1px solid #000000">Your browser does not support the HTML5 canvas tag.</canvas>
<script src="js/canvas.js"></script>
</body>
</html>

Let's preview the code and see what we got. The following screenshot is what you should see:

How it works...

What did we just do? Well, let's break it down:

<script src="js/modernizr-latest.js" type="text/javascript"></script>

Here, we are calling in the Modernizr library that we downloaded previously. Once you do that, Modernizr does some things to your page.
It will redo your opening <html> tag to something like the following (from Google Chrome):

<html class=" js flexbox flexboxlegacy canvas canvastext webgl notouch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths">

These are all the features your browser supports that Modernizr was able to detect. Next up we have our <canvas> element:

<canvas id="canvastest" width="200" height="100" style="border:1px solid #000000">Your browser does not support the HTML5 canvas tag.</canvas>

Here, we are just forming a basic canvas that is 200 x 100 with a black border around it. Now for the good stuff in our canvas.js file, follow this code snippet:

<script>
if (Modernizr.canvas) {
  alert("We can use the Canvas element!");
  var c=document.getElementById("canvastest");
  var ctx=c.getContext("2d");
  // Create gradient
  var grd=ctx.createRadialGradient(75,50,5,90,60,100);
  grd.addColorStop(0,"black");
  grd.addColorStop(1,"white");
  // Fill with gradient
  ctx.fillStyle=grd;
  ctx.fillRect(10,10,150,80);
} else {
  alert("Canvas Element Not Supported");
}
</script>

In the first part of this snippet, we use an if statement to see if the browser supports the Canvas element. If it does support canvas, we display a JavaScript alert and then fill our canvas element with a black gradient. After that, we have our else statement that will alert the user that canvas is not supported in their browser. They will also see the Your browser does not support the HTML5 canvas tag message. That wasn't so bad, was it?

There's more...

I highly recommend reading over the documentation on the Modernizr website so that you can see all the feature tests you can do with this library. We will do a few more practice examples with Modernizr, and of course, it will be a big component of our RESS project later on in the book (a brief server-side sketch of this idea appears at the end of this article).

Keeping it efficient

For a production environment, I highly recommend taking the build-a-package approach and only downloading a script that contains the tests you will actually use. This way your script is as small as possible. As of right now, the file we used has every test in it; some you may never use. So, to be as efficient as possible (and we want all the efficiency we can get in mobile development), build your file with only the tests you'll use or may use.

Summary

This article provided guidelines on creating a new Web Project in Aptana Studio, adding a new folder named js to the project, downloading the development version of Modernizr from the Modernizr download page, and placing the downloaded file into the js folder.
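The RESS idea mentioned above relies on the server knowing what the browser can do, but Modernizr runs purely on the client. One common workaround, sketched below under stated assumptions, is to have a small piece of JavaScript copy a Modernizr test result into a cookie so that the server can adapt its markup on subsequent requests. The cookie name hascanvas, the chart markup, and the image path are all illustrative assumptions, not part of Modernizr or of this recipe.

<?php
// A minimal RESS-style sketch (hypothetical). It assumes client-side JavaScript has
// already stored a Modernizr result in a cookie, for example:
//   document.cookie = "hascanvas=" + (Modernizr.canvas ? "1" : "0") + "; path=/";
// On the next request, the server can then tailor its output to the detected capability.

$hasCanvas = isset($_COOKIE['hascanvas']) && $_COOKIE['hascanvas'] === '1';

if ($hasCanvas) {
    // Browsers with canvas support get the interactive element.
    echo '<canvas id="chart" width="200" height="100"></canvas>';
} else {
    // Everyone else receives a static image fallback.
    echo '<img src="img/chart.png" alt="Chart" width="200" height="100">';
}

Note that the very first request arrives before the cookie exists, so server-side adaptation of this kind always needs a sensible default for that case.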
Setting Up OpenVPN with X509 Certificates

Packt
23 Oct 2009
6 min read
Creating Certificates

One method could be setting up tunnels using pre-shared keys with static encryption; however, X509 certificates provide a much better level of security than pre-shared keys do. There is, however, slightly more work to be done to set up and connect two systems with certificate-based authentication. The following five steps have to be accomplished:

1. Create a CA certificate for your CA, with which we will sign and revoke client certificates.
2. Create a key and a certificate request for the clients.
3. Sign the request using the CA certificate, thereby making it valid.
4. Provide keys and certificates to the VPN partners.
5. Change the OpenVPN configuration so that OpenVPN will use the certificates and keys, and restart OpenVPN.

There are a number of ways to accomplish these steps. easy-rsa is a command-line tool that comes with OpenVPN, and exists both on Linux and Windows. On Windows systems you could create certificates by clicking on the batch files in the Windows Explorer, but starting the batch files at the command-line prompt is the better solution. On Linux you type the full path of the scripts, which share the same name as on Windows, simply without the .bat extension.

Certificate Generation on Windows XP with easy-rsa

Open the Windows Explorer and change to the directory C:\Program Files\OpenVPN\easy-rsa. The Windows version of easy-rsa consists of thirteen files. On Linux systems you will have to check your package management tools to find the right path to the easy-rsa scripts; on Debian Linux you will find them in /usr/share/doc/openvpn/examples/easy-rsa/.

You will find eight batch files, four configuration files, and a README (which is actually not really helpful). However, we must now create a directory called keys, copy the files serial.start and index.txt.start into it, and rename them to serial and index.txt respectively. The keys and certificates created by easy-rsa will be stored in this directory, and these two files are used as a database for certificate generation.

Now we let easy-rsa prepare the standard configuration for our certificates. Double-click on the file C:\Program Files\OpenVPN\easy-rsa\init-config.bat or start this batch file at a command-line prompt. It simply copies the template files vars.bat.sample to vars.bat and openssl.cnf.sample to openssl.cnf. While openssl.cnf is a standard OpenSSL configuration file, vars.bat contains variables used by OpenVPN's scripts to create our certificates, and needs some editing in the next step.

Setting Variables—Editing vars.bat

Right-click on the vars.bat file's icon and select Edit from the menu. In this file, several parameters are set that are used by the certificate generation scripts later. The following table gives a quick overview of the entries in the file:

- set HOME=%ProgramFiles%\OpenVPN\easy-rsa: The path to the directory where easy-rsa resides.
- set KEY_CONFIG=openssl.cnf: The name of the OpenSSL configuration file.
- set KEY_DIR=keys: The path to the directory where the newly generated keys are stored, relative to $HOME as set above.
- set KEY_SIZE=1024: The length of the SSL key. This parameter should be increased to 2048.
- set KEY_COUNTRY=US, set KEY_PROVINCE=CA, set KEY_CITY=SanFrancisco, set KEY_ORG=FortFunston, set KEY_EMAIL=mail@host.domain: These five values are used as suggestions whenever you start a script and generate certificates with the easy-rsa software.
Only the entry KEY_SIZE must be changed (unless you don't care much about security), but setting the last five entries to your needs might be very helpful later. Every time we generate a certificate, easy-rsa will ask (among other things) for these five parameters and suggest defaults that can be accepted simply by pressing Enter. The better the default values set here in vars.bat fit our needs, the less typing work we will have later. I leave it up to you to change these settings here.

The next step is easy: run vars.bat to set the variables. Even though you could simply double-click on its Explorer icon, I recommend that you run it in a shell window. Select the entry Run from Windows' main menu, type cmd.exe, and change to the easy-rsa directory by typing cd "C:\Program Files\OpenVPN\easy-rsa" and pressing Enter. By doing so, we will proceed in exactly the same way as we would on a Linux system (except for the .bat extensions).

Creating the Diffie-Hellman Key

Now it is time to create the keys that will be used for encryption, authentication, and key exchange. For the latter, a Diffie-Hellman key is used by OpenVPN. The Diffie-Hellman key agreement protocol enables two communication partners to exchange a secret key safely. No prior secrets or safe lines are needed; a special mathematical algorithm guarantees that only the two partners know the shared key that is used. If you would like to know exactly what this algebra is about, have a look at this website: http://www.rsasecurity.com/rsalabs/node.asp?id=2248.

easy-rsa provides a script (batch) file that generates the key for you: C:\Program Files\OpenVPN\easy-rsa\build-dh.bat. Start it by typing build-dh.bat; a Diffie-Hellman key is then generated. The batch file tells you, "This is going to take a long time", which is only true if your system is really old or if you are not patient enough. That said, even on a modern system, a few minutes can feel like a horribly long time!

Building the Certificate Authority

OK, now it's time to generate our first CA. Enter build-ca.bat. This script generates a self-signed certificate for a CA. Such a certificate can be used to create and sign client certificates and thereby authenticate other machines. Depending on the data you entered in your vars.bat file, build-ca.bat will suggest different default parameters during the process of generating this certificate. Five of the last seven lines are taken from the variables set in vars.bat. If you edited these parameters, simply pressing Enter at each prompt will do, and the certificate for the CA is generated in the keys directory.
WPF 4.5 Application and Windows

Packt
24 Sep 2012
14 min read
Creating a window Windows are the typical top level controls in WPF. By default, a MainWindow class is created by the application wizard and automatically shown upon running the application. In this recipe, we'll take a look at creating and showing other windows that may be required during the lifetime of an application. Getting ready Make sure Visual Studio is up and running. How to do it... We'll create a new class derived from Window and show it when a button is clicked: Create a new WPF application named CH05.NewWindows. Right-click on the project node in Solution explorer, and select Add | Window…: In the resulting dialog, type OtherWindow in the Name textbox and click on Add. A file named OtherWindow.xaml should open in the editor. Add a TextBlock to the existing Grid, as follows: <TextBlock Text="This is the other window" FontSize="20" VerticalAlignment="Center" HorizontalAlignment="Center" /> Open MainWindow.xaml. Add a Button to the Grid with a Click event handler: <Button Content="Open Other Window" FontSize="30" Click="OnOpenOtherWindow" /> In the Click event handler, add the following code: void OnOpenOtherWindow(object sender, RoutedEventArgs e) { var other = new OtherWindow(); other.Show(); } Run the application, and click the button. The other window should appear and live happily alongside the main window: How it works... A Window is technically a ContentControl, so can contain anything. It's made visible using the Show method. This keeps the window open as long as it's not explicitly closed using the classic close button, or by calling the Close method. The Show method opens the window as modeless—meaning the user can return to the previous window without restriction. We can click the button more than once, and consequently more Window instances would show up. There's more... The first window shown can be configured using the Application.StartupUri property, typically set in App.xaml. It can be changed to any other window. For example, to show the OtherWindow from the previous section as the first window, open App.xaml and change the StartupUri property to OtherWindow.xaml: StartupUri="OtherWindow.xaml" Selecting the startup window dynamically Sometimes the first window is not known in advance, perhaps depending on some state or setting. In this case, the StartupUri property is not helpful. We can safely delete it, and provide the initial window (or even windows) by overriding the Application.OnStartup method as follows (you'll need to add a reference to the System.Configuration assembly for the following to compile): protected override void OnStartup(StartupEventArgs e) {    Window mainWindow = null;    // check some state or setting as appropriate          if(ConfigurationManager.AppSettings["AdvancedMode"] == "1")       mainWindow = new OtherWindow();    else       mainWindow = new MainWindow();    mainWindow.Show(); } This allows complete flexibility in determining what window or windows should appear at application startup. Accessing command line arguments The WPF application created by the New Project wizard does not expose the ubiquitous Main method. WPF provides this for us – it instantiates the Application object and eventually loads the main window pointed to by the StartupUri property. The Main method, however, is not just a starting point for managed code, but also provides an array of strings as the command line arguments passed to the executable (if any). As Main is now beyond our control, how do we get the command line arguments? 
Fortunately, the same OnStartup method provides a StartupEventArgs object, in which the Args property is mirrored from Main. The downloadable source for this chapter contains the project CH05.CommandLineArgs, which shows an example of its usage. Here's the OnStartup override:

protected override void OnStartup(StartupEventArgs e) {
   string text = "Hello, default!";
   if(e.Args.Length > 0)
      text = e.Args[0];
   var win = new MainWindow(text);
   win.Show();
}

The MainWindow instance constructor has been modified to accept a string that is later used by the window. If a command line argument is supplied, it is used.

Creating a dialog box

A dialog box is a Window that is typically used to get some data from the user, before some operation can proceed. This is sometimes referred to as a modal window (as opposed to modeless, or non-modal). In this recipe, we'll take a look at how to create and manage such a dialog box.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a dialog box that's invoked from the main window to request some information from the user:

Create a new WPF application named CH05.Dialogs. Add a new Window named DetailsDialog.xaml (a DetailsDialog class is created). Visual Studio opens DetailsDialog.xaml. Set some Window properties: FontSize to 16, ResizeMode to NoResize, SizeToContent to Height, and make sure the Width is set to 300:

ResizeMode="NoResize" SizeToContent="Height" Width="300" FontSize="16"

Add four rows and two columns to the existing Grid, and add some controls for a simple data entry dialog as follows:

<Grid.RowDefinitions>
  <RowDefinition Height="Auto" />
  <RowDefinition Height="Auto"/>
  <RowDefinition Height="Auto"/>
  <RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
  <ColumnDefinition Width="Auto" />
  <ColumnDefinition />
</Grid.ColumnDefinitions>
<TextBlock Text="Please enter details:" Grid.ColumnSpan="2" Margin="4,4,4,20" HorizontalAlignment="Center"/>
<TextBlock Text="Name:" Grid.Row="1" Margin="4"/>
<TextBox Grid.Column="1" Grid.Row="1" Margin="4" x:Name="_name"/>
<TextBlock Text="City:" Grid.Row="2" Margin="4"/>
<TextBox Grid.Column="1" Grid.Row="2" Margin="4" x:Name="_city"/>
<StackPanel Grid.Row="3" Orientation="Horizontal" Margin="4,20,4,4" Grid.ColumnSpan="2" HorizontalAlignment="Center">
  <Button Content="OK" Margin="4" />
  <Button Content="Cancel" Margin="4" />
</StackPanel>

This is how it should look in the designer:

The dialog should expose two properties for the name and city the user has typed in. Open DetailsDialog.xaml.cs. Add two simple properties:

public string FullName { get; private set; }
public string City { get; private set; }

We need to show the dialog from somewhere in the main window. Open MainWindow.xaml, and add the following markup to the existing Grid:

<Grid.RowDefinitions>
  <RowDefinition Height="Auto" />
  <RowDefinition />
</Grid.RowDefinitions>
<Button Content="Enter Data" Click="OnEnterData" Margin="4" FontSize="16"/>
<TextBlock FontSize="24" x:Name="_text" Grid.Row="1" VerticalAlignment="Center" HorizontalAlignment="Center"/>

In the OnEnterData handler, add the following:

private void OnEnterData(object sender, RoutedEventArgs e) {
   var dlg = new DetailsDialog();
   if(dlg.ShowDialog() == true) {
      _text.Text = string.Format(
         "Hi, {0}! I see you live in {1}.",
         dlg.FullName, dlg.City);
   }
}

Run the application. Click the button and watch the dialog appear. The buttons don't work yet, so your only choice is to close the dialog using the regular close button.
Clearly, the return value from ShowDialog is not true in this case. When the OK button is clicked, the properties should be set accordingly. Add a Click event handler to the OK button, with the following code:

private void OnOK(object sender, RoutedEventArgs e) {
   FullName = _name.Text;
   City = _city.Text;
   DialogResult = true;
   Close();
}

The Close method dismisses the dialog, returning control to the caller. The DialogResult property indicates the returned value from the call to ShowDialog when the dialog is closed. Add a Click event handler for the Cancel button with the following code:

private void OnCancel(object sender, RoutedEventArgs e) {
   DialogResult = false;
   Close();
}

Run the application and click the button. Enter some data and click on OK. You will see the following window:

How it works...

A dialog box in WPF is nothing more than a regular window shown using ShowDialog instead of Show. This forces the user to dismiss the window before she can return to the invoking window. ShowDialog returns a Nullable<bool> (can be written as bool? in C#), meaning it can have three values: true, false, and null. The meaning of the return value is mostly up to the application, but typically true indicates that the user dismissed the dialog with the intention of making something happen (usually by clicking some OK or other confirmation button), and false means the user changed her mind and would like to abort. The null value can be used as a third indicator for some other application-defined condition.

The DialogResult property indicates the value returned from ShowDialog because there is no other way to convey the return value from the dialog invocation directly. That's why the OK button handler sets it to true and the Cancel button handler sets it to false (this also happens when the regular close button is clicked, or Alt + F4 is pressed).

Most dialog boxes are not resizable. This is indicated with the ResizeMode property of the Window set to NoResize. However, because of WPF's flexible layout, it certainly is relatively easy to keep a dialog resizable (and still manageable) where it makes sense (such as when entering a potentially large amount of text in a TextBox – it would make sense if the TextBox could grow if the dialog is enlarged).

There's more...

Most dialogs can be dismissed by pressing Enter (indicating the data should be used) or pressing Esc (indicating no action should take place). This is possible to do by setting the OK button's IsDefault property to true and the Cancel button's IsCancel property to true. The default button is typically drawn with a heavier border to indicate that it's the default button, although this eventually depends on the button's control template. If these settings are specified, the handler for the Cancel button is not needed. Clicking Cancel or pressing Esc automatically closes the dialog (and sets DialogResult to false). The OK button handler is still needed as usual, but it may be invoked by pressing Enter, no matter what control has the keyboard focus within the Window. The CH05.DefaultButtons project from the downloadable source for this chapter demonstrates this in action.

Modeless dialogs

A dialog can be shown as modeless, meaning it does not force the user to dismiss it before returning to other windows in the application. This is done with the usual Show method call – just like any Window. The term dialog in this case usually denotes some information expected from the user that affects other windows, sometimes with the help of another button labelled "Apply".
The problem here is mostly logical—how to convey the information change. The best way would be using data binding, rather than manually modifying various objects. We'll take an extensive look at data binding in the next chapter.

Using the common dialog boxes

Windows has its own built-in dialog boxes for common operations, such as opening files, saving a file, and printing. Using these dialogs is very intuitive from the user's perspective, because she has probably used those dialogs before in other applications. WPF wraps some of these (native) dialogs. In this recipe, we'll see how to use some of the common dialogs.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a simple image viewer that uses the Open common dialog box to allow the user to select an image file to view:

Create a new WPF Application named CH05.CommonDialogs. Open MainWindow.xaml. Add the following markup to the existing Grid:

<Grid.RowDefinitions>
  <RowDefinition Height="Auto" />
  <RowDefinition />
</Grid.RowDefinitions>
<Button Content="Open Image" FontSize="20" Click="OnOpenImage" HorizontalAlignment="Center" Margin="4" />
<Image Grid.Row="1" x:Name="_img" Stretch="Uniform" />

Add a Click event handler for the button. In the handler, we'll first create an OpenFileDialog instance and initialize it (add a using directive for the Microsoft.Win32 namespace):

void OnOpenImage(object sender, RoutedEventArgs e) {
   var dlg = new OpenFileDialog {
      Filter = "Image files|*.png;*.jpg;*.gif;*.bmp",
      Title = "Select image to open",
      InitialDirectory = Environment.GetFolderPath(
         Environment.SpecialFolder.MyPictures)
   };

Now we need to show the dialog and use the selected file (if any):

   if(dlg.ShowDialog() == true) {
      try {
         var bmp = new BitmapImage(new Uri(dlg.FileName));
         _img.Source = bmp;
      }
      catch(Exception ex) {
         MessageBox.Show(ex.Message, "Open Image");
      }
   }

Run the application. Click the button, navigate to an image file, and select it. You should see something like the following:

How it works...

The OpenFileDialog class wraps the Win32 open/save file dialog, providing easy enough access to its capabilities. It's just a matter of instantiating the object, setting some properties, such as the file types (Filter property), and then calling ShowDialog. This call, in turn, returns true if the user selected a file and false otherwise (null is never returned, although the return type is still defined as Nullable<bool> for consistency).

The look of the Open file dialog box may be different in various Windows versions. This is mostly unimportant unless some automated UI testing is done. In this case, the way the dialog looks or operates may have to be taken into consideration when creating the tests.

The filename itself is returned in the FileName property (full path). Multiple selections are possible by setting the Multiselect property to true (in this case the FileNames property returns the selected files).

There's more...

WPF similarly wraps the Save As common dialog with the SaveFileDialog class (in the Microsoft.Win32 namespace as well). Its use is very similar to OpenFileDialog (in fact, both inherit from the abstract FileDialog class).

What about folder selection (instead of files)? The WPF OpenFileDialog does not support that. One solution is to use Windows Forms' FolderBrowserDialog class. Another good solution is to use the Windows API Code Pack described shortly.

Another common dialog box WPF wraps is PrintDialog (in System.Windows.Controls).
This shows the familiar print dialog, with options to select a printer, orientation, and so on. The most straightforward way to print would be calling PrintVisual (after calling ShowDialog), providing anything that derives from the Visual abstract class (which include all elements). General printing is a complex topic and is beyond the scope of this book. What about colors and fonts? Windows also provides common dialogs for selecting colors and fonts. However, these are not wrapped by WPF. There are several alternatives: Use the equivalent Windows Forms classes (FontDialog and ColorDialog, both from System.Windows.Forms) Wrap the native dialogs yourself Look for alternatives on the Web The first option is possible, but has two drawbacks: first, it requires adding reference to the System.Windows.Forms assembly; this adds a dependency at compile time, and increases memory consumption at run time, for very little gain. The second drawback has to do with the natural mismatch between Windows Forms and WPF. For example, ColorDialog returns a color as a System.Drawing.Color, but WPF uses System.Windows.Media.Color. This requires mapping a GDI+ color (WinForms) to WPF's color, which is cumbersome at best. The second option of doing your own wrapping is a non-trivial undertaking and requires good interop knowledge. The other downside is that the default color and font common dialogs are pretty old (especially the color dialog), so there's much room for improvement. The third option is probably the best one. There are more than a few good candidates for color and font pickers. For a color dialog, for example, you can use the ColorPicker or ColorCanvas provided with the Extended WPF toolkit library on CodePlex (http://wpftoolkit.codeplex.com/). Here's how these may look (ColorCanvas on the left-hand side, and one of the possible views of ColorPicker on the right-hand side): The Windows API Code Pack The Windows API Code Pack is a Microsoft project on CodePlex (http://archive.msdn.microsoft.com/WindowsAPICodePack) that provides many .NET wrappers to native Windows features, in various areas, such as shell, networking, Windows 7 features (this is less important now as WPF 4 added first class support for Windows 7), power management, and DirectX. One of the Shell features in the library is a wrapper for the Open dialog box that allows selecting a folder instead of a file. This has no dependency on the WinForms assembly.