
How-To Tutorials - Server-Side Web Development

406 Articles

Node.js Fundamentals and Asynchronous JavaScript

Packt
19 Feb 2016
5 min read
Node.js is a JavaScript-driven technology. The language has been in development for more than 15 years, and it was first used in Netscape. Over the years, developers have found interesting and useful design patterns, which will be of use to us in this book. All of this knowledge is now available to Node.js coders. Of course, there are some differences because we are running the code in a different environment, but we are still able to apply all these good practices, techniques, and paradigms. I always say that it is important to have a good basis for your applications. No matter how big your application is, it should rely on flexible and well-tested code.

Node.js fundamentals

Node.js is a single-threaded technology. This means that every request is processed in only one thread. In other languages, for example Java, the web server instantiates a new thread for every request. However, Node.js is meant to use asynchronous processing, and the idea is that doing this in a single thread can deliver good performance. The problem with single-threaded applications is blocking I/O operations; for example, when we need to read a file from the hard disk to respond to the client. Once a new request lands on our server, we open the file and start reading from it. The problem occurs when another request arrives while the application is still processing the first one. Let's elucidate the issue with the following example:

var http = require('http');

var getTime = function() {
    var d = new Date();
    return d.getHours() + ':' + d.getMinutes() + ':' +
           d.getSeconds() + ':' + d.getMilliseconds();
}

var respond = function(res, str) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(str + '\n');
    console.log(str + ' ' + getTime());
}

var handleRequest = function (req, res) {
    console.log('new request: ' + req.url + ' - ' + getTime());
    if(req.url == '/immediately') {
        respond(res, 'A');
    } else {
        var now = new Date().getTime();
        while(new Date().getTime() < now + 5000) {
            // synchronous reading of the file
        }
        respond(res, 'B');
    }
}

http.createServer(handleRequest).listen(9000, '127.0.0.1');

The http module, which we initialize on the first line, is needed for running the web server. The getTime function returns the current time as a string, and the respond function sends a simple text response to the client's browser and reports that the incoming request has been processed. The most interesting function is handleRequest, which is the entry point of our logic. To simulate the reading of a large file, we create a while cycle that runs for 5 seconds. Once we run the server, we are able to make an HTTP request to http://localhost:9000. In order to demonstrate the single-thread behavior, we send two requests at the same time:

One request is sent to http://localhost:9000, where the server performs a synchronous operation that takes 5 seconds.
The other request is sent to http://localhost:9000/immediately, where the server should respond immediately.

The server's output after pinging both URLs shows that the first request came in at 16:58:30:434 and its response was sent at 16:58:35:440, that is, 5 seconds later. The problem is that the second request is registered only when the first one finishes. That's because the thread belonging to Node.js was busy processing the while loop. Of course, Node.js has a solution for blocking I/O operations.
Blocking operations are transformed into asynchronous functions that accept a callback. Once the operation finishes, Node.js fires the callback, notifying us that the job is done. A huge benefit of this approach is that while the server waits for the result of the I/O, it can process other requests. The entity that handles the external events and converts them into callback invocations is called the event loop. The event loop acts as a really good manager and delegates tasks to various workers. It never blocks; it just waits for something to happen, for example, a notification that a file has been written successfully. Now, instead of reading a file synchronously, we will transform our brief example to use asynchronous code. The modified example looks like the following:

var handleRequest = function (req, res) {
    console.log('new request: ' + req.url + ' - ' + getTime());
    if(req.url == '/immediately') {
        respond(res, 'A');
    } else {
        setTimeout(function() {
            // reading the file
            respond(res, 'B');
        }, 5000);
    }
}

The while loop is replaced with a setTimeout invocation. The result of this change is clearly visible in the server's output: the first request still gets its response after 5 seconds, but the second one is now processed immediately.

Summary

In this article, we went through the most common programming paradigms in Node.js. We learned how Node.js handles parallel requests, how to write modules and make them communicate, and we saw the problems of asynchronous code and their most popular solutions. For more information on Node.js you can refer to the following URLs:

https://www.packtpub.com/web-development/mastering-nodejs
https://www.packtpub.com/web-development/deploying-nodejs
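In the example above, the 5-second setTimeout merely stands in for a real non-blocking I/O call. The following is a minimal sketch of the same handler using Node's built-in fs module; the file path is an illustrative assumption and not part of the original example:

var http = require('http');
var fs = require('fs');

var handleRequest = function (req, res) {
    if (req.url == '/immediately') {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('A\n');
    } else {
        // fs.readFile is asynchronous: the event loop stays free
        // to serve other requests while the file is being read.
        fs.readFile('/tmp/big-file.txt', function (err, data) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end(err ? 'error\n' : 'B (' + data.length + ' bytes)\n');
        });
    }
};

http.createServer(handleRequest).listen(9000, '127.0.0.1');

The shape is the same as the setTimeout version: the handler returns immediately, and the response is sent from the callback once the I/O completes.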


Professional Plone Development: Foreword by Alexander Limi

Packt
22 Oct 2009
9 min read
  Foreword by Alexander Limi, co-founder of Plone It's always fascinating how life throws you a loop now and then that changes your future in a profound way—and you don't realize it at the time. As I sit here almost six years after the Plone project started, it seems like a good time to reflect on how the last years changed everything, and some of the background of why you are holding this book in your hands—because the story about the Plone community is at least as remarkable as the software itself. It all started out in a very classic way—I had just discovered Zope and Python, and wanted to build a simple web application to teach myself how they worked. This was back in 1999, when Zope was still a new, unproven technology, and had more than a few rough spots. I have never been a programmer, but Python made it all seem so simple that I couldn't resist trying to build a simple web application with it. After reading what I could find of documentation at the time, I couldn't quite figure it out—so I ended up in the online Zope chat rooms to see if I could get any help with building my web application. Little did I know that what happened that evening would change my life in a significant way. I met Alan Runyan online, and after trying to assist me, we ended up talking about music instead. We also reached the conclusion that I should focus on what I was passionate about—instead of coding, I wanted to build great user interfaces and make things easy to use. Alan wanted to provide the plumbing to make the system work. For some reason, it just clicked at that point, and we collaborated online and obsessed over the details of the system for months. External factors were probably decisive here too: I was without a job, and my girlfriend had left me a few months prior; Alan had just given up his job as a Java programmer at a failed dot-com company and decided to start his own company doing Python instead—so we both ended up pouring every living hour into the project, and moving at a break-neck pace towards getting the initial version out. We ended up getting a release ready just before the EuroPython Conference in 2002, and this was actually the first time I met Alan in person. We had been working on Plone for the past year just using email and IRC chat—two technologies that are still cornerstones of Plone project communication. I still remember the delight in discovering that we had excellent communication in person as well. What happened next was somewhat surreal for people new to this whole thing: we were sitting in the audience in the "State of Zope" talk held by Paul Everitt. He got to the part of his talk where he called attention to people and projects that he was especially impressed with. When he called out our names and talked about how much he liked Plone—which at this point was still mostly the effort of a handful of people—it made us feel like we were really onto something. This was our defining moment. For those of you who don't know Paul, he is one of the founders of Zope Corporation, and would go on to become our most tireless and hard-working supporter. He got involved in all the important steps that would follow—he put a solid legal and marketing story in place and helped create the Plone Foundation—and did some great storytelling along the way. There is no way to properly express how much Paul has meant to us personally—and to Plone—five years later. His role was crucial in the story of Plone's success, and the project would not be where it is now without him. 
Looking back, it sounds a bit like the classic romanticized start-up stories of Silicon Valley, except that we didn't start a company together. We chose to start two separate companies—in hindsight a very good decision. It never ceases to amaze me how much of an impact the project has had since. We are now an open-source community of hundreds of companies doing Plone development, training, and support. In just the past month, large companies like Novell and Akamai—as well as government agencies like the CIA, and NGOs like Oxfam—have revealed that they are using Plone for their web content management, and more will follow. The Plone Network site, plone.net, lists over 150 companies that offer Plone services, and the entire ecosystem is estimated to have revenues in the hundreds of millions of US dollars annually. This year's Plone Conference in Naples, Italy is expected to draw over 300 developers and users from around the world. Not bad for a system that was conceived and created by a handful of people standing on the shoulders of the giants of the Zope and Python communities. But the real story here is about an amazing community of people—individuals and organizations, large and small—all coming together to create the best content management system on the planet. We meet in the most unlikely locations—from ancient castles and mountain-tops in Austria, to the archipelagos and fjords of Norway, the sandy beaches of Brazil, and the busy corporate offices of Google in Silicon Valley. These events are at the core of the Plone experience, and developers nurture deep friendships within the community. I can say without a doubt that these are the smartest, kindest, most amazing people I have ever had the pleasure to work with. One of those people is Martin Aspeli, whose book you are reading right now. Even though we're originally from the same country, we didn't meet that way. Martin was at the time—and still is—living in London. He had contributed some code to one of our community projects a few months prior, and suggested that we should meet up when he was visiting his parents in Oslo, Norway. It was a cold and dark winter evening when we met at the train station—and ended up talking about how to improve Plone and the community process at a nearby café. I knew there and then that Martin would become an important part of the Plone project. Fast-forward a few years, and Martin has risen to become one of Plone's most important and respected—not to mention prolific—developers. He has architected and built several core components of the Plone 3 release; he has been one of the leaders on the documentation team, as well as an active guide in Plone's help forums. He also manages to fit in a day job at one of the "big four" consulting companies in the world. On top of all this, he was secretly working on a book to coincide with the Plone 3.0 release—which you are now the lucky owner of. This brings me to why this book is so unique, and why we are lucky to have Martin as part of our community. In the fast-paced world of open-source development—and Plone in particular—we have never had the chance to have a book that was entirely up-to-date on all subjects. There have been several great books in the past, but Martin has raised the bar further—by using the writing of a book to inform the development of Plone. If something didn't make sense, or was deemed too complex for the problem it was trying to solve—he would update that part of Plone so that it could be explained in simpler terms. 
It made the book better, and it has certainly made Plone better. Another thing that sets Martin's book apart is his unparalleled ability to explain advanced and powerful concepts in a very accessible way. He has years of experience developing with Plone and answering questions on the support forums, and is one of the most patient and eloquent writers around. He doesn't give up until you know exactly what's going on. But maybe more than anything, this book is unique in its scope. Martin takes you through every step from installing Plone, through professional development practices, unit tests, how to think about your application, and even through some common, non-trivial tasks like setting up external caching proxies like Varnish and authentication mechanisms like LDAP. In sum, this book teaches you how to be an independent and skillful Plone developer, capable of running your own company—if that is your goal—or provide scalable, maintainable services for your existing organization. Five years ago, I certainly wouldn't have imagined sitting here, jet-lagged and happy in Barcelona this Sunday morning after wrapping up a workshop to improve the multilingual components in Plone. Nor would I have expected to live halfway across the world in San Francisco and work for Google, and still have time to lead Plone into the future. Speaking of which, how does the future of Plone look like in 2007? Web development is now in a state we could only have dreamt about five years ago—and the rise of numerous great Python web frameworks, and even non-Python solutions like Ruby on Rails has made it possible for the Plone community to focus on what it excels at: content and document management, multilingual content, and solving real problems for real companies—and having fun in the process. Before these frameworks existed, people would often try to do things with Plone that it was not built or designed to do—and we are very happy that solutions now exist that cater to these audiences, so we can focus on our core expertise. Choice is good, and you should use the right tool for the job at hand. We are lucky to have Martin, and so are you. Enjoy the book, and I look forward to seeing you in our help forums, chat rooms, or at one of the many Plone conferences and workshops around the world. — Alexander Limi, Barcelona, July 2007 http://limi.net Alexander Limi co-founded the Plone project with Alan Runyan, and continues to play a key role in the Plone community. He is Plone's main user interface developer, and currently works as a user interaction designer at Google in California.


XPath Support in Oracle JDeveloper - XDK 11g

Packt
15 Oct 2009
11 min read
With SAX and DOM APIs, node lists have to be iterated over to access a particular node. Another advantage of navigating an XML document with XPath is that an attribute node may be selected directly; with DOM and SAX APIs, an element node has to be selected before an element attribute can be selected. Here we will discuss XPath support in JDeveloper.

What is XPath?

XPath is a language for addressing an XML document's elements and attributes. As an example, say you receive an XML document that contains the details of a shipment and you want to retrieve the element/attribute values from the XML document. You don't just want to list the values of all the nodes, but also want to output the values of specific elements or attributes. In such a case, you would use XPath to retrieve the values of those elements and attributes. XPath constructs a hierarchical structure of an XML document, a tree of nodes, which is the XPath data model. The XPath data model consists of seven node types, described below:

Root Node: The root node is the root of the DOM tree. The document element (the root element) is a child of the root node. The root node also has the processing instructions and comments as child nodes.
Element Node: It represents an element in an XML document. The character data, elements, processing instructions, and comments within an element are the child nodes of the element node.
Attribute Node: It represents an attribute of an element, other than a namespace declaration (xmlns) attribute.
Text Node: The character data within an element is a text node. A text node has at least one character of data, and a whitespace is also considered a character of data. By default, the ignorable whitespace after the end of an element and before the start of the following element is also a text node. The ignorable whitespace can be excluded from the DOM tree built by parsing an XML document by setting the whitespace-preserving mode to false with the setPreserveWhitespace(boolean flag) method.
Comment Node: It represents a comment in an XML document, except the comments within the DOCTYPE declaration.
Processing Instruction Node: It represents a processing instruction in an XML document, except the processing instructions within the DOCTYPE declaration. The XML declaration is not considered a processing instruction node.
Namespace Node: It represents a namespace mapping, which consists of a namespace declaration such as xmlns:xsd="http://www.w3.org/2001/XMLSchema". A namespace node consists of a namespace prefix (xsd in the example) and a namespace URI (http://www.w3.org/2001/XMLSchema in the example).

Specific nodes, including element, attribute, and text nodes, may be accessed with XPath, and XPath supports nodes in a namespace. Nodes in XPath are selected with an XPath expression. An expression is evaluated to yield an object of one of the following four types: node set, Boolean, number, or string. For an introduction to XPath, refer to the W3C Recommendation for XPath (http://www.w3.org/TR/xpath). As a brief review, expression evaluation in XPath is performed with respect to a context node. The most commonly used type of expression in XPath is a location path. XPath defines two types of location paths: relative location paths and absolute location paths. A relative location path is defined with respect to a context node and consists of a sequence of one or more location steps separated by "/". A location step consists of an axis, a node test, and predicates.
An example of a location step is:

child::journal[position()=2]

In the example, the child axis contains the child nodes of the context node, the node test is the journal node set, and the predicate selects the second node in the journal node set. An absolute location path is defined with respect to the root node and starts with "/". The difference between a relative location path and an absolute location path is that a relative location path starts with a location step, while an absolute location path starts with "/".

XPath in Oracle XDK 11g

Oracle XML Developer's Kit 11g, which is included in JDeveloper, provides the DOMParser class to parse an XML document and construct a DOM structure of the XML document. An XMLDocument object represents the DOM structure of an XML document. An XMLDocument object may be retrieved from a DOMParser object after an XML document has been parsed. The XMLDocument class provides select methods to select nodes in an XML document with an XPath expression. In this article we shall parse an example XML document with the DOMParser class, obtain an XMLDocument object for the XML document, and select nodes from the document with the XMLDocument class select methods. The different select methods in the XMLDocument class are described below:

selectSingleNode(String XPathExpression): Selects a single node that matches an XPath expression. If more than one node matches the specified expression, the first node is selected. Use this method if you want to select the first node that matches an XPath expression.
selectNodes(String XPathExpression): Selects a node list of nodes that match a specified XPath expression. Use this method if you want to select a collection of similar nodes.
selectSingleNode(String XPathExpression, NSResolver resolver): Selects a single namespace node that matches a specified XPath expression. Use this method if the XML document has nodes in namespaces and you want to select the first node that is in a namespace and matches an XPath expression.
selectNodes(String XPathExpression, NSResolver resolver): Selects a node list of nodes that match a specified XPath expression. Use this method if you want to select a collection of similar nodes that are in a namespace.

The example XML document that is parsed in this article has a namespace declaration for elements in the namespace with the prefix journal. For an introduction to namespaces in XML, refer to the W3C Recommendation on Namespaces in XML 1.0 (http://www.w3.org/TR/REC-xml-names/). catalog.xml, the example XML document, is shown in the following listing:

<?xml version="1.0" encoding="UTF-8"?>
<catalog title="Oracle Magazine" publisher="Oracle Publishing">
  <journal:journal journal_date="November-December 2008">
    <journal:article journal_section="ORACLE DEVELOPER">
      <title>Instant ODP.NET Deployment</title>
      <author>Mark A. Williams</author>
    </journal:article>
    <journal:article journal_section="COMMENT">
      <title>Application Server Convergence</title>
      <author>David Baum</author>
    </journal:article>
  </journal:journal>
  <journal date="March-April 2008">
    <article section="TECHNOLOGY">
      <title>Oracle Database 11g Redux</title>
      <author>Tom Kyte</author>
    </article>
    <article section="ORACLE DEVELOPER">
      <title>Declarative Data Filtering</title>
      <author>Steve Muench</author>
    </article>
  </journal>
</catalog>

Setting the environment

Create an application (called XPath, for example) and a project (called XPath) in JDeveloper. The XPath API will be demonstrated in a Java application.
Therefore, create a Java class in the XPath project with File | New. In the New Gallery window, select Categories | General and Items | Java Class. In the Create Java Class window, specify the class name (XPathParser, for example) and the package name (xpath in the example application), and click on the OK button. To develop an application with XPath, add the required libraries to the project classpath. Select the project node in Application Navigator and select Tools | Project Properties. In the Project Properties window, select the Libraries and Classpath node. To add a library, select the Add Library button, select the Oracle XML Parser v2 library, and click on the OK button in the Project Properties window. We also need to add the XML document that is to be parsed and navigated with XPath. To add an XML document, select File | New. In the New Gallery window, select Categories | General | XML and Items | XML Document, and click on the OK button. In the Create XML File window, specify the file name catalog.xml in the File Name field, and click on the OK button. Copy the catalog.xml listing to the catalog.xml file in the Application Navigator.

XPath Search

In this section, we shall select nodes from the example XML document, catalog.xml, with the XPath Search tool of JDeveloper 11g. The XPath Search tool consists of an Expression field for specifying an XPath expression; specify an XPath expression and click on OK to select nodes matching that expression. The XPath Search tool also has the provision to search for nodes in a specific namespace. An XML namespace is a collection of element and attribute names that are identified by a URI reference. Namespaces are specified in an XML document using namespace declarations; a namespace declaration is an xmlns attribute that maps a namespace prefix to a namespace URI. To navigate catalog.xml with XPath, select catalog.xml in the Application Navigator and select Search | XPath Search. In the following subsections, we shall select example nodes using absolute location paths and relative location paths. Use a relative location path if the XML document is large and a specific node is required. Also, use a relative path if the node from which subnodes are to be selected and the relative location path are known. Use an absolute location path if the XML document is small, or if the relative location path is not known. The objective is to use minimum XPath navigation: use the minimum number of nodes to navigate in order to select the required node.

Selecting nodes with absolute location paths

Next, we shall demonstrate selecting nodes with XPath using various examples. As an example, select all the title elements in catalog.xml. Specify the XPath expression for selecting the title elements in the Expression field of the Apply an XPath Expression on catalog.xml window. The XPath expression to select all title elements is /catalog/journal/article/title. Click on the OK button to select the title elements. The title elements get selected. Title elements from the journal:article elements in the journal namespace do not get selected, because a namespace has not been applied to the XPath expression. As another example, select the title element in the first article element using the XPath expression /catalog/journal/article[1]/title. We are not using namespaces yet. The XPath expression is specified in the Expression field.
The title of the first article element gets selected as shown in the JDeveloper output: Attribute nodes may also be selected with XPath. Attributes are selected by using the "@" prefix. As an example, select the section attribute in the first article element in the journal element. The XPath expression for selecting the section attribute is /catalog/journal/article[1]/@section and is specified in the Expression field. Click on the OK button to select the section attribute. The attribute section gets outputted in JDeveloper. Selecting nodes with relative location paths In the previous examples, an absolute location is used to select nodes. Next, we shall demonstrate selecting an element with a relative location path. As an example, select the title of the first article element in the journal element. The relative location path for selecting the title element is child::catalog/journal/article[position()=1]/title. Specifying the axis as child and node test as catalog selects all the child nodes of the catalog node and is equivalent to an absolute location path that starts with /catalog. If the child nodes of the journal node were required to be selected, specify the node test as journal. Specify the XPath expression in the Expression field and click on the OK button. The title of the first article element in the journal element gets selected as shown here: Selecting namespace nodes XPath Search also has the provision to select elements and attributes in a namespace. To illustrate, select all the title elements in the journal element (that is, in the journal namespace) using the XPath expression /catalog/journal:journal/journal:article/title. First, add the namespaces of the elements and attributes to be selected in the Namespaces text area. Prefix and URI of namespaces are added with the Add button. Specify the prefix in the Prefix column, and the URI in the URI column. Multiple namespace mappings may be added. XPath expressions that select namespace nodes are similar to no-namespace expressions, except that the namespace prefixes are included in the expressions. Elements in the default namespace, which does not have a namespace prefix, are also considered to be in a namespace. Click on the OK button to select the nodes with XPath. The title elements in the journal element (in the journal namespace) get selected and outputted in JDeveloper. Attributes in a namespace may also be selected with XPath Search. As an example, select the section attributes in the journal namespace. Specify the XPath expression to select the section attributes in the Expression field and click on the OK button. Section attributes in the journal namespace get selected.


Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
Getting ready

To run this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command:

sudo su - jenkins

Then print the home directory to the console using the following command:

echo $HOME

We will need to have a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows will use the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our article uses version 1.552. The combination of PhantomJS and YSlow is in no way unique to Jenkins CI; the example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include:

Environment Injector Plugin
JUnit Attachments Plugin
TAP Plugin
xUnit Plugin

To run the demo site, we must have Node.js installed. In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command:

node app.js

How to do it…

To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows:

1. Select the New Item link in Jenkins CI.
2. Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK.
3. To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5.
4. In the Build block, click on Add build step and then click on Execute shell.
5. In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example:

phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http://localhost:3000/css-demo > yslow.xml

6. In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report.
7. In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml.
8. Lastly, click on Save to persist the changes to this job.

Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now.
After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots: The landing page for a performance analysis project in Jenkins CI Note the Test Result Trend graph with the successes and failures. The Test Result report page for a specific build Note that the failed tests in the overall analysis are called out and that we can expand specific items to view their details. The All Tests view of the Test Result report page for a specific build Note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details. How it works… The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold. The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job. Next, we configured when our job will run, adding Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project—chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control. With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following: phantomjs: This is for the PhantomJS executable binary ${HOME}/tmp/yslow.js: This is to refer to the copy of the YSlow library accessible to the Jenkins CI user -i grade: This is to indicate that we want the "Grade" level of report detail -threshold "B": This is to indicate that we want to fail builds with an overall grade of "B" or below -f junit: This is to indicate that we want the results output in the JUnit format http://localhost:3000/css-demo: This is typed in as our target URL > yslow.xml: This is to redirect the JUnit-formatted output to that file on the disk What if PhantomJS isn't on the PATH for the Jenkins CI user? A relatively common problem that we may experience is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. 
Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; in the Properties Content text area, apply the PhantomJS binary's path to the PATH variable, as we would in any other script as follows: PATH=/path/to/phantomjs/bin:${PATH} After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. As our shell command is redirecting the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here. Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses are run, Jenkins CI will accumulate the results on the filesystem, keeping them until they are either manually removed or until a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them. There's more… The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example: Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most will perform at about this level"), or the page that is most likely to be the "worst performing" page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen. Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page (for example, "it is critical to optimize performance for first-time visitors"), or a product demo page (for example, "this is where conversions happen, so this is where performance needs to be best"). Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one. Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs—one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need to only look for the telltale red ball of a failed build in Jenkins CI. The second point worth considering is—where do we point PhantomJS and YSlow for the performance analysis? And how does the target URL's environment affect our interpretation of the results? If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss—we are assessing exactly what needs to be assessed. 
But if we are analyzing performance in production, then it's already too late: the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of the customers. However, these environments are likely to be different from production despite our best efforts. For example, though we may be "doing everything else right", perhaps our staging server causes all traffic to come back from a single hostname, and thus we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this, but don't be disheartened; it is better to have results that are slightly off than to have no results at all!

Using TAP format

If JUnit-formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job:

In the Command text area of our Execute shell block, we would enter the following command:

phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap

In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field.

Everything else about using TAP instead of JUnit-formatted results is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin also adds an additional link in the job for us, TAP Extended Test Results. One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute Shell block (separating them with the && operator) and then set our Test Results target to be *.tap. This will conveniently combine the results of all our performance analyses into one.

Summary

In this article, we saw how to set up an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library.


Moodle Plugins

Packt
25 Oct 2011
8 min read
  (For more resources on Moodle, see here.) There are a number of additional plugin types; namely, Enrolments, Authentication, Message outputs, Licences, and Web services. Plugins—an overview Moodle plugins are modules that provide some specific, usually ring-fenced, functionality. You can access the plugins area via the Plugins menu that is shown in the following screenshot:   Plugins overview displays a list of all installed plugins. The information shown for each plugin includes the Plugin name, an internal Identifier, its Source (Standard or Extension), its Version (in date format), its Availability (enabled or disabled), a link to the plugin Settings, and an option to Uninstall the plugin. The table is useful to get a quick overview of what has been installed on your system and what functionality is available. Some areas contain a significant number of plugins, for instance, Authentication and Portfolios. Other categories only contain one or two plugins. The expectation is that more plugins will be developed in the future, either as part of the Moodle's core or by third-party developers. This guarantees the extensibility of Moodle without the need to change the system itself. Be careful when modifying settings in any of the plugins. Inappropriate values can cause problems throughout the system. The last plugin type in the preceding screenshot is labeled Local plugins. This is the recommended place for any local customizations. These customizations can be changes to existing functionality or the introduction of new features. For more information about local plugins, check out the readme.txt file in the local directory in your dirroot. Module plugins Moodle distinguishes between three types of module plugins that are used in the courses—the front page (which is treated as a course), the My Moodle page, and the user profile pages: Activities modules (which also covers resources) Blocks Filters   Activities modules Navigating to Plugins | Activity modules | Manage activities displays the following screen: The table displays the following information: Column Description Activity module Icon and name of the activity/resource as they appear in courses and elsewhere. Activities The number of times the activity module is used in Moodle. When you click on the number, a table, which displays the courses in which the activity module has been used, is shown. Version Version of the activity module (format YYYYMMDDHH). Hide/Show The opened eye indicates that the activity module is available for use, while the closed eye indicates that it is hidden (unavailable). Delete Performs delete action. All activities, except the Forum activity, can be deleted. Settings Link to activity module settings (not available for all items). Clicking on the Show/Hide icon toggles its state; if it is hidden it will be changed to be shown and vice versa. If an activity module is hidden, it will not appear in the Add an activity or Add a resource drop-down menu in any Moodle course. Hidden activities and resources that are already present in courses are hidden but are still in the system. It means that, once the activity module is visible again, the items will also re-appear in courses. You can delete any Moodle Activity module (except the Forum activity). If you delete an activity or resource that has been used anywhere in Moodle, all the already-created activity modules will also be deleted and so will any associated user data! Deleting an activity module cannot be undone; it has to be installed from scratch. 
It is highly recommended not to delete any activity modules unless you are 100 percent sure that you will never need them again! If you wish to prevent usage of an activity or resource type, it is better to hide it instead of deleting it. The Feedback activity has been around for some time as a third-party add-on. It is hidden by default because it has been newly introduced in the core of Moodle 2, due to its popularity. You might probably want to make this available for your teachers. The settings are different for each activity module. For example, the settings for the Assignment module only contain three parameters, whereas the settings for the Quiz module allow the modification of a wide range of parameters. The settings for Moodle Activity modules are not covered here, as they are mostly self-explanatory and also dealt with in great detail in the Moodle Docs of the respective modules. It is further expected that the activity modules will undergo a major overhaul in the 2.x versions to come, making any current explanations obsolete. Configuration of blocks Navigating to Plugins | Blocks | Manage blocks displays a table, as shown in the screenshot that follows. It displays the same type of information as for Activity modules. Some blocks allow multiple instances, that is, the block can be used more than once on a page. For example, you can only have one calendar, whereas you can have as many Remote RSS Feeds as you wish. You cannot control this behavior, as it is controlled by the block itself. You can delete any Moodle block. If you delete a block that is used anywhere in Moodle, all the already-created content will also be deleted. Deleting a block cannot be undone; it has to be installed from scratch. Do not delete or hide the Settings block, as you will not be able to access any system settings anymore! Also, do not delete or hide the Navigation block, as users will not be able to access a variety of pages. Most blocks are shown by default (except the Feedback and Global search blocks). Some blocks require additional settings to be set elsewhere for the block to function. For example, RSS feeds and tags have to be enabled in Advanced features, the Feedback activity module has to be shown, or global search has to be enabled (via Development | Experimental | Experimental settings).   The parameters of all standard Moodle blocks are explained in the respective Moodle Docs pages. Configuration of filters Filters scan any text that has been entered via the Moodle HTML editor and automatically transform it into different, often more complex, forms. For example, entries or concepts in glossaries are automatically hyperlinked in text, URLs pointing to MP3 or other audio files become embedded, flash-based controls (that offer pause and rewind functionality) appear, uploaded videos are given play controls, and so on. Moodle ships with 12 filters, which are accessed via Plugins | Filters | Manage filters:   By default, all filters are disabled. You can enable them by changing the Active? status to On or Off, but available. If the status is set to On, it means that the filter is activated throughout the system, but can be de-activated locally. If the status is set to Off, but available, it means that the filter is not activated, but can be enabled locally. In the preceding screenshot, the Multimedia plugins and Display emoticons as images (smileys) filters have been turned On and will be used throughout the system, as they are very popular. 
The TeX notation and Glossary auto-linking filters are available, but have to be activated locally. The former is only of use to the users who deal with mathematical or scientific notation and will trigger the Insert equation button in the Moodle editor. The Glossary auto-linking filter might be used in some courses. It can then be switched off temporarily at activity module level when learners have to appear for an exam. Additionally, you can change the order in which the filters are applied to text, using the up and down arrows. The filtering mechanism operates on a first-come, firstserved basis, that is, if a filter detects a text element that has to be transformed, it will do so before the next filter is applied. Each filter can be configured to be applied to Content and headings or Content only, that is, filters will be ignored in names of activity modules. The settings of some filters are described in detail in the Moodle Docs. As with activities and blocks, it is recommended to hide filters if you don't require them on your site. In addition to the filter-specific settings, Moodle provides a number of settings that are shared among all filters. These settings are accessed via the Filters | Common filters menu and are shown in the following screenshot: Setting Description Text cache lifetime It is the time for which Moodle keeps text to be filtered in a dedicated cache. Filter uploaded files By default, only text entered via the Moodle editor is filtered. If you wish to include uploaded files, you can choose any one from the HTML files only and All files options. Filter match once per page Enable this setting if the filter should stop analyzing text after it finds a match, that is, only the first occurrence will be transformed. Filter match once per text Enable this setting if the filter should only generate a single link for the first matching text instance found in each item of text on a page. This setting is ignored if the Filter match once per page parameter is enabled.  


Enhancing Page Elements with Moodle and JavaScript

Packt
04 May 2011
7 min read
Moodle JavaScript Cookbook: over 50 recipes for making your Moodle system more dynamic and responsive with JavaScript.

Introduction

The Yahoo! UI Library (YUI) offers a range of widgets and utilities to bring modern enhancements to your traditional page elements. In this chapter, we will look at a selection of these enhancements, including features often seen on modern interactive interfaces, such as:

Auto-complete: This feature suggests possible values to the user by searching against a list of suggestions as they start typing. We will look at two different ways of using this: first, by providing suggestions as the user types into a text box, and second, by providing a list of possible values for them to select from a combo list box.
Auto-update: This technique will allow us to update an area of the page based on a timed interval, which has many uses as we'll see. In this example, we will look at how to create a clock by updating the time on the page at one-second intervals. This technique could also be used, for example, to update a news feed every minute, or update stock information every hour.
Resizable elements: A simple enhancement which allows users to dynamically resize elements to suit their needs. This could be applied to elements containing a significant amount of text, allowing the user to control the width of the text to suit their personal preference for ideal readability.
Custom tooltips: Tooltips appear when an element is hovered over, displaying the associated title or alternative text (that is, a description of an image or the title of a hyperlink). This enhancement allows us to have more control over the look of the tooltips, making them more visually appealing and more consistent with the overall look and feel of our page.
Custom buttons: This enhancement allows us to completely restyle button elements, modifying their look and feel to be consistent with the rest of our page. This also allows us to have a consistent button style across different platforms and web browsers.

We will once again be using mostly YUI Version 2 widgets and utilities within the YUI Version 3 framework. At the time of writing, few YUI2 widgets have been ported to YUI3. This method allows us the convenience of the improvements afforded by the YUI3 environment combined with the features of the widgets from YUI2.

Adding a text box with auto-complete

A common feature of many web forms is the ability to provide suggestions as the user types, based on a list of possible values. In this example, we enhance a standard HTML input text element with this feature. This technique is useful in situations where we simply wish to offer suggestions to the user that they may or may not choose to select, for example, suggesting existing tags to be applied to a new blog post. They can either select a suggested value that matches one they have started typing, or simply continue typing a new, unused tag.

How to do it...

First, set up a basic Moodle page in the usual way. In this example, we create autocomplete.php with the following content:
<?php
require_once(dirname(__FILE__) . '/../config.php');
$PAGE->set_context(get_context_instance(CONTEXT_SYSTEM));
$PAGE->set_url('/cook/autocomplete.php');
$PAGE->requires->js('/cook/autocomplete.js', true);
?>
<?php echo $OUTPUT->header(); ?>
<div style="width:15em;height:10em;">
    <input id="txtInput" type="text">
    <div id="txtInput_container"></div>
</div>
<?php echo $OUTPUT->footer(); ?>

Secondly, we need to create our associated JavaScript file, autocomplete.js, with the following code:

YUI().use("yui2-autocomplete", "yui2-datasource", function(Y) {
    var YAHOO = Y.YUI2;
    var dataSource = new YAHOO.util.LocalDataSource(
        ["Alpha","Bravo","Beta","Gamma","Golf"]
    );
    var autoCompleteText = new YAHOO.widget.AutoComplete("txtInput",
        "txtInput_container", dataSource);
});

How it works...

Our HTML consists of three elements: a parent div element, and two elements contained within it, namely an input text box and a placeholder div element used to display the auto-complete suggestions. Our JavaScript file then defines a DataSource to be used to provide suggestions, and then creates a new AutoComplete widget based on the HTML elements we have already defined. In this example, we used a LocalDataSource for simplicity, but this may be substituted for any valid DataSource object. Once we have a DataSource object available, we instantiate a new YUI2 AutoComplete widget, passing the following arguments:

The name of the HTML input text element for which to provide auto-complete suggestions
The name of the container element to use to display suggestions
A valid data source object to use to find suggestions

Now when the user starts typing into the text box, any matching auto-complete suggestions are displayed and can be selected.

Adding a combo box with auto-complete

In this example, we will use auto-complete in conjunction with a combo box (drop-down list). This differs from the previous example in one significant way: it includes a drop-down arrow button that allows the user to see the complete list of values without typing first. This is useful in situations where the user may be unsure of a suitable value. In this case, they can click the drop-down button to see suggestions without having to start guessing as they type. Additionally, this method also supports the same match-as-you-type style auto-complete as that of the previous recipe.

How to do it...
Open the autocomplete.php file from the previous recipe for editing, and add the following HTML below the text box based auto-complete control: <div style="width:15em;height:10em;"> <input id="txtCombo" type="text" style="vertical-align:top; position:static;width:11em;"><span id="toggle"></span> <div id="txtCombo_container"></div> </div> Next, open the JavaScript file autocomplete.js, and modify it to match the following code: YUI().use("yui2-autocomplete", "yui2-datasource", "yui2-element", "yui2-button", "yui2-yahoo-dom-event", function(Y) { var YAHOO = Y.YUI2; var dataSource = new YAHOO.util.LocalDataSource ( ["Alpha","Bravo","Beta","Gamma","Golf"] ); var autoCompleteText = new YAHOO.widget.AutoComplete("txtInput", "txtInput_container", dataSource); var autoCompleteCombo = new YAHOO.widget.AutoComplete("txtCombo", "txtCombo_container", dataSource, {minQueryLength: 0, queryDelay: 0}); var toggler = YAHOO.util.Dom.get("toggle"); var tButton = new YAHOO.widget.Button({container:toggler, label:"↓"}); var toggle = function(e) { if(autoCompleteCombo.isContainerOpen()) { autoCompleteCombo.collapseContainer(); } else { autoCompleteCombo.getInputEl().focus(); setTimeout(function() { autoCompleteCombo.sendQuery(""); },0); } } tButton.on("click", toggle); }); You will notice that the HTML we added in this recipe is very similar to the last, with the exception that we included a span element just after the text box. This is used as a placeholder to insert a YUI2 button control. This recipe is somewhat more complicated than the previous one, so we included some extra YUI2 modules: element, button, and yahoo-dom-event. We define the AutoComplete widget in the same way as before, except we need to add two configuration options in an object passed as the fourth argument. Next, we retrieve a reference to the button placeholder, and instantiate a new Button widget to use as the combo box 'drop-down' button. Finally, we define a click handler for the button, and register it. We now see the list of suggestions, which can be displayed by clicking on the drop-down button, as shown in the following screenshot: How it works... The user can type into the box to receive auto-complete suggestions as before, but may now use the combo box style drop-down button instead to see a list of suggestions. When the user clicks the drop-down button, the click event is fired. This click event does the following: Hides the drop-down menu if it is displayed, which allows the user to toggle this list display on/off. If it is not displayed, it sets the focus to the text box (to allow the user to continue typing), and execute a blank query on the auto-complete widget, which will display the list of suggestions. Note that we explicitly enabled this blank query earlier when we defined the AutoComplete widget with the "minQueryLength: 0" option, which allowed queries of length 0 and above.  
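The hard-coded LocalDataSource works well for short lists, but in a real Moodle plugin the suggestions usually come from the database. The sketch below shows how the same widget might be wired to a server-side script instead; the /cook/suggest.php endpoint, its JSON shape, and the extra yui2-connection module are assumptions for illustration only, not part of Moodle or of the recipe above.

YUI().use("yui2-autocomplete", "yui2-datasource", "yui2-connection", function(Y) {
    var YAHOO = Y.YUI2;

    // Hypothetical endpoint returning JSON such as {"results":[{"tag":"Alpha"}, ...]}
    var dataSource = new YAHOO.util.XHRDataSource("/cook/suggest.php");
    dataSource.responseType = YAHOO.util.XHRDataSource.TYPE_JSON;
    dataSource.responseSchema = {
        resultsList: "results", // path to the array of suggestions in the response
        fields: ["tag"]         // the field shown in the suggestion list
    };

    // The widget itself is created exactly as in the recipe above
    var autoCompleteText = new YAHOO.widget.AutoComplete(
        "txtInput", "txtInput_container", dataSource,
        {queryDelay: 0.2, maxResultsDisplayed: 10}
    );
});

The only change is the data source object; the AutoComplete widget, container element, and configuration options continue to work unchanged.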
Getting started with Modernizr using PHP IDE

Packt
30 Apr 2013
5 min read
(For more resources related to this topic, see here.)

From the Modernizr website:

Modernizr is a small JavaScript library that detects the availability of native implementations for next-generation web technologies, i.e. features that stem from the HTML5 and CSS3 specifications. Many of these features are already implemented in at least one major browser (most of them in two or more), and what Modernizr does is, very simply, tell you whether the current browser has this feature natively implemented or not.

Basically, with this library we can see whether the user's browser supports certain features you wish to use on your site. This is important because, unfortunately, not every browser is created the same. Each one has its own implementation of the HTML5 standard, so some features may be available on Google Chrome but not on Internet Explorer. Using Modernizr is a better alternative to the standard, but unreliable, approach of user agent (UA) string checking. Let's begin.

Getting ready

Go ahead and create a new Web Project in Aptana Studio. Once it is set up, add a new folder to the project named js. Next, we need to download the Development Version of Modernizr from the Modernizr download page (http://modernizr.com/download/). You will see options to build your own package; the development version will do until you are ready for production use. As of this writing, the latest version is 2.6.2 and that will be the version we use. Place the downloaded file into the js folder.

How to do it...

Follow these steps: For this exercise, we will simply do a browser test to see if your browser currently supports the HTML5 Canvas element. Create a JavaScript file named canvas.js and add the following code:

if (Modernizr.canvas) {
    var c = document.getElementById("canvastest");
    var ctx = c.getContext("2d");
    // Create gradient
    var grd = ctx.createRadialGradient(75, 50, 5, 90, 60, 100);
    grd.addColorStop(0, "black");
    grd.addColorStop(1, "white");
    // Fill with gradient
    ctx.fillStyle = grd;
    ctx.fillRect(10, 10, 150, 80);
    alert("We can use the Canvas element!");
} else {
    alert("Canvas Element Not Supported");
}

Now add the following to index.html:

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Canvas Support Test</title>
    <script src="js/modernizr-latest.js" type="text/javascript"></script>
</head>
<body>
    <canvas id="canvastest" width="200" height="100" style="border:1px solid #000000">Your browser does not support the HTML5 canvas tag.</canvas>
    <script src="js/canvas.js"></script>
</body>
</html>

Let's preview the code and see what we got. The following screenshot is what you should see:

How it works...

What did we just do? Well, let's break it down:

<script src="js/modernizr-latest.js" type="text/javascript"></script>

Here, we are calling in our Modernizr library that we downloaded previously. Once you do that, Modernizr does some things to your page.
It will redo your opening <html> tag to something like the following (from Google Chrome): <html class=" js flexbox flexboxlegacy canvas canvastext webgl notouch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> This is all the features your browser supports that Modernizr was able to detect. Next up we have our <canvas> element: <canvas id="canvastest" width="200" height="100" style="border:1px solid #000000">Your browser does not support the HTML5 canvas tag.</ canvas> Here, we are just forming a basic canvas that is 200 x 100 with a black border going around it. Now for the good stuff in our canvas.js file, follow this code snippet: <script> if (Modernizr.canvas) { alert("We can use the Canvas element!"); var c=document.getElementById("canvastest"); var ctx=c.getContext("2d"); // Create gradient var grd=ctx.createRadialGradient(75,50,5,90,60,100); grd.addColorStop(0,"black"); grd.addColorStop(1,"white"); // Fill with gradient ctx.fillStyle=grd; ctx.fillRect(10,10,150,80); } else { alert("Canvas Element Not Supported"); } </script> In the first part of this snippet, we used an if statement to see if the browser supports the Canvas element. If it does support canvas, then we are displaying a JavaScript alert and then filling our canvas element with a black gradient. After that, we have our else statement that will alert the user that canvas is not supported on their browser. They will also see the Your browser does not support the HTML5 canvas tag message. That wasn't so bad, was it? There's more... I highly recommend reading over the documentation on the Modernizr website so that you can see all the feature tests you can do with this library. We will do a few more practice examples with Modernizr, and of course, it will be a big component of our RESS project later on in the book. Keeping it efficient For a production environment, I highly recommend taking the build-a-package approach and only downloading a script that contains the tests you will actually use. This way your script is as small as possible. As of right now, the file we used has every test in it; some you may never use. So, to be as efficient as possible (and we want all the efficiency we can get in mobile development), build your file with the tests you'll use or may use. Summary This article provided guidelines on creating a new Web Project in Aptana Studio, creating new folder to the project named js, downloading the Development Version of Mondernizr from the Modernizr download page, and placing the downloaded file into the js folder. Resources for Article : Further resources on this subject: Let's Chat [Article] Blocking versus Non blocking scripts [Article] Building Applications with Spring Data Redis [Article]
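The canvas check above generalizes to any of the detects listed in Modernizr's documentation. As a minimal sketch of the same detect-and-fallback pattern applied to another feature (the "lastVisit" key is only an example value, not part of the recipe above):

// Sketch: gate use of Web Storage behind a Modernizr detect
if (Modernizr.localstorage) {
    // The browser supports the Web Storage API
    localStorage.setItem("lastVisit", new Date().toString());
} else {
    // Fall back to a plain cookie, which works everywhere
    document.cookie = "lastVisit=" + encodeURIComponent(new Date().toString());
}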

Less with External Applications and Frameworks

Packt
30 Apr 2015
11 min read
In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics: WordPress and Less Using Less with the Play framework, AngularJS, Meteor, and Rails (For more resources related to this topic, see here.) WordPress and Less Nowadays, WordPress is not only used for weblogs, but it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. The plugins add additional functionalities to the system, and the themes handle the look and feel of a website built with WordPress. They work independently of each other and are also independent of the theme. The theme does not depend on plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. The WordPress theme developers can use Less to compile the CSS code of the themes and the plugins. Using the Sage theme by Roots with Less Sage is a WordPress starter theme. You can use it to build your own theme. The theme is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp. More information about how to use Gulp and Bower for the WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found at assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme. You will have to rebuild the Sage theme after the changes you make. You can use all of the Bootstrap's variables to customize your build. JBST with a built-in Less compiler JBST is also a WordPress starter theme. JBST is intended to be used with the so-called child themes. More information about the WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot: JBST's built-in Less compiler in the WordPress Dashboard The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered by the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways: First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you should simply edit and recompile the code as follows: h1 {color: red;} Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the code block mentioned here in the Less compiler: @navbar-default-bg:             blue; .btn-colored { .button-variant(blue;red;green); } Thirdly, you can set JBST's built-in Less variables as follows: @footer_bg_color: black; Lastly, JBST has its own set of mixins. To set a custom font, you can edit the code as shown here: .include-custom-font(@family: arial,@font-path, @path:   @custom-font-dir, @weight: normal, @style: normal); In the preceding code, the parameters mentioned were used to set the font name (@family) and the path name to the font files (@path/@font-path). The @weight and @style parameters set the font's properties. 
For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme. More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); these files will give you the option to add, for example, a library of prebuilt mixins. After adding the library using this file, the mixins can also be used with the built-in compiler. The Semantic UI WordPress theme The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default setting, your website will look like the following screenshot: Website built with the Semantic UI WordPress theme WordPress plugins and Less As discussed earlier, the WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here: <link rel='stylesheet' id='plugin-name'   href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2'     type='text/css' media='all' /> Unless a plugin provides the Less files for their CSS code, it will not be easy to manage its styles with Less. The WP Less to CSS plugin The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As seen earlier, you can enter the Less code along with the built-in compiler of JBST. This code will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php. Using Less with the Play framework The Play framework helps you in building lightweight and scalable web applications by using Java or Scala. It will be interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read, Learning Play! Framework 2, Andy Petrella, Packt Publishing. To read Petrella's book, visit https://www.packtpub.com/web-development/learning-play-framework-2. To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe activator tool. After installing the activator tool, you can run the following command: > activator new my-first-app play-scala The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add the Scala code in a Java app or the Java code in a Scala app. After installing a new app with the activator command, you can run it by using the following commands: cd my-first-app activator run Now, you can find your app at http://localhost:9000. To enable the Less compilation, you should simply add the sbt-less plugin to your plugins.sbt file as follows: addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6") After enabling the plugin, you can edit the build.sbt file so as to configure Less. You should save the Less files into app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file. 
The CSS files will be saved in public/stylesheets/ and should be called in your templates with the HTML code shown here: <link rel="stylesheet"   href="@routes.Assets.at("stylesheets/main.css")"> In case you are using a library with more files imported into the main file, you can define the filters in the build.sbt file. The filters for these so-called partial source files can look like the following code: includeFilter in (Assets, LessKeys.less) := "*.less" excludeFilter in (Assets, LessKeys.less) := "_*.less" The preceding filters ensure that the files starting with an underscore are not compiled into CSS. Using Bootstrap with the Play framework Bootstrap is a CSS framework. Bootstrap's Less code includes many files. Keeping your code up-to-date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose. To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file: libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2" When using the Bootstrap WebJar, you can import Bootstrap into your project as follows: @import "lib/bootstrap/less/bootstrap.less"; AngularJS and Less AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, and this enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here will give you an example of what repeating the HTML elements with AngularJS will look like: <!doctype html> <html ng-app> <head>    <title>My Angular App</title> </head> <body ng-app>      <ul>      <li ng-repeat="item in [1,2,3]">{{ item }}</li>    </ul> <script   src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/&    angular.min.js"></script> </body> </html> This code should make your page look like the following screenshot: Repeating the HTML elements with AngularJS The ngBoilerplate system The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, you should simply run the following commands on your console: > git clone git://github.com/ngbp/ngbp > cd ngbp > sudo npm -g install grunt-cli karma bower > npm install > bower install > grunt watch And then, open ///path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write the Less code into src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other codes should be imported into this file. Meteor and Less Meteor is a complete open-source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development. You can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X. You can also install it on Windows. Installing Meteor is as simple as running the following command on your console: > curl https://install.meteor.com | /bin/sh You should install the Less package for compiling the CSS code of the app with Less. You can install the Less package by running the command shown here: > meteor add less Note that the Less package compiles every file with the .less extension into CSS. For each file with the .less extension, a separate CSS file is created. 
When you use the partial Less files that should only be imported (with the @import directive) and not compiled into the CSS code itself, you should give these partials the .import.less extension. When using the CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in updating your code. Also running postprocess tasks for the CSS code is not always possible. Many packages for Meteor are available at https://atmospherejs.com/. Some of these packages can help you solve the issue with using partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package. The meteor-bootstrap package can be found at https://github.com/Nemo64/meteor-bootstrap. The meteor-bootstrap package requires the installation of the Less package. Other packages provide you postprocsess tasks, such as autoprefixing your code. Ruby on rails and Less Ruby on Rails, or Rails, for short is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app. After installing the tools and components required for starting with Rails, you can launch a new application by running the following command on your console: > rails new blog Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile file, comment on the sass-rails gem, and add the less-rails gem, as shown here: #gem 'sass-rails', '~> 5.0' gem 'less-rails' # Less gem 'therubyracer' # Ruby Then, create a controller called welcome with an action called index by running the following command: > bin/rails generate controller welcome index The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code as shown here: <h1>Welcome#index</h1> <p>Find me in app/views/welcome/index.html.erb</p> The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the Less code. The Less code in app/assets/stylesheets/welcome.css.less looks as follows: @color: red; h1 { color: @color; } Now, start a web server with the following command: > bin/rails server Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here: The Rails app Summary In this article, you learned how to use Less WordPress, Play, Meteor, AngularJS, Ruby on Rails. Resources for Article: Further resources on this subject: Media Queries with Less [article] Bootstrap 3 and other applications [article] Getting Started with Bootstrap [article]
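Whichever of the integrations above you choose, the Less entry file itself tends to look much the same. The sketch below shows what such a main.less might contain when Bootstrap 3 is available; the import path and the colour values are assumptions that depend on how Bootstrap is installed in your particular project.

// Pull in the framework first (adjust the path to your Bootstrap install)
@import "bootstrap/less/bootstrap.less";

// Override framework variables after the import...
@brand-primary:     #2c3e50;
@navbar-default-bg: @brand-primary;

// ...and reuse Bootstrap's mixins for project-specific rules
.btn-site {
  .button-variant(#fff; @brand-primary; darken(@brand-primary, 10%));
}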

edX E-Learning Course Marketing

Packt
05 Jun 2015
9 min read
In this article by Matthew A. Gilbert, the author of edX E-Learning Course Development, we are going to learn various ways of marketing. (For more resources related to this topic, see here.) edX's marketing options If you don't market your course, you might not get any new students to teach. Fortunately, edX provides you with an array of tools for this purpose, as follows: Creative Submission Tool: Submit the assets required for creating a page in your edX course using the Creative Submission Tool. You can also use those very materials in promoting the course. Access the Creative Submission Tool at https://edx.projectrequest.net/index.php/request. Logo and the Media Kit: Although these are intended for members of the media, you can also use the edX Media Kit for your promotional purposes: you can download high-resolution photos, edX logo visual guidelines (in Adobe Illustrator and EPS versions), key facts about edX, and answers to frequently asked questions. You can also contact the press office for additional information. You can find the edX Media Kit online at https://www.edx.org/media-kit. edX Learner Stories: Using stories of students who have succeeded with other edX courses is a compelling way to market the potential of your course. Using Tumblr, edX Learner Stories offers more than a dozen student profiles. You might want to use their stories directly or use them as a template for marketing materials of your own. Read edX Learner Stories at http://edxstories.tumblr.com. Social media marketing Traditional marketing tools and the options available in the edX Marketing Portal are a fitting first step in promoting your course. However, social media gives you a tremendously enhanced toolkit you can use to attract, convert, and transform spectators into students. When marketing your course with social media, you will also simultaneously create a digital footprint for yourself. This in turn helps establish your subject matter expertise far beyond one edX course. What's more, you won't be alone; there exists a large community of edX instructors and students, including those from other MOOC platforms already online. Take, for example, the following screenshot from edX's Twitter account (@edxonline). edX has embraced social media as a means of marketing and to create a practicing virtual community for those creating and taking their courses. Likewise, edX also actively maintains a page on Facebook, as follows: You can also see how active edX's YouTube channel is in the following screenshot. Note that there are both educational and promotional videos. To get you started in social media—if you're not already there—take a look at the list of 12 social media tools, as follows. Not all of these tools might be relevant to your needs, but consider the suggestions to decide how you might best use them, and give them a try: Facebook (https://www.facebook.com): Create a fan page for your edX course; you can re-use content from your course's About page such as your course intro video, course description, course image, and any other relevant materials. Be sure to include a link from the Facebook page for your course to its About page. Look for ways to share other content from your course (or related to your course) in a way that engages members of your fan page. Use your Facebook page to generate interest and answer questions from potential students. You might also consider creating a Facebook group. 
This can be more useful for current students to share knowledge during the class and to network once it's complete. Visit edX on Facebook at https://www.facebook.com/edX. Google+ (https://plus.google.com): Take the same approach as you did with your Facebook fan page. While this is not as engaging as Facebook, you might find that posting content on Google+ increases traffic to your course's About page due to the increased referrals you are likely to experience via Google search results. Add edX to your circles on Google+ at https://plus.google.com/+edXOnline/posts. Instagram (https://instagram.com): Share behind-the-scenes pictures of you and your staff for your course. Show your students what a day in your life is like, making sure to use a unique hashtag for your course. Picture the possibilities with edX on Instagram at https://instagram.com/edxonline/. LinkedIn (https://www.linkedin.com): Share information about your course in relevant LinkedIn groups, and post public updates about it in your personal account. Again, make sure you include a unique hashtag for your course and a link to the About page. Connect with edX on LinkedIn at https://www.linkedin.com/company/edx. Pinterest (https://www.pinterest.com): Share photos as with Instagram, but also consider sharing infographics about your course's subject matter or share infographics or imagers you use in your actual course as well. You might consider creating pin boards for each course, or one per pin board per module in a course. Pin edX onto your Pinterest pin board at https://www.pinterest.com/edxonline/. Slideshare (http://www.slideshare.net): If you want to share your subject matter expertise and thought leadership with a wider audience, Slideshare is a great platform to use. You can easily post your PowerPoint presentations, class documents or scholarly papers, infographics, and videos from your course or another topic. All of these can then be shared across other social media platforms. Review presentations from or about edX courses on Slideshare at http://www.slideshare.net/search/slideshow?searchfrom=header&q=edx. SoundCloud (https://soundcloud.com): With SoundCloud, you can share MP3 files of your course lectures or create podcasts related to your areas of expertise. Your work can be shared on Twitter, Tumblr, Facebook, and Foursquare, expanding your influence and audience exponentially. Listen to some audio content from Harvard University at https://soundcloud.com/harvard. Tumblr (https://www.tumblr.com): Resembling what the child of WordPress and Twitter might be like, Tumblr provides a platform to share behind-the-scenes text, photos, quotes, links, chat, audios, and videos of your edX course and the people who make it possible. Share a "day in the life" or document in real time, an interactive history of each edX course you teach. Read edX's learner stories at http://edxstories.tumblr.com. Twitter (https://twitter.com): Although messages on Twitter are limited to 140 characters, one tweet can have a big impact. For a faculty wanting to promote its edX course, it is an efficient and cost-effective option. Tweet course videos, samples of content, links to other curriculum, or promotional material. Engage with other educators who teach courses and retweet posts from academic institutions. Follow edX on Twitter at https://twitter.com/edxonline. 
You might also consider subscribing to edX's Twitter list of edX instructors at https://twitter.com/edXOnline/lists/edx-professors-teachers, and explore the Twitter accounts of edX courses by subscribing to that list at https://twitter.com/edXOnline/lists/edx-course-handles. Vine (https://vine.co): A short-format video service owned by Twitter, Vine provides you with 6 seconds to share your creativity, either in a continuous stream or smaller segments linked together like stop motion. You might create a vine showing the inner working of the course faculty and staff, or maybe even ask short questions related to the course content and invite people to reply with answers. Watch vines about MOOCs at https://vine.co. WordPress: WordPress gives you two options to manage and share content with students. With WordPress.com (https://wordpress.com), you're given a selection of standardized templates to use on a hosted platform. You have limited control but reasonable flexibility and limited, if any, expenses. With Wordpress.org (https://wordpress.org), you have more control but you need to host it on your own web server, which requires some technical know-how. The choice is yours. Read posts on edX on the MIT Open Matters blog on Wordpress.com at https://mitopencourseware.wordpress.com/category/edx/. YouTube (https://www.youtube.com): YouTube is the heart of your edX course. It's the core of your curriculum and the anchor of engagement for your students. When promoting your course, use existing videos from your curriculum in your social media campaigns, but identify opportunities to record short videos specifically for promoting your course. Watch course videos and promotional content on the edX YouTube channel at https://www.youtube.com/user/EdXOnline. Personal branding basics Additionally, whether the impact of your effort is immediately evident or not, your social media presence powers your personal brand as a professor. Why is that important? Read on to know. With the possible exception of marketing professors, most educators likely tend to think more about creating and teaching their course than promoting it—or themselves. Traditionally, that made sense, but it isn't practical in today's digitally connected world. Social media opens an area of influence where all educators—especially those teaching an edX course—should be participating. Unfortunately, many professors don't know where or how to start with social media. If you're teaching a course on edX, or even edX Edge, you will likely have some kind of marketing support from your university or edX. But if you are just in an organization using edX Code, or simply want to promote yourself and your edX course, you might be on your own. One option to get you started with social media is the Babb Group, a provider of resources and consulting for online professors, business owners, and real-estate investors. Its founder and CEO, Dani Babb (PhD), says this: "Social media helps you show that you are an expert in a given field. It is an important tool today to help you get hired, earn promotions, and increase your visibility." The Babb Group offers five packages focused on different social media platforms: Twitter, LinkedIn, Facebook, Twitter and Facebook, or Twitter with Facebook and LinkedIn. You can view the Babb Group's social media marketing packages at http://www.thebabbgroup.com/social-media-profiles-for-professors.html. 
Connect with Dani Babb on LinkedIn at https://www.linkedin.com/in/drdanibabb or on Twitter at https://twitter.com/danibabb Summary In this article, we tackled traditional marketing tools, identified options available from edX, discussed social media marketing, and explored personal branding basics. Resources for Article: Further resources on this subject: Constructing Common UI Widgets [article] Getting Started with Odoo Development [article] MODx Web Development: Creating Lists [article]

Moodle CIMS: Installing and Using the Bulk Course Upload Tool

Packt
07 Jan 2011
7 min read
Moodle as a Curriculum and Information Management System Use Moodle to manage and organize your administrative duties; monitor attendance records, manage student enrolment, record exam results, and much more Transform your Moodle site into a system that will allow you to manage information such as monitoring attendance records, managing the number of students enrolled in a particular course, and inter-department communication Create courses for all subjects in no time with the Bulk Course Creation tool Create accounts for hundreds of users swiftly and enroll them in courses at the same time using a CSV file. Part of Packt's Beginner's Guide series: Readers are walked through each task as they read the book with the end result being a sample CIMS Moodle site Using the Bulk Course Upload tool Rather than creating course categories and then courses one at a time and assigning teachers to each course after the course is created, we can streamline the process through the use of the Bulk Course Upload tool. This tool allows you to organize all the information required to create your courses in a CSV (Comma Separated Values) file that is then uploaded into the creation tool and used to create all of your courses at once. Due to its design, the Bulk Course Upload tool only works with MySQL databases. Our MAMP package uses a MySQL database as do the LAMP packages. If your Moodle site is running on a database of a different variety you will not be able to use this tool. Time for action – installing the Bulk Course Upload tool Now that we have our teacher's accounts created, we are ready to use the Bulk Course Creation tool to create all of our courses. First we need to install the tool as an add-on admin report into our Moodle site. To install this tool, do the following: Go to the Modules and plugins area of www.moodle.org. Search for Bulk Course Upload tool. Click on Download latest version to download the tool to your computer. If this does not download the package to your hard drive and instead takes you to a forum in the Using Moodle course on Moodle.org, download the package that was posted in that forum on Sunday, 11 May 2008. Expand the package, contained within, and find the uploadcourse.php file. Place the uploadcourse.php file in your admin directory located inside your main Moodle directory. When logged in as admin, enter the following address in your browser address bar: http://localhost:8888/moodle19/admin/uploadcourse.php. (If you are not using a MAMP package, the first part of the address will of course be different.) You will then see the Upload Course tool explanation screen that looks like the following screenshot: The screen, shown in the previous screenshot, lists the thirty-nine different fields that can be included in a CSV file when creating courses in bulk via this tool. Most of the fields here control settings that are modified in individual courses by clicking on the Settings link found in the Administration block of each course. The following is an explanation of the fields with notes about which ones are especially useful when setting up Moodle as a CIMS: category: You will definitely want to specify categories in order to organize your courses. The best way to organize courses and categories here is such that the organization coincides with the organization of your curriculum as displayed in school documentation and student handbooks. 
If you already have categories in your Moodle site, make sure that you spell the categories exactly as they appear on your site, including capitalization. A mistake will result in the creation of a new category. This field should start with a forward slash followed by the category name with each subcategory also being followed by a forward slash (for example, /Listening/Advanced). cost: If students must pay to enroll in your courses, via the PayPal plugin, you may enter the cost here. You must have the PayPal plugin activated on your site, which can be done by accessing it via the Site Administration block by clicking on Courses and then Enrolments. Additionally, as this book goes to print, the ability to enter a field in the file used by the Bulk Course tool that allows you to set the enrolment plugin, is not yet available. Therefore, if you enter a cost value for a course, it will not be shown until the enrolment plugin for the course is changed manually by navigating to the course and editing the course through the Settings link found in the course Administration block. Check Moodle.org frequently for updates to the Bulk Course Upload tool as the feature should be added soon. enrolperiod: This controls the amount of time a student is enrolled in a course. The value must be entered in seconds so, for example, if you had a course that ran for one month and students were to be unenrolled after that period, you would set this value to 2,592,000 (60 seconds X 60 minutes per hour X 24 hours per day X 30 = 2,592,000). enrollable: This simply controls whether the course is enrollable or not. Entering a 0 will render the course unenrollable and a 1 will set the course to allow enrollments. enrolstartdate and enrolenddate: If you wish to set an enrollment period, you should enter the dates (start and end dates) in these two fields. The dates can be entered in the month/day/year format (for example, 8/1/10). expirynotify: Enter a 1 here to have e-mails sent to the teacher when a student is going to be unenrolled from a course. Enter a 0 to prevent e-mails from being sent when a student is going to be unenrolled. This setting is only functional when the enrolperiod value is set. expirythreshold: Enter the number of days in advance you want e-mails notifying of student unenrollment sent. The explanation file included calls for a value between 10 and 30 days but this value can actually be set to between 1 and 30 days. This setting is only functional when the enrolperiod value and expirynotify and/or notifystudents (see below) is/are set. format: This field controls the format of the course. As of Moodle 1.9.8+ there are six format options included in the standard package. The options are lams, scorm, social, topics, weeks, and weeks CSS, and any of these values can be entered in this field. fullname: This is the full name of the course you are creating (for example, History 101). groupmode: Set this to 0 for no groups, 1 for separate groups, and 2 for visible groups. groupmodeforce: Set this to 1 to force group mode at the course level and 0 to allow group mode to be set in each individual activity. guest: Use a 0 to prevent guests from accessing this course, a 1 to allow uests in the course, and a 2 to allow only guests who have the key into the course. idnumber: You can enter a course ID number using this field. This number is only used for administrative purposes and is not visible to students. 
This is a very useful field for institutions that use identification numbers for courses and can provide a link for connecting the courses within Moodle to other systems. If your institution uses any such numbering system it is recommended that you enter the appropriate numbers here. lang: This is the language setting for the course. Leaving this field blank will result in the Do not force language setting, which can be seen from the Settings menu accessed from within each individual course. Doing so will allow users to toggle between languages that have been installed in the site. To specify a language, and thus force the display of the course using this language, enter the language as it is displayed within the Moodle lang directory (for example, English = en_utf8). maxbytes: This field allows you to set the maximum size of individual files that are uploaded to the course. Leaving this blank will result in the course being created with the site wide maximum file upload size setting. Values must be entered in bytes (for example, 1 MB = 1,048,576 bytes). Refer to an online conversion site such as www.onlineconversion.com to help you determine the value you want to enter here. metacourse: If the course you are creating is a meta course, enter a 1, otherwise enter a 0 or leave the field blank.
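To make the fields described above concrete, a CSV file for the Bulk Course Upload tool might look like the following sketch. Only fields covered above are used, and the course names, categories, and ID numbers are invented examples; check the tool's explanation screen for the exact header names and for the full set of thirty-nine fields your version accepts.

fullname,category,format,enrollable,guest,idnumber,lang
History 101,/Humanities/History,topics,1,0,HIST-101,en_utf8
Advanced Listening,/Listening/Advanced,weeks,1,0,LIST-301,en_utf8

Each row creates one course, with the category path created automatically if it does not already exist, so spelling and capitalization of existing categories must match exactly.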
Teaching Special Kids How to Write Simple Sentences and Paragraphs using Moodle 1.9

Packt
12 Jul 2010
8 min read
Creating a sentence using certain words Last Saturday, Alice went to the circus with her mother. Today is Priscilla's birthday and Alice cannot wait to tell her friends about the funny and dangerous things she saw in the circus. She was really scared when she saw the lions jumping through the flaming hoops. She enjoyed the little dogs jumping and twirling, and the big seals spinning balls. However, she has to remember some of the shows. Shall we help her? Time for action – choosing and preparing the words to be used in a sentence We are first going to choose the words to be used in a sentence and then add a new advanced uploading of files activity to an existing Moodle course. Log in to your Moodle server. Click on the desired course name (Circus). As previously learned, follow the necessary steps to edit the summary for a desired week. Enter Exercise 1 in the Summary textbox and save the changes. Click on the Add an activity combo box for the selected week and choose Advanced uploading of files. Enter Creating a sentence using certain words in Assignment name. Select Verdana in font and 5 (18) in size—the first two combo boxes below Description. Click on the Font Color button (a T with six color boxes) and select your desired color for the text. Click on the big text box below Description and enter the following description of the student's goal for this exercise. You can use the enlarged editor window as shown in the next screenshot. Use a different font color for each of the three words: Lion, Hoops, and Flaming. Close the enlarged editor's window. Select 10MB in Maximum size. This is the maximum size for the file that each student is going to be able to upload as a result for this activity. However, it is very important to check the possibilities offered by your Moodle server with its Moodle administrator. Select 1 in Maximum number of uploaded files. Select Yes in Allow notes. This way, the student will be able to add notes with the sentence. Scroll down and click on the Save and display button. The web browser will show the description for the advanced uploading of files activity. What just happened? We added an advanced uploading of files activity to a Moodle course that will allow a student to write a sentence that has to include the three words specified in the notes section. The students are now going to be able to read the goals for this activity by clicking on its hyperlink on the corresponding week. They are then going to write the sentence and upload their voices with the description of the situation. We added the description of the goal and the three words to use in the sentence with customized fonts and colors using the online text activity editor features. Time for action – writing and recording the sentence We must first download and install Audacity 1.2. We will then help Alice to write a sentence, read it, and record her voice by using Audacity's features. If you do not have it yet, download and install Audacity 1.2 (http://audacity.sourceforge.net/download/). This software will allow the student to record his/her voice and save the recording as an MP3 file compatible with the previously explained Moodle multimedia plugins. In this case, we are covering a basic installation and usage for Audacity 1.2. The integration of sound and music elements for Moodle, including advanced usages for Audacity, is described in depth in Moodle 1.9 Multimedia by João Pedro Soares Fernandes, Packt Publishing. Start Audacity. 
Next, it is necessary to download the LAME MP3 encoder to make it possible for Audacity to export the recorded audio in the MP3 file format. Open your default web browser and go to the Audacity web page that displays the instructions to install the correct version of the LAME MP3 encoder, http://audacity.sourceforge.net/help/faq?s=install&item=lame-mp3. Click on the LAME download page hyperlink and click on the hyperlink under For Audacity on Windows, in this case, Lame_v3.98.2_for_Audacity_on_Windows.exe. Run the application, read the license carefully, and follow the necessary steps to finish the installation. The default folder for the LAME MP3 encoder is C:Program FilesLame for Audacity, as shown in the following screenshot: Minimize Audacity. Log in to your Moodle server using the student role. Click on the course name (Circus). Click on the Creating a sentence using certain words link on the corresponding week. The web browser will show the description for the activity and the three words to be used in the sentence. Click on the Edit button below Notes. Moodle will display a big text area with an HTML editor. Select Verdana in font and 5 (18) in size. Write a sentence, The lion jumps through the flaming hoops., as shown in the next screenshot: Go back to Audacity. Resize and move its window in order to be able to see the sentence you have recently written. Click on the Record button (the red circle) and start reading the sentence. Audacity will display the waveform of the audio track being recorded, as shown in the next screenshot: You need a microphone connected to the computer in order to record your voice with Audacity. Once you finish reading the sentence, click on the Stop button (the yellow square). Audacity will stop recording your voice. Select File | Export As MP3 from Audacity's main menu. Save the MP3 audio file as mysentence.mp3 in your documents folder. Audacity will display a message indicating that it uses the freely available LAME library to handle MP3 file encoding, as shown in the next screenshot: Click on Yes and browse to the folder where you installed the LAME MP3 encoder, by default, C:Program FilesLame for Audacity. Click on Open and Audacity will display a dialog box to edit some properties for the MP3 file. Click on OK and it will save the MP3 file, mysentence.mp3, in your documents folder. Next, go back to your web browser with the Moodle activity, scroll down, and click on the Save changes button. Click on the Browse button below Submission draft. Browse to the folder that holds your MP3 audio file with the recorded sentence, your documents folder, select the file to upload, mysentence.mp3, and click on Open. Then, click on Upload this file to upload the MP3 audio file to the Moodle server. The file name, mysentence.mp3, will appear below Submission draft if the MP3 file could finish the upload process without problems, as shown in the next screenshot. Next, click on Continue. Click on Send for marking and then on Yes. A new message, Assignment was already submitted for marking and cannot be updated, will appear below the Notes section with the sentence. Log out and log in with your normal user and role. You can check the submitted assignments by clicking on the Creating a sentence using certain words link on the corresponding week and then on View x submitted assignments. Moodle will display the links for the notes and the uploaded file for each student that submitted this assignment, as shown in the next screenshot. 
You will be able to read the notes and listen to the recorded sentence by clicking on the corresponding links. Once you have checked the results, click on Grade in the corresponding row in the grid. A feedback window will appear with a text editor and a drop-down list with the possible grades. Select the grade in the Grade drop-down list and write any feedback in the text editor, as shown in the next screenshot. Then click on Save changes. The final grade will appear in a corresponding cell in the grid. What just happened? In this activity, we defined a simple list of words and we asked the student to write a simple sentence. In this case, there is no image or multimedia resource, and therefore, they have to use their imagination. The child has to read and understand the three words. He/she has to associate them, imagine a situation and say and/or write a sentence. Sometimes, it is going to be too difficult for the child to write the sentence. In this case, he/she can work with the help of a therapist or a family member to run the previously explained software and record the sentence. This way, it is going to be possible to evaluate the results of this exercise even if the student cannot write a complete sentence with the words. Have a go hero – discussing the results in Moodle forums The usage of additional software to record the voice in order to solve the exercises can be challenging for the students and their parents. Prepare answers of frequently asked questions in the forums offered by Moodle. This way, you can interact with the students and their parents through other channels in Moodle, with different feedback possibilities. You can access the forums for each Moodle course by clicking on Forums in the Activities panel.

A Test-Driven Data Model

Packt
13 Jan 2016
17 min read
In this article by Dr. Dominik Hauser, author of Test-driven Development with Swift, we will cover the following topics: Implementing a To-Do item Implementing the location iOS apps are often developed using a design pattern called Model-View-Controller (MVC). In this pattern, each class (also, a struct or enum) is either a model object, a view, or a controller. Model objects are responsible to store data. They should be independent from the kind of presentation. For example, it should be possible to use the same model object for an iOS app and command-line tool on Mac. View objects are the presenters of data. They are responsible for making the objects visible (or in case of a VoiceOver-enabled app, hearable) for users. Views are special for the device that the app is executed on. In the case of a cross-platform application view, objects cannot be shared. Each platform needs its own implementation of the view layer. Controller objects communicate between the model and view objects. They are responsible for making the model objects presentable. We will use MVC for our to-do app because it is one of the easiest design patterns, and it is commonly used by Apple in their sample code. This article starts with the test-driven development of the model layer of our application. for more info: (For more resources related to this topic, see here.) Implementing the To-Do item A to-do app needs a model class/struct to store information for to-do items. We start by adding a new test case to the test target. Open the To-Do project and select the ToDoTests group. Navigate to File | New | File, go to iOS | Source | Unit Test Case Class, and click on Next. Put in the name ToDoItemTests, make it a subclass of XCTestCase, select Swift as the language, and click on Next. In the next window, create a new folder, called Model, and click on Create. Now, delete the ToDoTests.swift template test case. At the time of writing this article, if you delete ToDoTests.swift before you add the first test case in a test target, you will see a pop up from Xcode, telling you that adding the Swift file will create a mixed Swift and Objective-C target: This is a bug in Xcode 7.0. It seems that when adding the first Swift file to a target, Xcode assumes that there have to be Objective-C files already. Click on Don't Create if this happens to you because we will not use Objective-C in our tests. Adding a title property Open ToDoItemTests.swift, and add the following import expression right below import XCTest: @testable import ToDo This is needed to be able to test the ToDo module. The @testable keyword makes internal methods of the ToDo module accessible by the test case. Remove the two testExample() and testPerformanceExample()template test methods. The title of a to-do item is required. Let's write a test to ensure that an initializer that takes a title string exists. Add the following test method at the end of the test case (but within the ToDoItemTests class): func testInit_ShouldTakeTitle() {    ToDoItem(title: "Test title") } The static analyzer built into Xcode will complain about the use of unresolved identifier 'ToDoItem': We cannot compile this code because Xcode cannot find the ToDoItem identifier. Remember that not compiling a test is equivalent to a failing test, and as soon as we have a failing test, we need to write an implementation code to make the test pass. To add a file to the implementation code, first click on the ToDo group in Project navigator. Otherwise, the added file will be put into the test group. 
Go to File | New | File, navigate to the iOS | Source | Swift File template, and click on Next. Create a new folder called Model. In the Save As field, put in the name ToDoItem.swift, make sure that the file is added to the ToDo target and not to the ToDoTests target, and click on Create. Open ToDoItem.swift in the editor, and add the following code: struct ToDoItem { } This code is a complete implementation of a struct named ToDoItem. So, Xcode should now be able to find the ToDoItem identifier. Run the test by either going to Product | Test or use the ⌘U shortcut. The code does not compile because there is Extra argument 'title' in call. This means that at this stage, we could initialize an instance of ToDoItem like this: let item = ToDoItem() But we want to have an initializer that takes a title. We need to add a property, named title, of the String type to store the title: struct ToDoItem {    let title: String } Run the test again. It should pass. We have implemented the first micro feature of our to-do app using TDD. And it wasn't even hard. But first, we need to check whether there is anything to refactor in the existing test and implementation code. The tests and code are clean and simple. There is nothing to refactor as yet. Always remember to check whether refactoring is needed after you have made the tests green. But there are a few things to note about the test. First, Xcode shows a warning that Result of initializer is unused. To make this warning go away, assign the result of the initializer to an underscore _ = ToDoItem(title: "Test title"). This tells Xcode that we know what we are doing. We want to call the initializer of ToDoItem, but we do not care about its return value. Secondly, there is no XCTAssert function call in the test. To add an assert, we could rewrite the test as follows: func testInit_ShouldTakeTitle() {    let item = ToDoItem(title: "Test title")    XCTAssertNotNil(item, "item should not be nil") } But in Swift an non-failable initializer cannot return nil. It always returns a valid instance. This means that the XCTAssertNotNil() method is useless. We do not need it to ensure that we have written enough code to implement the tested micro feature. It is not needed to drive the development and it does not make the code better. In the following tests, we will omit the XCTAssert functions when they are not needed in order to make a test fail. Before we proceed to the next tests, let's set up the editor in a way that makes the TDD workflow easier and faster. Open ToDoItemTests.swift in the editor. Open Project navigator, and hold down the option key while clicking on ToDoItem.swift in the navigator to open it in the assistant editor. Depending on the size of your screen and your preferences, you might prefer to hide the navigator again. With this setup, you have the tests and code side by side, and switching from a test to code and vice versa takes no time. In addition to this, as the relevant test is visible while you write the code, it can guide the implementation. Adding an item description property A to-do item can have a description. We would like to have an initializer that also takes a description string. To drive the implementation, we need a failing test for the existence of that initializer: func testInit_ShouldTakeTitleAndDescription() {    _ = ToDoItem(title: "Test title",    itemDescription: "Test description") } Again, this code does not compile because there is Extra argument 'itemDescription' in call. 
To make this test pass, we add an itemDescription property of type String? to ToDoItem:

struct ToDoItem {
    let title: String
    let itemDescription: String?
}

Run the tests. The testInit_ShouldTakeTitle() test now fails (that is, it does not compile) because there is Missing argument for parameter 'itemDescription' in call. The reason for this is that we are using a feature of Swift where structs get an automatic initializer with arguments setting their properties. The initializer call in the first test only has one argument, and, therefore, the test fails. To make the two tests pass again, replace the initializer call in testInit_ShouldTakeTitle() with this:

_ = ToDoItem(title: "Test title", itemDescription: nil)

Run the tests to check whether all the tests pass again. But now the initializer call in the first test looks bad. We would like to be able to have a short initializer with only one argument in case the to-do item only has a title. So, the code needs refactoring. To have more control over the initialization, we have to implement it ourselves. Add the following code to ToDoItem:

init(title: String, itemDescription: String? = nil) {
    self.title = title
    self.itemDescription = itemDescription
}

This initializer has two arguments. The second argument has a default value, so we do not need to provide both arguments. When the second argument is omitted, the default value is used. Before we refactor the tests, run the tests to make sure that they still pass. Then, remove the second argument from the initializer call in testInit_ShouldTakeTitle():

func testInit_ShouldTakeTitle() {
    _ = ToDoItem(title: "Test title")
}

Run the tests again to make sure that everything still works.

Removing a hidden source for bugs

To be able to use a short initializer, we need to define it ourselves. But this also introduces a new source of potential bugs. We can remove the two micro features we have implemented and still have both tests pass. To see how this works, open ToDoItem.swift, and comment out the properties and the assignments in the initializer:

struct ToDoItem {
    //let title: String
    //let itemDescription: String?

    init(title: String, itemDescription: String? = nil) {
        //self.title = title
        //self.itemDescription = itemDescription
    }
}

Run the tests. Both tests still pass. The reason for this is that they do not check whether the values of the initializer arguments are actually set to any of the ToDoItem properties. We can easily extend the tests to make sure that the values are set. First, let's change the name of the first test to testInit_ShouldSetTitle(), and replace its contents with the following code:

let item = ToDoItem(title: "Test title")
XCTAssertEqual(item.title, "Test title",
    "Initializer should set the item title")

This test does not compile because ToDoItem does not have a property title (it is commented out). This shows us that the test is now testing our intention. Remove the comment signs for the title property and the assignment of the title in the initializer, and run the tests again. All the tests pass. Now, replace the second test with the following code:

func testInit_ShouldSetTitleAndDescription() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description")

    XCTAssertEqual(item.itemDescription, "Test description",
        "Initializer should set the item description")
}

Remove the remaining comment signs in ToDoItem, and run the tests again.
Both tests pass again, and they now test whether the initializer works. Adding a timestamp property A to-do item can also have a due date, which is represented by a timestamp. Add the following test to make sure that we can initialize a to-do item with a title, a description, and a timestamp: func testInit_ShouldSetTitleAndDescriptionAndTimestamp() {    let item = ToDoItem(title: "Test title",        itemDescription: "Test description",        timestamp: 0.0)      XCTAssertEqual(0.0, item.timestamp,        "Initializer should set the timestamp") } Again, this test does not compile because there is an extra argument in the initializer. From the implementation of the other properties, we know that we have to add a timestamp property in ToDoItem and set it in the initializer: struct ToDoItem {    let title: String    let itemDescription: String?    let timestamp: Double?       init(title: String,        itemDescription: String? = nil,        timestamp: Double? = nil) {                   self.title = title            self.itemDescription = itemDescription            self.timestamp = timestamp    } } Run the tests. All the tests pass. The tests are green, and there is nothing to refactor. Adding a location property The last property that we would like to be able to set in the initializer of ToDoItem is its location. The location has a name and can optionally have a coordinate. We will use a struct to encapsulate this data into its own type. Add the following code to ToDoItemTests: func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {    let location = Location(name: "Test name") } The test is not finished, but it already fails because Location is an unresolved identifier. There is no class, struct, or enum named Location yet. Open Project navigator, add Swift File with the name Location.swift, and add it to the Model folder. From our experience with the ToDoItem struct, we already know what is needed to make the test green. Add the following code to Location.swift: struct Location {    let name: String } This defines a Location struct with a name property and makes the test code compliable again. But the test is not finished yet. Add the following code to testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation(): func testInit_ShouldTakeTitleAndDescriptionAndTimestampAndLocation() {    let location = Location(name: "Test name")    let item = ToDoItem(title: "Test title",        itemDescription: "Test description",        timestamp: 0.0,        location: location)      XCTAssertEqual(location.name, item.location?.name,        "Initializer should set the location") } Unfortunately, we cannot use location itself yet to check for equality, so the following assert does not work: XCTAssertEqual(location, item.location,    "Initializer should set the location") The reason for this is that the first two arguments of XCTAssertEqual() have to conform to the Equatable protocol. Again, this does not compile because the initializer of ToDoItem does not have an argument called location. Add the location property and the initializer argument to ToDoItem. The result should look like this: struct ToDoItem {    let title: String    let itemDescription: String?    let timestamp: Double?    let location: Location?       init(title: String,        itemDescription: String? = nil,        timestamp: Double? = nil,        location: Location? 
= nil) {                   self.title = title            self.itemDescription = itemDescription            self.timestamp = timestamp            self.location = location    } } Run the tests again. All the tests pass and there is nothing to refactor. We have now implemented a struct to hold the to-do items using TDD. Implementing the location In the previous section, we added a struct to hold the location information. We will now add tests to make sure Location has the needed properties and initializer. The tests could be added to ToDoItemTests, but they are easier to maintain when the test classes mirror the implementation classes/structs. So, we need a new test case class. Open Project navigator, select the ToDoTests group, and add a unit test case class with the name LocationTests. Make sure to go to iOS | Source | Unit Test Case Class because we want to test the iOS code and Xcode sometimes preselects OS X | Source. Choose to store the file in the Model folder we created previously. Set up the editor to show LocationTests.swift on the left-hand side and Location.swift in the assistant editor on the right-hand side. In the test class, add @testable import ToDo, and remove the testExample() and testPerformanceExample()template tests. Adding a coordinate property To drive the addition of a coordinate property, we need a failing test. Add the following test to LocationTests: func testInit_ShouldSetNameAndCoordinate() {    let testCoordinate = CLLocationCoordinate2D(latitude: 1,        longitude: 2)    let location = Location(name: "",        coordinate: testCoordinate)      XCTAssertEqual(location.coordinate?.latitude,        testCoordinate.latitude,        "Initializer should set latitude")    XCTAssertEqual(location.coordinate?.longitude,        testCoordinate.longitude,        "Initializer should set longitude") } First, we create a coordinate and use it to create an instance of Location. Then, we assert that the latitude and the longitude of the location's coordinate are set to the correct values. We use the 1 and 2 values in the initializer of CLLocationCoordinate2D because it has also an initializer that takes no arguments (CLLocationCoordinate2D()) and sets the longitude and latitude to zero. We need to make sure in the test that the initializer of Location assigns the coordinate argument to its property. The test does not compile because CLLocationCoordinate2D is an unresolved identifier. We need to import CoreLocation in LocationTests.swift: import XCTest @testable import ToDo import CoreLocation The test still does not compile because Location does not have a coordinate property yet. Like ToDoItem, we would like to have a short initializer for locations that only have a name argument. Therefore, we need to implement the initializer ourselves and cannot use the one provided by Swift. Replace the contents of Location.swift with the following code: import CoreLocation   struct Location {    let name: String    let coordinate: CLLocationCoordinate2D?       init(name: String,        coordinate: CLLocationCoordinate2D? = nil) {                   self.name = ""            self.coordinate = coordinate    } } Note that we have intentionally set the name in the initializer to an empty string. This is the easiest implementation that makes the tests pass. But it is clearly not what we want. The initializer should set the name of the location to the value in the name argument. So, we need another test to make sure that the name is set correctly. 
Add the following test to LocationTests: func testInit_ShouldSetName() {    let location = Location(name: "Test name")    XCTAssertEqual(location.name, "Test name",        "Initializer should set the name") } Run the test to make sure it fails. To make the test pass, change self.name = "" in the initializer of Location to self.name = name. Run the tests again to check that now all the tests pass. There is nothing to refactor in the tests and implementation. Let's move on. Summary In this article, we covered the implementation of a to-do item by adding a title property, item description property, timestamp property, and more. We also covered the implementation of a location using the coordinate property. Resources for Article: Further resources on this subject: Share and Share Alike [article] Introducing Test-driven Machine Learning[article] Testing a UI Using WebDriverJS [article]
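A note on the location equality check mentioned earlier: XCTAssertEqual() could not compare two Location values directly because Location does not conform to Equatable. If you prefer the shorter assertion, the following is a minimal sketch of such a conformance for the Swift 2/Xcode 7 setup used in this article. It is not part of the book's test-driven flow (you would normally drive it with its own failing test first), and it assumes that two locations count as equal when both the name and the coordinate match:

extension Location: Equatable {}

func == (lhs: Location, rhs: Location) -> Bool {
    // Compare the names first
    guard lhs.name == rhs.name else { return false }
    // CLLocationCoordinate2D provides no == of its own, so compare its parts;
    // comparing the optionals also treats two missing coordinates as equal
    return lhs.coordinate?.latitude == rhs.coordinate?.latitude &&
        lhs.coordinate?.longitude == rhs.coordinate?.longitude
}

With this in place, the location test shown earlier could assert XCTAssertEqual(location, item.location, "Initializer should set the location") directly.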
Working with Gradle

Packt
11 Aug 2015
18 min read
In this article by Mainak Mitra, author of the book Mastering Gradle, we cover some plugins such as War and Scala, which will be helpful in building web applications and Scala applications. Additionally, we will discuss diverse topics such as Property Management, Multi-Project build, and logging aspects. In the Multi-project build section, we will discuss how Gradle supports multi-project build through the root project's build file. It also provides the flexibility of treating each module as a separate project, plus all the modules together like a single project. (For more resources related to this topic, see here.) The War plugin The War plugin is used to build web projects, and like any other plugin, it can be added to the build file by adding the following line: apply plugin: 'war' War plugin extends the Java plugin and helps to create the war archives. The war plugin automatically applies the Java plugin to the build file. During the build process, the plugin creates a war file instead of a jar file. The war plugin disables the jar task of the Java plugin and adds a default war archive task. By default, the content of the war file will be compiled classes from src/main/java; content from src/main/webapp and all the runtime dependencies. The content can be customized using the war closure as well. In our example, we have created a simple servlet file to display the current date and time, a web.xml file and a build.gradle file. The project structure is displayed in the following screenshot: Figure 6.1 The SimpleWebApp/build.gradle file has the following content: apply plugin: 'war'   repositories { mavenCentral() }   dependencies { providedCompile "javax.servlet:servlet-api:2.5" compile("commons-io:commons-io:2.4") compile 'javax.inject:javax.inject:1' } The war plugin adds the providedCompile and providedRuntime dependency configurations on top of the Java plugin. The providedCompile and providedRuntime configurations have the same scope as compile and runtime respectively, but the only difference is that the libraries defined in these configurations will not be a part of the war archive. In our example, we have defined servlet-api as the providedCompile time dependency. So, this library is not included in the WEB-INF/lib/ folder of the war file. This is because this library is provided by the servlet container such as Tomcat. So, when we deploy the application in a container, it is added by the container. You can confirm this by expanding the war file as follows: SimpleWebApp$ jar -tvf build/libs/SimpleWebApp.war    0 Mon Mar 16 17:56:04 IST 2015 META-INF/    25 Mon Mar 16 17:56:04 IST 2015 META-INF/MANIFEST.MF    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/ch6/ 1148 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/ch6/DateTimeServlet.class    0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/lib/ 185140 Mon Mar 16 12:32:50 IST 2015 WEB-INF/lib/commons-io-2.4.jar 2497 Mon Mar 16 13:49:32 IST 2015 WEB-INF/lib/javax.inject-1.jar 578 Mon Mar 16 16:45:16 IST 2015 WEB-INF/web.xml Sometimes, we might need to customize the project's structure as well. For example, the webapp folder could be under the root project folder, not in the src folder. The webapp folder can also contain new folders such as conf and resource to store the properties files, Java scripts, images, and other assets. We might want to rename the webapp folder to WebContent. 
The proposed directory structure might look like this: Figure 6.2 We might also be interested in creating a war file with a custom name and version. Additionally, we might not want to copy any empty folder such as images or js to the war file. To implement these new changes, add the additional properties to the build.gradle file as described here. The webAppDirName property sets the new webapp folder location to the WebContent folder. The war closure defines properties such as version and name, and sets the includeEmptyDirs option as false. By default, includeEmptyDirs is set to true. This means any empty folder in the webapp directory will be copied to the war file. By setting it to false, the empty folders such as images and js will not be copied to the war file. The following would be the contents of CustomWebApp/build.gradle: apply plugin: 'war'   repositories { mavenCentral() } dependencies { providedCompile "javax.servlet:servlet-api:2.5" compile("commons-io:commons-io:2.4") compile 'javax.inject:javax.inject:1' } webAppDirName="WebContent"   war{ baseName = "simpleapp" version = "1.0" extension = "war" includeEmptyDirs = false } After the build is successful, the war file will be created as simpleapp-1.0.war. Execute the jar -tvf build/libs/simpleapp-1.0.war command and verify the content of the war file. You will find the conf folder is added to the war file, whereas images and js folders are not included. You might also find the Jetty plugin interesting for web application deployment, which enables you to deploy the web application in an embedded container. This plugin automatically applies the War plugin to the project. The Jetty plugin defines three tasks; jettyRun, jettyRunWar, and jettyStop. Task jettyRun runs the web application in an embedded Jetty web container, whereas the jettyRunWar task helps to build the war file and then run it in the embedded web container. Task jettyStopstops the container instance. For more information please refer to the Gradle API documentation. Here is the link: https://docs.gradle.org/current/userguide/war_plugin.html. The Scala plugin The Scala plugin helps you to build the Scala application. Like any other plugin, the Scala plugin can be applied to the build file by adding the following line: apply plugin: 'scala' The Scala plugin also extends the Java plugin and adds a few more tasks such as compileScala, compileTestScala, and scaladoc to work with Scala files. The task names are pretty much all named after their Java equivalent, simply replacing the java part with scala. The Scala project's directory structure is also similar to a Java project structure where production code is typically written under src/main/scala directory and test code is kept under the src/test/scala directory. Figure 6.3 shows the directory structure of a Scala project. You can also observe from the directory structure that a Scala project can contain a mix of Java and Scala source files. The HelloScala.scala file has the following content. The output is Hello, Scala... on the console. This is a very basic code and we will not be able to discuss much detail on the Scala programming language. We request readers to refer to the Scala language documentation available at http://www.scala-lang.org/. 
package ch6   object HelloScala {    def main(args: Array[String]) {      println("Hello, Scala...")    } } To support the compilation of Scala source code, Scala libraries should be added in the dependency configuration: dependencies { compile('org.scala-lang:scala-library:2.11.6') } Figure 6.3 As mentioned, the Scala plugin extends the Java plugin and adds a few new tasks. For example, the compileScala task depends on the compileJava task and the compileTestScala task depends on the compileTestJava task. This can be understood easily, by executing classes and testClasses tasks and looking at the output. $ gradle classes :compileJava :compileScala :processResources UP-TO-DATE :classes   BUILD SUCCESSFUL $ gradle testClasses :compileJava UP-TO-DATE :compileScala UP-TO-DATE :processResources UP-TO-DATE :classes UP-TO-DATE :compileTestJava UP-TO-DATE :compileTestScala UP-TO-DATE :processTestResources UP-TO-DATE :testClasses UP-TO-DATE   BUILD SUCCESSFUL Scala projects are also packaged as jar files. The jar task or assemble task creates a jar file in the build/libs directory. $ jar -tvf build/libs/ScalaApplication-1.0.jar 0 Thu Mar 26 23:49:04 IST 2015 META-INF/ 94 Thu Mar 26 23:49:04 IST 2015 META-INF/MANIFEST.MF 0 Thu Mar 26 23:49:04 IST 2015 ch6/ 1194 Thu Mar 26 23:48:58 IST 2015 ch6/Customer.class 609 Thu Mar 26 23:49:04 IST 2015 ch6/HelloScala$.class 594 Thu Mar 26 23:49:04 IST 2015 ch6/HelloScala.class 1375 Thu Mar 26 23:48:58 IST 2015 ch6/Order.class The Scala plugin does not add any extra convention to the Java plugin. Therefore, the conventions defined in the Java plugin, such as lib directory and report directory can be reused in the Scala plugin. The Scala plugin only adds few sourceSet properties such as allScala, scala.srcDirs, and scala to work with source set. The following task example displays different properties available to the Scala plugin. 
The following is a code snippet from ScalaApplication/build.gradle: apply plugin: 'java' apply plugin: 'scala' apply plugin: 'eclipse'   version = '1.0'   jar { manifest { attributes 'Implementation-Title': 'ScalaApplication',     'Implementation-Version': version } }   repositories { mavenCentral() }   dependencies { compile('org.scala-lang:scala-library:2.11.6') runtime('org.scala-lang:scala-compiler:2.11.6') compile('org.scala-lang:jline:2.9.0-1') }   task displayScalaPluginConvention << { println "Lib Directory: $libsDir" println "Lib Directory Name: $libsDirName" println "Reports Directory: $reportsDir" println "Test Result Directory: $testResultsDir"   println "Source Code in two sourcesets: $sourceSets" println "Production Code: ${sourceSets.main.java.srcDirs},     ${sourceSets.main.scala.srcDirs}" println "Test Code: ${sourceSets.test.java.srcDirs},     ${sourceSets.test.scala.srcDirs}" println "Production code output:     ${sourceSets.main.output.classesDir} &        ${sourceSets.main.output.resourcesDir}" println "Test code output: ${sourceSets.test.output.classesDir}      & ${sourceSets.test.output.resourcesDir}" } The output of the task displayScalaPluginConvention is shown in the following code: $ gradle displayScalaPluginConvention … :displayScalaPluginConvention Lib Directory: <path>/ build/libs Lib Directory Name: libs Reports Directory: <path>/build/reports Test Result Directory: <path>/build/test-results Source Code in two sourcesets: [source set 'main', source set 'test'] Production Code: [<path>/src/main/java], [<path>/src/main/scala] Test Code: [<path>/src/test/java], [<path>/src/test/scala] Production code output: <path>/build/classes/main & <path>/build/resources/main Test code output: <path>/build/classes/test & <path>/build/resources/test   BUILD SUCCESSFUL Finally, we will conclude this section by discussing how to execute Scala application from Gradle; we can create a simple task in the build file as follows. task runMain(type: JavaExec){ main = 'ch6.HelloScala' classpath = configurations.runtime + sourceSets.main.output +     sourceSets.test.output } The HelloScala source file has a main method which prints Hello, Scala... in the console. The runMain task executes the main method and displays the output in the console: $ gradle runMain .... :runMain Hello, Scala...   BUILD SUCCESSFUL Logging Until now we have used println everywhere in the build script to display the messages to the user. If you are coming from a Java background you know a println statement is not the right way to give information to the user. You need logging. Logging helps the user to classify the categories of messages to show at different levels. These different levels help users to print a correct message based on the situation. For example, when a user wants complete detailed tracking of your software, they can use debug level. Similarly, whenever a user wants very limited useful information while executing a task, they can use quiet or info level. Gradle provides the following different types of logging: Log Level Description ERROR This is used to show error messages QUIET This is used to show limited useful information WARNING This is used to show warning messages LIFECYCLE This is used to show the progress (default level) INFO This is used to show information messages DEBUG This is used to show debug messages (all logs) By default, the Gradle log level is LIFECYCLE. 
The following is the code snippet from LogExample/build.gradle: task showLogging << { println "This is println example" logger.error "This is error message" logger.quiet "This is quiet message" logger.warn "This is WARNING message" logger.lifecycle "This is LIFECYCLE message" logger.info "This is INFO message" logger.debug "This is DEBUG message" } Now, execute the following command: $ gradle showLogging   :showLogging This is println example This is error message This is quiet message This is WARNING message This is LIFECYCLE message   BUILD SUCCESSFUL Here, Gradle has printed all the logger statements upto the lifecycle level (including lifecycle), which is Gradle's default log level. You can also control the log level from the command line. -q This will show logs up to the quiet level. It will include error and quiet messages -i This will show logs up to the info level. It will include error, quiet, warning, lifecycle and info messages. -s This prints out the stacktrace for all exceptions. -d This prints out all logs and debug information. This is most expressive log level, which will also print all the minor details. Now, execute gradle showLogging -q: This is println example This is error message This is quiet message Apart from the regular lifecycle, Gradle provides an additional option to provide stack trace in case of any exception. Stack trace is different from debug. In case of any failure, it allows tracking of all the nested functions, which are called in sequence up to the point where the stack trace is generated. To verify, add the assert statement in the preceding task and execute the following: task showLogging << { println "This is println example" .. assert 1==2 }   $ gradle showLogging -s …… * Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':showLogging'. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter. executeActions(ExecuteActionsTaskExecuter.java:69)        at …. org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter. execute(SkipOnlyIfTaskExecuter.java:53)        at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter. execute(ExecuteAtMostOnceTaskExecuter.java:43)        at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure (AbstractTask.java:305) ... With stracktrace, Gradle also provides two options: -s or --stracktrace: This will print truncated stracktrace -S or --full-stracktrace: This will print full stracktrace File management One of the key features of any build tool is I/O operations and how easily you can perform the I/O operations such as reading files, writing files, and directory-related operations. Developers with Ant or Maven backgrounds know how painful and complex it was to handle the files and directory operations in old build tools; sometimes you had to write custom tasks and plugins to perform these kinds of operations due to XML limitations in Ant and Maven. Since Gradle uses Groovy, it will make your life much easier while dealing with files and directory-related operations. Reading files Gradle provides simple ways to read the file. You just need to use the File API (application programing interface) and it provides everything to deal with the file. The following is the code snippet from FileExample/build.gradle: task showFile << { File file1 = file("readme.txt") println file1   // will print name of the file file1.eachLine {    println it // will print contents line by line } } To read the file, we have used file(<file Name>). 
This is the default Gradle way to reference files because Gradle adds some path behavior ($PROJECT_PATH/<filename>) due to absolute and relative referencing of files. Here, the first println statement will print the name of the file which is readme.txt. To read a file, Groovy provides the eachLine method to the File API, which reads all the lines of the file one by one. To access the directory, you can use the following file API: def dir1 = new File("src") println "Checking directory "+dir1.isFile() // will return false   for directory println "Checking directory "+dir1.isDirectory() // will return true for directory Writing files To write to the files, you can use either the append method to add contents to the end of the file or overwrite the file using the setText or write methods: task fileWrite << { File file1 = file ("readme.txt")   // will append data at the end file1.append("nAdding new line. n")   // will overwrite contents file1.setText("Overwriting existing contents")   // will overwrite contents file1.write("Using write method") } Creating files/directories You can create a new file by just writing some text to it: task createFile << { File file1 = new File("newFile.txt") file1.write("Using write method") } By writing some data to the file, Groovy will automatically create the file if it does not exist. To write content to file you can also use the leftshift operator (<<), it will append data at the end of the file: file1 << "New content" If you want to create an empty file, you can create a new file using the createNewFile() method. task createNewFile << { File file1 = new File("createNewFileMethod.txt") file1.createNewFile() } A new directory can be created using the mkdir command. Gradle also allows you to create nested directories in a single command using mkdirs: task createDir << { def dir1 = new File("folder1") dir1.mkdir()   def dir2 = new File("folder2") dir2.createTempDir()   def dir3 = new File("folder3/subfolder31") dir3.mkdirs() // to create sub directories in one command } In the preceding example, we are creating two directories, one using mkdir() and the other using createTempDir(). The difference is when we create a directory using createTempDir(), that directory gets automatically deleted once your build script execution is completed. File operations We will see examples of some of the frequently used methods while dealing with files, which will help you in build automation: task fileOperations << { File file1 = new File("readme.txt") println "File size is "+file1.size() println "Checking existence "+file1.exists() println "Reading contents "+file1.getText() println "Checking directory "+file1.isDirectory() println "File length "+file1.length() println "Hidden file "+file1.isHidden()   // File paths println "File path is "+file1.path println "File absolute path is "+file1.absolutePath println "File canonical path is "+file1.canonicalPath   // Rename file file1.renameTo("writeme.txt")   // File Permissions file1.setReadOnly() println "Checking read permission "+ file1.canRead()+" write permission "+file1.canWrite() file1.setWritable(true) println "Checking read permission "+ file1.canRead()+" write permission "+file1.canWrite()   } Most of the preceding methods are self-explanatory. Try to execute the preceding task and observe the output. If you try to execute the fileOperations task twice, you will get the exception readme.txt (No such file or directory) since you have renamed the file to writeme.txt. 
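Note that the fileOperations task above is not idempotent: the second run fails because readme.txt has already been renamed to writeme.txt. A small sketch of one way to guard against this, reusing the file names from the examples above (this task is not part of the original listing), could look like the following:

task safeRename << {
    File source = new File("readme.txt")
    File target = new File("writeme.txt")
    if (source.exists()) {
        // First run: the original file is still present, so rename it
        println "Renamed readme.txt to writeme.txt: ${source.renameTo(target)}"
    } else if (target.exists()) {
        // Later runs: the rename has already happened, nothing to do
        println "writeme.txt already exists, nothing to rename"
    } else {
        println "Neither readme.txt nor writeme.txt was found"
    }
}

The same exists() check can be placed before any delete(), setReadOnly(), or size() call that assumes the file is present.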
Filter files Certain file methods allow users to pass a regular expression as an argument. Regular expressions can be used to filter out only the required data, rather than fetch all the data. The following is an example of the eachFileMatch() method, which will list only the Groovy files in a directory: task filterFiles << { def dir1 = new File("dir1") dir1.eachFileMatch(~/.*.groovy/) {    println it } dir1.eachFileRecurse { dir ->    if(dir.isDirectory()) {      dir.eachFileMatch(~/.*.groovy/) {        println it      }    } } } The output is as follows: $ gradle filterFiles   :filterFiles dir1groovySample.groovy dir1subdir1groovySample1.groovy dir1subdir2groovySample2.groovy dir1subdir2subDir3groovySample3.groovy   BUILD SUCCESSFUL Delete files and directories Gradle provides the delete() and deleteDir() APIs to delete files and directories respectively: task deleteFile << { def dir2 = new File("dir2") def file1 = new File("abc.txt") file1.createNewFile() dir2.mkdir() println "File path is "+file1.absolutePath println "Dir path is "+dir2.absolutePath file1.delete() dir2.deleteDir() println "Checking file(abc.txt) existence: "+file1.exists()+" and Directory(dir2) existence: "+dir2.exists() } The output is as follows: $ gradle deleteFile :deleteFile File path is Chapter6/FileExample/abc.txt Dir path is Chapter6/FileExample/dir2 Checking file(abc.txt) existence: false and Directory(dir2) existence: false   BUILD SUCCESSFUL The preceding task will create a directory dir2 and a file abc.txt. Then it will print the absolute paths and finally delete them. You can verify whether it is deleted properly by calling the exists() function. FileTree Until now, we have dealt with single file operations. Gradle provides plenty of user-friendly APIs to deal with file collections. One such API is FileTree. A FileTree represents a hierarchy of files or directories. It extends the FileCollection interface. Several objects in Gradle such as sourceSets, implement the FileTree interface. You can initialize FileTree with the fileTree() method. The following are the different ways you can initialize the fileTree method: task fileTreeSample << { FileTree fTree = fileTree('dir1') fTree.each {    println it.name } FileTree fTree1 = fileTree('dir1') {    include '**/*.groovy' } println "" fTree1.each {    println it.name } println "" FileTree fTree2 = fileTree(dir:'dir1',excludes:['**/*.groovy']) fTree2.each {    println it.absolutePath } } Execute the gradle fileTreeSample command and observe the output. The first iteration will print all the files in dir1. The second iteration will only include Groovy files (with extension .groovy). The third iteration will exclude Groovy files (with extension .groovy) and print other files with absolute path. You can also use FileTree to read contents from the archive files such as ZIP, JAR, or TAR files: FileTree jarFile = zipTree('SampleProject-1.0.jar') jarFile.each { println it.name } The preceding code snippet will list all the files contained in a jar file. Summary In this article, we have explored different topics of Gradle such as I/O operations, logging, Multi-Project build and testing using Gradle. We also learned how easy it is to generate assets for web applications and Scala projects with Gradle. In the Testing with Gradle section, we learned some basics to execute tests with JUnit and TestNG. In the next article, we will learn the code quality aspects of a Java project. We will analyze a few Gradle plugins such as Checkstyle and Sonar. 
Apart from learning these plugins, we will discuss another topic called Continuous Integration. These two topics will be combined and presented by exploration of two different continuous integration servers, namely Jenkins and TeamCity. Resources for Article: Further resources on this subject: Speeding up Gradle builds for Android [article] Defining Dependencies [article] Testing with the Android SDK [article]
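The introduction to this article mentions that Gradle drives multi-project builds from the root project's build file while still letting each module act as a separate project. As a quick reminder of the basic wiring only, here is a minimal sketch; the module names web and services are purely illustrative:

// settings.gradle in the root project
include ':web', ':services'

// build.gradle in the root project
subprojects {
    apply plugin: 'java'
    repositories {
        mavenCentral()
    }
}

project(':web') {
    dependencies {
        compile project(':services')
    }
}

With this layout, running gradle build from the root directory builds both modules, while gradle :web:build builds only the web module together with the services module it depends on.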
CreateJS – Performing Animation and Transforming Function

Packt
14 Feb 2014
6 min read
(For more resources related to this topic, see here.) Creating animations with CreateJS As you may already know, creating animations in web browsers during web development is a difficult job because you have to write code that has to work in all browsers; this is called browser compatibility. The good news is that CreateJS provides modules to write and develop animations in web browsers without thinking about browser compatibility. CreateJS modules can do this job very well and all you need to do is work with CreateJS API. Understanding TweenJS TweenJS is one of the modules of CreateJS that helps you develop animations in web browsers. We will now introduce TweenJS. The TweenJS JavaScript library provides a simple but powerful tweening interface. It supports tweening of both numeric object properties and CSS style properties, and allows you to chain tweens and actions together to create complex sequences.—TweenJS API Documentation What is tweening? Let us understand precisely what tweening means: Inbetweening or tweening is the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second image.—Wikipedia The same as other CreateJS subsets, TweenJS contains many functions and methods; however, we are going to work with and create examples for specific basic methods, based on which you can read the rest of the documentation of TweenJS to create more complex animations. Understanding API and methods of TweenJS In order to create animations in TweenJS, you don't have to work with a lot of methods. There are a few functions that help you to create animations. Following are all the methods with a brief description: get: It returns a new tween instance. to: It queues a tween from the current values to the target properties. set: It queues an action to set the specified properties on the specified target. wait: It queues a wait (essentially an empty tween). call: It queues an action to call the specified function. play: It queues an action to play (un-pause) the specified tween. pause: It queues an action to pause the specified tween. The following is an example of using the Tweening API: var tween = createjs.Tween.get(myTarget).to({x:300},400). set({label:"hello!"}).wait(500).to({alpha:0,visible:false},1000). call(onComplete); The previous example will create a tween, which: Tweens the target to an x value of 300 with duration 400ms and sets its label to hello!. Waits 500ms. Tweens the target's alpha property to 0with duration 1s and sets the visible property to false. Finally, calls the onComplete function. Creating a simple animation Now, it's time to create our simplest animation with TweenJS. It is a simple but powerful API, which gives you the ability to develop animations with method chaining. Scenario The animation has a red ball that comes from the top of the Canvas element and then drops down. In the preceding screenshot, you can see all the steps of our simple animation; consequently, you can predict what we need to do to prepare this animation. In our animation,we are going to use two methods: get and to. 
The following is the complete source code for our animation:

var canvas = document.getElementById("canvas");
var stage = new createjs.Stage(canvas);
var ball = new createjs.Shape();
ball.graphics.beginFill("#FF0000").drawCircle(0, 0, 50);
ball.x = 200;
ball.y = -50;
var tween = createjs.Tween.get(ball)
    .to({ y: 300 }, 1500, createjs.Ease.bounceOut);
stage.addChild(ball);
createjs.Ticker.addEventListener("tick", stage);

In the first two lines of the JavaScript code snippet, two variables are declared, namely the canvas and stage objects. In the next line, the ball variable is declared, which contains our shape object. In the following line, we drew a red circle with the drawCircle method. Then, in order to place the shape outside the viewport, we set its x coordinate to 200 px and its y coordinate to -50 px, so the ball starts just above the top of the canvas. After this, we created a tween variable, which holds the Tween object; then, using TweenJS method chaining, the to method is called with a duration of 1500 ms and the y property set to 300 px. The third parameter of the to method is the ease function of the tween, which we set to bounceOut in this example. In the following lines, the ball variable is added to Stage and the tick event is added to the Ticker class to keep Stage updated while the animation is playing. The Canvas element, into which all animations and shapes are rendered, is defined in the HTML markup (line 30 of the full listing).

Transforming shapes

CreateJS provides some functions to transform shapes easily on Stage. Each DisplayObject has a setTransform method that allows the transforming of a display object (like a circle). The following shortcut method is used to quickly set the transform properties on the display object. All its parameters are optional. Omitted parameters will have the default value set.

setTransform([x=0] [y=0] [scaleX=1] [scaleY=1] [rotation=0] [skewX=0] [skewY=0] [regX=0] [regY=0])

Furthermore, you can change all of these properties on the DisplayObject directly (like scaleX and scaleY), or set several of them in a single call, as shown in the following example:

displayObject.setTransform(100, 100, 2, 2);

An example of the transforming function

As an instance of using the shape transforming feature with CreateJS, we are going to extend our previous example:

var angle = 0;
window.ball;
var canvas = document.getElementById("canvas");
var stage = new createjs.Stage(canvas);
ball = new createjs.Shape();
ball.graphics.beginFill("#FF0000").drawCircle(0, 0, 50);
ball.x = 200;
ball.y = 300;
stage.addChild(ball);
function tick(event) {
    angle += 0.025;
    var scale = Math.cos(angle);
    ball.setTransform(ball.x, ball.y, scale, scale);
    stage.update(event);
}
createjs.Ticker.addEventListener("tick", tick);

In this example, we have a red circle, similar to the previous example of tweening. We set the coordinates of the circle to (200, 300) and added the circle to the stage object. Next, we have a tick function that transforms the shape of the circle. Inside this function, we have an angle variable that increases with each call. We then take the cosine of the angle variable as a scale factor and pass it to setTransform as both scaleX and scaleY, keeping the ball's x and y coordinates unchanged. The resulting transformation is similar to the following screenshot:

This is a basic example of transforming shapes in CreateJS, but obviously, you can develop better transformations by playing with a shape's properties and values.

Summary

In this article, we covered how to animate and transform objects on the page using CreateJS.

Resources for Article: Further resources on this subject: Introducing a feature of IntroJs [Article] So, what is Node.js? [Article] So, what is Ext JS? [Article]
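The tween example and the transform example above can also be combined. The following sketch is an extension of the article's examples rather than part of the original code: it first drops the ball with a tween and then pulses its scale from the tick handler once the drop has finished, using only the methods already introduced (get, to, call, setTransform, and the Ticker):

var canvas = document.getElementById("canvas");
var stage = new createjs.Stage(canvas);

var ball = new createjs.Shape();
ball.graphics.beginFill("#FF0000").drawCircle(0, 0, 50);
ball.x = 200;
ball.y = -50;
stage.addChild(ball);

// Drop the ball first, then start pulsing its scale
var dropFinished = false;
createjs.Tween.get(ball)
    .to({ y: 300 }, 1500, createjs.Ease.bounceOut)
    .call(function () { dropFinished = true; });

var angle = 0;
function tick(event) {
    if (dropFinished) {
        angle += 0.025;
        var scale = Math.cos(angle);
        ball.setTransform(ball.x, ball.y, scale, scale);
    }
    stage.update(event);
}
createjs.Ticker.addEventListener("tick", tick);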
Roles and Permissions in Moodle Administration - Part 2

Packt
23 Oct 2009
5 min read
Capabilities and Permissions So far, we have given users existing roles in different Moodle contexts. In the following few pages, we want to have a look at the inside of a role that is called capabilities and permissions. Once we have understood them, we will be able to modify existing roles and create entirely new custom ones. Role Definitions Existing roles are accessed via Users | Permissions | Define Roles in the Site Administration block. The screen that will be shown is similar to the familiar roles assignment screen, but has a very different purpose: When you click on a role name, its composition is shown. Each role contains a unique Name, a unique Short name (used when uploading users), and an optional Description. The Legacy role type has been introduced for backward compatibility, to allow old legacy code that has not been fully ported to work with the new system comprising new roles and capabilities. It is expected that this facility will disappear in the future (this might be for some time since a lot of core code depends on it), and should be ignored in due course unless you are working with legacy code or third-party add-ons. In addition to these four fields, each role consists of a large number of capabilities. Currently, Moodle's roles system contains approximately 200 capabilities. A capability is a description of a particular Moodle feature (for example) to grade assignments or to edit a Wiki page. Each capability represents a permissible Moodle action: Permission is a capability and its value, taken together. So each row of the table in the screen shot represents permission. The left column is the capability name and the radio buttons specify the value. So now permission has a description, a unique name, a value, and up to four associated risks. The description, for example, Approve course creation provides a short explanation of the capability. On clicking, the description or the online Moodle documentation is opened in a separate browser. The name, for instance moodle /site: approvecourse, follows a strict naming convention that identifies the capability in the overall role system: level/type: function. The level states to which part of Moodle the capability belongs (such as moodle, mod, block, gradereport, or enroll). The type is the class of the capability and the function identifies the actual functionality. The permission of each capability has to have one of the four values: Permission Description Not Set By default, all permissions for a new role are set to this value. The value in the context where it will be assigned will be inherited from the parent-context. To determine what this value is, Moodle searches upward through each context, until it 'finds' an explicit value (Allow, Prevent or Prohibit) for this capability, i.e. the search terminates when an explicit permission is found. For example, if a role is assigned to a user in a Course context, and a capability has a value of 'Not set,' then the actual permission will be whatever the user has at the category level, or, failing to find an explicit permission at the category level, at the site level. If no explicit permission is found, then the value in the current context becomes Prevent. Allow To grant permission for a capability choose Allow. It applies in the context in which the role will be assigned and all contexts which are below it (children, grand-children, etc). 
For example, when assigned in the course context, students will be able to start new discussions in all forums in that course, unless some forum contains an override or a new assignment with a Prevent or Prohibit value for this capability. Prevent To remove permission for a capability choose Prevent. If it has been granted in a higher context (no matter at what level), it will be overridden. The value can be overridden again in a lower context. Prohibit This is the same as Prevent, but the value cannot be overridden again in a lower context. The value is rarely needed, but useful when an admin wants to prohibit a user from certain functionality throughout the entire site, in which case the capability is set to Prohibit and then assigned in the site context.   Principally, permissions at lower contexts override permissions at higher contexts. The exception is "Prohibit", which by definition cannot be overridden at lower levels. Resolving Permission Conflicts There is a possibility of conflict if two users are assigned the same role in the same context, where one role allows a capability and the other prevents it. In this case, Moodle will look upwards in higher contexts for a decider. This does not apply to Guest accounts, where "Prevent" will be used by default. For example, a user has two roles in the Course context, one that allows functionality and one that prevents it. In this case, Moodle checks the Category and the System contexts respectively, looking for another defined permission. If none is found, then the permission is set to "Prevent". Permission Risks Additionally, Moodle displays the risks associated with each capability, that is, the risks that each capability can potentially raise. They can be any combination of the following four risk types: Risk Icon Description Configuration Users can change site configuration and behavior. XSS Users can add files and texts that allow cross-site scripting (potentially malicious scripts which are embedded in web pages and executed on the user's computer). Privacy Users can gain access to private information of other users. Spam Users can send spam to site users or others. Risks are only displayed. It is not possible to change these settings, since they only act as warnings. When you click on a risk icon, the "Risks" documentation page is opened in a separate browser window. Moodle's default roles have been designed with the following capability risks in mind: