
How-To Tutorials - Server-Side Web Development

406 Articles

EJB 3.1: Controlling Security Programmatically Using JAAS

Packt
17 Jun 2011
5 min read
EJB 3.1 Cookbook: Build real-world EJB solutions with a collection of simple but incredibly effective recipes

The reader is advised to refer to the first two recipes of the previous article on handling security using annotations.

Getting ready

Programmatic security is effected by adding code within methods to determine who the caller is, and then allowing certain actions to be performed based on their capabilities. Two EJBContext interface methods are available to support this type of security: getCallerPrincipal and isCallerInRole. The SessionContext object implements the EJBContext interface.

The SessionContext's getCallerPrincipal method returns a Principal object, which can be used to get the name or other attributes of the user. The isCallerInRole method takes a string representing a role and returns a Boolean value indicating whether the caller of the method is a member of that role.

The steps for controlling security programmatically involve:

1. Injecting a SessionContext instance
2. Using either of the above two methods to effect security

How to do it...

To demonstrate these two methods, we will modify the SecurityServlet to use the VoucherManager's approve method, and then augment the approve method with code using these methods.

First, modify the SecurityServlet try block to use the following code. We create a voucher as usual, then follow with a call to the submit and approve methods:

```
out.println("<html>");
out.println("<head>");
out.println("<title>Servlet SecurityServlet</title>");
out.println("</head>");
out.println("<body>");
voucherManager.createVoucher("Susan Billings", "SanFrancisco",
        BigDecimal.valueOf(2150.75));
voucherManager.submit();
boolean voucherApproved = voucherManager.approve();
if (voucherApproved) {
    out.println("<h3>Voucher was approved</h3>");
} else {
    out.println("<h3>Voucher was not approved</h3>");
}
out.println("<h3>Voucher name: " + voucherManager.getName() + "</h3>");
out.println("</body>");
out.println("</html>");
```

Next, modify the VoucherManager EJB by injecting a SessionContext object using the @Resource annotation:

```
public class VoucherManager {
    ...
    @Resource
    private SessionContext sessionContext;
```

Let's look at the getCallerPrincipal method first. This method returns a Principal object (java.security.Principal), which has only one method of immediate interest: getName, which returns the name of the principal. Modify the approve method so it uses the SessionContext object to get the Principal, and then determines whether the name of the principal is "mary". If it is, approve the voucher:

```
public boolean approve() {
    Principal principal = sessionContext.getCallerPrincipal();
    System.out.println("Principal: " + principal.getName());
    if ("mary".equals(principal.getName())) {
        voucher.setApproved(true);
        System.out.println("approve method returned true");
        return true;
    } else {
        System.out.println("approve method returned false");
        return false;
    }
}
```

Execute the SecurityApplication using "mary" as the user. The application should approve the voucher. Execute the application again with a user of "sally". This execution will result in an exception:

```
INFO: Access exception
```

The getCallerPrincipal method simply returns the principal. This frequently results in the need to explicitly include the name of a user in code. Hard-coding user names is not recommended, and checking against each individual user can be time consuming. It is more efficient to check whether a user is in a role.
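Before moving on to roles, note that the hard-coding problem can at least be softened by externalizing the approver names into configuration. The following sketch is not part of the recipe; the ApproverRegistry class, the approvers.properties file, and its key are assumptions introduced purely for illustration:

```
import java.io.InputStream;
import java.security.Principal;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

// Hypothetical helper: loads approver names from approvers.properties
// (e.g. a line such as "approvers=mary,bob") instead of hard-coding
// them inside the approve() method.
public class ApproverRegistry {

    private final Set<String> approvers = new HashSet<>();

    public ApproverRegistry() {
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/approvers.properties")) {
            if (in != null) {
                props.load(in);
                String names = props.getProperty("approvers", "");
                approvers.addAll(Arrays.asList(names.split("\\s*,\\s*")));
            }
        } catch (Exception e) {
            // Fall through with an empty set; no caller is then an approver.
        }
    }

    public boolean isApprover(Principal principal) {
        return principal != null && approvers.contains(principal.getName());
    }
}
```

Even with the names externalized, per-user checks remain brittle; role-based checks, covered next, are the cleaner option.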
The isCallerInRole method allows us to determine whether the user is in a particular role. It returns a Boolean value indicating whether the user is in the role specified by the method's string argument. Rewrite the approve method to call the isCallerInRole method and pass the string "manager" to it. If the return value is true, approve the voucher:

```
public boolean approve() {
    if (sessionContext.isCallerInRole("manager")) {
        voucher.setApproved(true);
        System.out.println("approve method returned true");
        return true;
    } else {
        System.out.println("approve method returned false");
        return false;
    }
}
```

Execute the application using both "mary" and "sally". The results should be the same as in the previous example, where the getCallerPrincipal method was used.

How it works...

The SessionContext class was used either to obtain a Principal object or to determine whether a user was in a particular role. This required injecting a SessionContext instance and adding code to determine whether the user was permitted to perform certain actions. This approach results in more code than the declarative approach, but it provides more flexibility in controlling access to the application. Together, these techniques give the developer choices in how best to meet the needs of the application.

There's more...

It is possible to take different actions depending on the user's role using the isCallerInRole method. Let's assume we are using programmatic security with multiple roles:

```
@DeclareRoles({"employee", "manager", "auditor"})
```

We can use a validateAllowance method to accept a travel allowance amount and determine whether it is appropriate based on the role of the user:

```
public boolean validateAllowance(BigDecimal allowance) {
    if (sessionContext.isCallerInRole("manager")) {
        if (allowance.compareTo(BigDecimal.valueOf(2500)) <= 0) {
            return true;
        } else {
            return false;
        }
    } else if (sessionContext.isCallerInRole("employee")) {
        if (allowance.compareTo(BigDecimal.valueOf(1500)) <= 0) {
            return true;
        } else {
            return false;
        }
    } else if (sessionContext.isCallerInRole("auditor")) {
        if (allowance.compareTo(BigDecimal.valueOf(1000)) <= 0) {
            return true;
        } else {
            return false;
        }
    } else {
        return false;
    }
}
```

The compareTo method compares two BigDecimal values and returns one of three values:

- -1 if the first number is less than the second number
- 0 if the first and second numbers are equal
- 1 if the first number is greater than the second number

The static valueOf method converts a number to a BigDecimal value, which is then compared to allowance. (A short verification sketch appears at the end of this article.)

Summary

This article covered programmatic EJB security based upon the Java Authentication and Authorization Service (JAAS) API.

Further resources on this subject:

- EJB 3.1: Introduction to Interceptors [Article]
- EJB 3.1: Working with Interceptors [Article]
- Hands-on Tutorial on EJB 3.1 Security [Article]
- EJB 3 Entities [Article]
- Developing an EJB 3.0 entity in WebLogic Server [Article]
- Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article]
- NetBeans IDE 7: Building an EJB Application [Article]
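As promised above, the compareTo and valueOf behavior is easy to verify in isolation. A minimal sketch (the amounts are arbitrary, chosen only to exercise all three return values):

```
import java.math.BigDecimal;

public class CompareToDemo {
    public static void main(String[] args) {
        BigDecimal allowance = BigDecimal.valueOf(1200);

        // compareTo returns -1, 0, or 1
        System.out.println(allowance.compareTo(BigDecimal.valueOf(2500))); // -1 (less)
        System.out.println(allowance.compareTo(BigDecimal.valueOf(1200))); //  0 (equal)
        System.out.println(allowance.compareTo(BigDecimal.valueOf(1000))); //  1 (greater)

        // Each branch of validateAllowance reduces to an expression like this:
        boolean withinEmployeeLimit = allowance.compareTo(BigDecimal.valueOf(1500)) <= 0;
        System.out.println(withinEmployeeLimit); // true
    }
}
```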


Integrating Moodle 2.0 with Mahara and GoogleDocs for Business

Packt
29 Apr 2011
9 min read
Moodle 2.0 for Business Beginner's Guide: Implement Moodle in your business to streamline your interview, training, and internal communication processes

The Repository integration allows admins to set up external content management systems and use them to complement Moodle's own file management system. Using this integration, you can manage content outside of Moodle and publish it to the system once the document or other content is ready. The Portfolio integration enables users to store their Moodle content in an external e-portfolio system to share with evaluators, peers, and others.

Using Google Docs as a repository for Moodle

A growing number of organizations are using Google Docs as their primary office suite. Moodle allows you to add Google Docs as a repository so your course authors can link to word processing, spreadsheet, presentation, and form documents on Google Docs.

Time for action - configuring the Google Docs plugin

To use Google Docs as a repository for Moodle, we first need to configure the plugin, as we did with Alfresco:

1. Log in to Moodle as a site administrator.
2. From the Site Administration menu, select Plugins and then Repositories.
3. Select Manage Repositories from the Repositories menu.
4. Next to the Google Docs plugin, select Enabled and Visible from the Active menu.
5. On the Configure Google Docs plugin page, give the plugin a different name if you refer to Google Docs as something different in your organization.
6. Click on Save.

What just happened

You have now set up the Google Docs repository plugin. Each user will have access to their Google Docs account when they add content to Moodle.

Time for action - adding a Google Doc to your Moodle course

After you have configured the Google Docs plugin, you can add Google Docs to your course:

1. Log in to Moodle as a user with course editing privileges.
2. Turn on the editing mode and select File from the Add a resource... menu in the course section where you want the link to appear.
3. Give the file a name. Remember, the name will be the link the user selects to get the file, so be descriptive.
4. Add a description of the file.
5. In the Content section, click the Add... button to bring up the file browser.
6. Click the Google Docs plugin in the File Picker pop-up window.
7. The first time you access Google Docs from Moodle, you will see a login button on the screen. Click the button and Moodle will take you to the Google Docs login page.
8. Log in to Google Docs. Docs will now display a security warning, letting you know an external application (Moodle) is trying to access your file repository. Click on the Grant Access button at the bottom of the screen.
9. You will now be taken back to the File Picker. Select the file you want to link to your course.
10. If you want to rename the document when it is linked to Moodle, rename it in the Save As text box. Then edit the Author field if necessary and choose a copyright license.
11. Click on Select this file.
12. Select the other options for the file as described in Getting Started with Moodle 2.0 for Business.
13. Click on Save and return to course.

What just happened

You have now added a Google Doc to your Moodle course. You can add any of the Google Doc types to your course and share them with Moodle users.

Google Docs File Formats

The Moodle Google Docs plugin makes a copy of the document in a standard office format (rtf, xls, or ppt), so any edits made to the document after you save it to Moodle will not be displayed.
Have a go hero

Try importing the other Google Docs file formats into your Moodle course and test the download.

Time for reflection

Using Google Docs effectively requires clear goals, planning, integration with organizational workflows, and training. If you want to link Moodle with an external content repository, how will you ensure the implementation is successful? What business processes could you automate by using one of these content services?

Exporting content to e-portfolios

Now that we've integrated Moodle with external content repositories, it's time to turn our attention to exporting content from Moodle. The Moodle 2 portfolio system allows users to export Moodle content in standard formats so they can share their work with other people outside of Moodle, or organize their work into portfolios aimed at a variety of audiences. In a corporate environment, portfolios can be used to demonstrate competency for promotion or performance measurement. They can also be used as a directory of expertise within a company, so others can find people they need for special projects.

One of the more popular open source portfolio systems is called Mahara. Mahara is a dedicated e-portfolio system for creating collections of work and then creating multiple views on those collections for specific audiences. It also includes a blogging platform, resume builder, and social networking tools. In recent versions, Mahara has begun to incorporate social networking features to enable users to find others with similar interests or specific skill sets.

To start, we'll briefly look at installing Mahara, then work through the integration of Moodle with Mahara. Once we've got the two systems talking to each other, we can look at how to export content from Moodle to Mahara and then display it in an e-portfolio.

Time for action - installing Mahara

Mahara is a PHP and MySQL application like Moodle. Mahara and Moodle share a very similar architecture, and are designed to be complementary in many respects. You can use the same server setup we've already created for Moodle in Getting Started with Moodle 2.0 for Business. However, we need to create a new database to house the Mahara data, as well as ensure Mahara has its own space to operate:

1. Go to http://mahara.org. There is a Download link on the right side of the screen.
2. Download the latest stable version (version 1.3 as of this writing). You will need version 1.3 or later to fully integrate with Moodle 2.
3. For the best results, follow the instructions on the Installing Mahara wiki page, http://wiki.mahara.org/System_Administrator%27s_Guide/Installing_Mahara.
4. If you are installing Mahara on the same personal machine as Moodle, be sure to put the Mahara folder at your web server's root level and keep it separate from Moodle. Your URL for Mahara should be similar to your URL for Moodle.

What just happened

You have now installed Mahara on your test system. Once you have Mahara up and running on your test server, you can begin to integrate it with Moodle.

Time for action - configuring the networking and SSO

To begin the process of configuring Moodle and Mahara to work together, we need to enable Moodle Networking. You will need to make sure you have xmlrpc, curl, and openssl installed and configured in your PHP build. Networking allows Moodle to share users and authentication with another system. In this case, we are configuring Moodle to allow Moodle users to automatically log in to Mahara when they log in to Moodle.
This will create a more seamless experience for the users and enable them to move back and forth between the systems. The steps to configure the Mahara portfolio plugin are as follows:

1. From the Site administration menu, select Advanced features.
2. Find the Networking option and set it to On.
3. Select Save changes. The Networking option will then appear in the site admin menu.
4. Select Networking, then Manage Peers.
5. In the Add a new host form, copy the URL of your Mahara site into the hostname field and then select Mahara as the server type.
6. Open a new window and log in to your Mahara site as the site admin.
7. Select the Site Admin tab. On your Mahara site, select Configure Site, then select Networking.
8. Copy the public key from the BEGIN tag to the END CERTIFICATE tag and paste it into the Public Key field in the Moodle networking form.
9. On the resulting page, select the Services tab to set up the services necessary to integrate the portfolio.

You will now need to configure the SSO services. Moodle and Mahara can make the following services available for the other system to consume:

- Remote enrollment service: If you Publish the Remote Enrollment Service, Mahara admins will be able to enroll students in Moodle courses. To enable this, you must also publish the Single Sign On Service Provider service. Subscribe allows you to remotely enroll students in courses on the remote server; it doesn't apply in the context of Mahara.
- Portfolio services: You must enable both Publish and Subscribe to allow users to send content to Mahara.
- SSO (Identity Provider): If you Publish the SSO service, users can go from Moodle to Mahara without having to log in again. If you Subscribe to this service, users can go from Mahara to Moodle without having to log in again.
- SSO (Service Provider): This is the converse of the Identity Provider service. If you enabled Publish previously, you must enable Subscribe here, and vice versa.

Click on Save changes.

What just happened

You have just enabled Single Sign-On between Moodle and Mahara. We are now halfway through the setup; next, we can configure Mahara to listen for Moodle users.

Have a go hero

Moodle Networking is also used to enable Moodle servers to communicate with each other. The Moodle Hub system is built on top of Moodle networking to enable teachers to share courses with each other, and to enable multiple Moodle servers to share users. How could you use this feature to spread Moodle within your organization? Could you create an internal and an external facing Moodle and have them talk to each other? Could different departments each use a Moodle and share access to courses using Moodle networking? For your "have a go hero" activity, design a plan to use Moodle networking within your organization.


Setup Routine for an Enterprise Spring Application

Packt
14 Jan 2016
6 min read
In this article by Alex Bretet, author of the book Spring MVC Cookbook, you will learn to install Eclipse for Java EE developers and Java SE 8. (For more resources related to this topic, see here.)

Introduction

The choice of the Eclipse IDE needs to be discussed, as there is some competition in this domain. Eclipse is popular in the Java community for being an active open source product; it is consequently accessible online to anyone with no restrictions. It also provides, among other usages, very good support for web implementations, particularly for MVC approaches.

Why use the Spring Framework?

The Spring Framework and its community have also contributed to pulling the Java platform forward for more than a decade. Presenting the whole framework in detail would require more than an article. However, the core functionality, based on the principles of Inversion of Control and Dependency Injection through performant access to the bean repository, allows massive reusability. Staying lightweight, the Spring Framework secures great scaling capabilities and could probably suit all modern architectures.

The following recipe is about downloading and installing the Eclipse IDE for JEE developers, and downloading and installing JDK 8 Oracle Hotspot.

Getting ready

This first sequence could appear redundant or unnecessary with regard to your education or experience. However, you will stay away from unidentified bugs (integration or development), and you will be assured of experiencing the same interfaces as the presented screenshots and figures. Also, because third-party products are living things, you will not have to face the surprise of encountering unexpected screens or windows.

How to do it...

You need to perform the following steps to install the Eclipse IDE:

1. Download a distribution of the Eclipse IDE for Java EE developers. We will be using an Eclipse Luna distribution in this article. We recommend you install this version, which can be found at https://www.eclipse.org/downloads/packages/eclipse-ide-java-ee-developers/lunasr1, so that you can follow along with our guidelines and screenshots completely.
2. Download a Luna distribution for the OS and environment of your choice. The product to be downloaded is not a binary installer but a ZIP archive. If you feel confident enough to use another (more recent) version of the Eclipse IDE for Java EE developers, all of them can be found at https://www.eclipse.org/downloads. For the upcoming installations, on Windows, a few target locations are suggested at the root directory C:\. To avoid permission-related issues, it would be better if your Windows user is configured to be a local administrator. If you can't be part of this group, feel free to target installation directories you have write access to.
3. Extract the downloaded archive into an eclipse directory:
   - If you are on Windows, extract into the C:\Users\{system.username}\eclipse directory
   - If you are using Linux, extract into the /home/usr/{system.username}/eclipse directory
   - If you are using Mac OS X, extract into the /Users/{system.username}/eclipse directory
4. Select and download a JDK 8. We suggest you download the Oracle Hotspot JDK. Hotspot is a performant JVM implementation originally built by Sun Microsystems. Now owned by Oracle, the Hotspot JRE and JDK are downloadable for free. Choose the product corresponding to your machine through the Oracle website's link, http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
5. To avoid a compatibility issue later on, stay consistent with the architecture choice (32 or 64 bits) that you made earlier for the Eclipse archive.
6. Install the JDK 8. On Windows, perform the following steps:
   - Execute the downloaded file and wait until you reach the next installation step.
   - On the installation-step window, pay attention to the destination directory and change it to C:\java\jdk1.8.X_XX (where X_XX is the latest current version; we will be using jdk1.8.0_25 in this article).
   - Also, it won't be necessary to install an external JRE, so uncheck the Public JRE feature.
   On Linux/Mac OS, perform the following steps:
   - Download the tar.gz archive corresponding to your environment.
   - Change the current directory to where you want to install Java. For easier instructions, let's agree on the /usr/java directory.
   - Move the downloaded tar.gz archive to this current directory.
   - Unpack the archive with the following command line, targeting the name of your archive: tar zxvf jdk-8u25-linux-i586.tar.gz (this example is for a binary archive corresponding to a Linux x86 machine). You must end up with the /usr/java/jdk1.8.0_25 directory structure containing the subdirectories /bin, /db, /jre, /include, and so on.

How it works...

Eclipse for Java EE developers

We have installed the Eclipse IDE for Java EE developers. Compared to the Eclipse IDE for Java developers, some additional packages come along with it, such as Java EE Developer Tools, Data Tools Platform, and JavaScript Development Tools. This version is appreciated for its capability to manage development servers as part of the IDE itself, to customize Project Facets, and to support JPA. The Luna version is officially Java SE 8 compatible; that has been a decisive factor here.

Choosing a JVM

The choice of JVM implementation could be discussed in terms of performance, memory management, garbage collection, and optimization capabilities. There are lots of different JVM implementations, among them a couple of open source solutions such as OpenJDK and IcedTea (RedHat). It really depends on the application requirements. We have chosen Oracle Hotspot from experience and from reference implementations deployed in production; it can be trusted for a wide range of generic purposes. Hotspot also behaves very well when running Java UI applications; Eclipse is one of them.

Java SE 8

If you haven't already played with Scala or Clojure, it is time to take the functional programming train! With Java SE 8, Lambda expressions reduce the amount of code dramatically, with improved readability and maintainability. We won't implement only this Java 8 feature, but as it is probably the most popular one, it must be highlighted, since it has given massive credit to the paradigm change. It is important nowadays to be familiar with these patterns (a small example appears at the end of this article).

Summary

In this article, you learned how to install Eclipse for Java EE developers and Java SE 8.

Resources for Article:

Further resources on this subject:

- Support for Developers of Spring Web Flow 2 [article]
- Design with Spring AOP [article]
- Using Spring JMX within Java Applications [article]
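As a small taste of the lambda syntax mentioned above, here is a minimal sketch (not from the book; the class name and list contents are arbitrary illustrations):

```
import java.util.Arrays;
import java.util.List;

public class LambdaTaste {
    public static void main(String[] args) {
        List<String> ides = Arrays.asList("Eclipse", "NetBeans", "IntelliJ IDEA");

        // Pre-Java 8, each callback needed an anonymous inner class.
        // With a lambda, the same intent fits on one line:
        ides.forEach(name -> System.out.println("IDE: " + name));

        // Streams pair naturally with lambdas for filtering and counting.
        long count = ides.stream()
                         .filter(name -> name.startsWith("E"))
                         .count();
        System.out.println(count + " IDE name(s) start with 'E'"); // 1
    }
}
```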


CXF architecture

Packt
07 Jan 2010
8 min read
CXF's overall architecture comprises the following main components: the bus, the frontends, and the messaging and interceptor layer.

Bus

The bus is the backbone of the CXF architecture. The CXF bus is comprised of a Spring-based configuration file, namely cxf.xml, which is loaded upon servlet initialization through SpringBusFactory. It defines a common context for all the endpoints. It wires all the runtime infrastructure components and provides a common application context. The SpringBusFactory scans and loads the relevant configuration files in the META-INF/cxf directory placed in the classpath, and accordingly builds the application context from the following files:

- META-INF/cxf/cxf.xml
- META-INF/cxf/cxf-extension.xml
- META-INF/cxf/cxf-property-editors.xml

These XML files are part of the installation bundle's core CXF library JAR. So we know that CXF internally uses Spring for its configuration. The following XML fragment shows the bus definition in the cxf.xml file:

```
<bean id="cxf" class="org.apache.cxf.bus.CXFBusImpl" />
```

The core bus component is CXFBusImpl. The class acts primarily as an interceptor provider for incoming and outgoing requests to a web service endpoint. These interceptors, once defined, are available to all the endpoints in that context. The cxf.xml file also defines other infrastructure components, such as BindingFactoryManager, ConduitFactoryManager, and so on. These components are made available as bus extensions; one can access these infrastructure objects using the getExtension method. These infrastructure components are registered so as to get and update various service endpoint level parameters, such as service binding, transport protocol, conduits, and so on.

The CXF bus architecture can be overridden, but one must apply caution when overriding the default bus behavior. Since the bus is the core component that loads the CXF runtime, many shared objects are also loaded as part of this runtime; you want to make sure that these objects are still loaded when overriding the existing bus implementation. You can extend the default bus to include your own custom components or service objects, such as factory managers. You can also add interceptors to the bus bean; interceptors defined at the bus level are available to all the endpoints. The following code shows how to create a custom bus:

```
SpringBusFactory.createBus("mycxf.xml");
```

The SpringBusFactory class is used to create a bus. You can complement or overwrite the bean definitions that the original cxf.xml file would use. For CXF to load the mycxf.xml file, it has to be in the classpath, or you can use a factory method to load the file. The following code illustrates the use of interceptors at the bus level:

```
<bean id="cxf" class="org.apache.cxf.bus.spring.SpringBusImpl">
    <property name="outInterceptors">
        <list>
            <ref bean="myLoggingInterceptor"/>
        </list>
    </property>
</bean>
<bean id="myLoggingInterceptor"
      class="org.mycompany.com.cxf.logging.LoggingInterceptor">
    ...
</bean>
```

The preceding bus definition adds the logging interceptor, which will perform logging for all outgoing messages.

Frontend

CXF provides the concept of frontend modeling, which lets you create web services using different frontend APIs. The APIs let you create a web service using simple factory beans and a JAX-WS implementation. They also let you create dynamic web service clients. The primary frontend supported by CXF is JAX-WS.

JAX-WS

JAX-WS is a specification that establishes the semantics to develop, publish, and consume web services. JAX-WS simplifies web service development.
It defines Java-based APIs that ease the development and deployment of web services. The specification supports WS-I Basic Profile 1.1, which addresses web service interoperability; it effectively means a web service can be invoked or consumed by a client written in any language. JAX-WS also builds on standards such as JAXB and SAAJ, and CXF provides support for the complete JAX-WS stack.

JAXB provides data binding capabilities through a convenient way to map an XML schema to a representation in Java code. JAXB shields the developer from the conversion of XML schema messages in SOAP messages to Java code, without the developer ever seeing XML or SOAP parsing; the JAXB specification defines the binding between Java and XML Schema. SAAJ provides a standard way of dealing with XML attachments contained in a SOAP message.

JAX-WS also speeds up web service development by providing a library of annotations to turn plain old Java classes into web services, and it specifies a detailed mapping from a service defined in WSDL to the Java classes that will implement that service. Any complex types defined in WSDL are mapped into Java classes following the mapping defined by the JAXB specification.

As discussed earlier, two approaches for web service development exist: Code-First and Contract-First. With JAX-WS, you can perform web service development using either approach, depending on the nature of the application.

With the Code-First approach, you start by developing a Java class and interface and annotating them as a web service. This approach is particularly useful where Java implementations are already available and you need to expose them as services. You typically create a Service Endpoint Interface (SEI) that defines the service methods, and an implementation class that implements the SEI methods. The consumer of the web service uses the SEI to invoke the service functions. The SEI directly corresponds to a wsdl:portType element, and the methods defined by the SEI correspond to wsdl:operation elements:

```
@WebService
public interface OrderProcess {
    String processOrder(Order order);
}
```

JAX-WS makes use of annotations to convert an SEI or a Java class to a web service. In the above example, the @WebService annotation above the interface declaration marks the interface as a web service interface, or Service Endpoint Interface.

In the Contract-First approach, you start with the existing WSDL contract and generate Java classes to implement the service. The advantage is that you are sure about what to expose as a service, since you define the appropriate WSDL contract first. Furthermore, the contract definitions can be made consistent with respect to data types, so that they can be easily converted into Java objects without any portability issues.

WSDL contains different elements that can be directly mapped to the Java classes that implement the service. For example, the wsdl:portType element is directly mapped to the SEI, type elements are mapped to Java class types through the Java Architecture for XML Binding (JAXB), and the wsdl:service element is mapped to a Java class that is used by a consumer to access the web service. The WSDL2Java tool can be used to generate a web service from WSDL; it has various options to generate the SEI and the implementation web service class. As a developer, you need to provide the method implementations for the generated classes. If the WSDL includes custom XML Schema types, they are converted into their equivalent Java classes.
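Returning briefly to the code-first example above: the OrderProcess SEI needs an implementation bean. A minimal sketch, assuming the OrderProcess interface and Order class from the example are available in the same package (the package name in endpointInterface and the return value are assumptions for illustration):

```
import javax.jws.WebService;

// The endpointInterface attribute ties this bean to the SEI;
// com.example.order is a hypothetical package used only for illustration.
@WebService(endpointInterface = "com.example.order.OrderProcess")
public class OrderProcessImpl implements OrderProcess {

    @Override
    public String processOrder(Order order) {
        // Real logic would persist the order and return its identifier.
        return "ORDER-0001";
    }
}
```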
Simple frontend

Apart from the JAX-WS frontend, CXF also supports what is known as the 'simple frontend'. The simple frontend provides simple components, or Java classes, that use reflection to build and publish web services. It is simple because we do not use any annotations to create web services. In JAX-WS, we have to annotate a Java class to denote it as a web service and use tools to convert between a Java object and WSDL. The simple frontend instead uses factory components to create a service and the client, by means of the Java reflection API. The following code shows a web service created using the simple frontend:

```
// Build and publish the service
OrderProcessImpl orderProcessImpl = new OrderProcessImpl();
ServerFactoryBean svrFactory = new ServerFactoryBean();
svrFactory.setServiceClass(OrderProcess.class);
svrFactory.setAddress("http://localhost:8080/OrderProcess");
svrFactory.setServiceBean(orderProcessImpl);
svrFactory.create();
```

Messaging and Interceptors

One of the important elements of the CXF architecture is the interceptor component. Interceptors are components that intercept the messages exchanged or passed between web service clients and server components. In CXF, this is implemented through the concept of interceptor chains, and interceptor chaining is the core functionality of the CXF runtime.

The interceptors act on the messages that are sent to and received from the web service, and are processed in chains. Each interceptor in a chain is configurable, and the user has the ability to control its execution.

The core of the framework is the Interceptor interface. It defines two abstract methods: handleMessage and handleFault. Each method takes an object of type Message as a parameter. A developer implements handleMessage to process or act upon the message; handleFault is implemented to handle error conditions. Interceptors are usually processed in chains, with every interceptor in the chain performing some processing on the message in sequence as the chain moves forward. Whenever an error condition arises, the handleFault method is invoked on each interceptor and the chain unwinds, or moves backwards.

Interceptors are often organized or grouped into phases. Interceptors providing common functionality can be grouped into one phase, and each phase performs specific message processing. Each phase is then added to the interceptor chain; the chain, therefore, is a list of ordered interceptor phases. Chains can be created for both inbound and outbound messages. A typical web service endpoint will have three interceptor chains:

- Inbound messages chain
- Outbound messages chain
- Error messages chain

There are built-in interceptors, such as logging, security, and so on, and developers can also choose to create custom interceptors.
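A custom interceptor can be as small as the following sketch. It assumes the CXF 2.x interceptor API described above (AbstractPhaseInterceptor and the Phase constants); the class name and the logging it performs are illustrative only:

```
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// A minimal inbound interceptor: registered in the RECEIVE phase,
// it runs before the message body has been parsed.
public class AuditInterceptor extends AbstractPhaseInterceptor<Message> {

    public AuditInterceptor() {
        super(Phase.RECEIVE);
    }

    public void handleMessage(Message message) throws Fault {
        // Exchange-level metadata is available even this early in the chain.
        System.out.println("Inbound message on: "
                + message.get(Message.REQUEST_URI));
    }

    public void handleFault(Message message) {
        // Invoked as the chain unwinds after an error.
        System.out.println("Fault while processing inbound message");
    }
}
```

Such an interceptor would be registered on the bus or endpoint, for example as an entry in the inInterceptors list of the bus bean shown earlier.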


Highcharts

Packt
20 Aug 2013
5 min read
(For more resources related to this topic, see here.)

Creating a line chart with a time axis and two Y axes

We will now create the code for this chart:

1. You start the creation of your chart by implementing the constructor of your Highcharts chart:

```
var chart = $('#myFirstChartContainer').highcharts({});
```

2. We will now set the different sections inside the constructor, starting with the chart section. Since we'll be creating a line chart, we define the type element with the value line. Then, we implement the zoom feature by setting the zoomType element. You can set the value to x, y, or xy, depending on which axes you want to be able to zoom; for our chart, we will allow zooming on the x axis:

```
chart: {
    type: 'line',
    zoomType: 'x'
},
```

3. We define the title of our chart:

```
title: {
    text: 'Energy consumption linked to the temperature'
},
```

4. Now, we create the x axis. We set the type to datetime because we are using time data, and we remove the title by setting text to null (you need to set a null value in order to disable the title of the xAxis):

```
xAxis: {
    type: 'datetime',
    title: {
        text: null
    }
},
```

5. We then configure the Y axes. We add two Y axes with the titles Temperature and Energy consumed (in KWh), each with a minimum value of 0. We set the opposite parameter to true for the second axis in order to place it on the right side:

```
yAxis: [{
    title: {
        text: 'Temperature'
    },
    min: 0
}, {
    title: {
        text: 'Energy consumed (in KWh)'
    },
    opposite: true,
    min: 0
}],
```

6. We will now customize the tooltip section. We use the crosshairs option in order to have a line for our tooltip that follows the values of both series, and we set the shared value to true in order to show the values of both series in the same tooltip:

```
tooltip: {
    crosshairs: true,
    shared: true
},
```

7. Finally, we set the series section. For datetime axes, you can define your series in two different ways: the first when your data follows a regular time interval, and the second when it doesn't necessarily do so. We will use both ways, setting the two series with two different options.

The first series follows a regular interval. For this series, we set the pointInterval parameter, which defines the data interval in milliseconds; for our chart, we set an interval of one day. We set the pointStart parameter to the date of the first value, and then set the data section with our values. The tooltip section is set with the valueSuffix element, which defines the suffix to be added after the value inside our tooltip. We set the yAxis element to the axis we want to associate with the series; because we want to attach this series to the first axis, we set the value to 0 (zero).

For the second series, we use the second way, because our data doesn't necessarily follow a regular interval (you can also use this way even if your data follows a regular interval). We set our data as pairs, where the first element represents the date and the second element represents the value. We also override the tooltip section of the second series, and we set the yAxis element to 1 because we want to associate this series with the second axis. Note that you can also set your date values with a timestamp value instead of using the JavaScript function Date.UTC.
```
series: [{
    name: 'Temperature',
    pointInterval: 24 * 3600 * 1000,
    pointStart: Date.UTC(2013, 0, 01),
    data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
    tooltip: {
        valueSuffix: ' °C'
    },
    yAxis: 0
}, {
    name: 'Electricity consumption',
    data: [
        [Date.UTC(2013, 0, 01), 8.1],
        [Date.UTC(2013, 0, 02), 6.2],
        [Date.UTC(2013, 0, 03), 7.3],
        [Date.UTC(2013, 0, 05), 7.1],
        [Date.UTC(2013, 0, 06), 12.3],
        [Date.UTC(2013, 0, 07), 10.2]
    ],
    tooltip: {
        valueSuffix: ' KWh'
    },
    yAxis: 1
}]
```

You should have this as the final code:

```
$(function () {
    var chart = $('#myFirstChartContainer').highcharts({
        chart: {
            type: 'line',
            zoomType: 'x'
        },
        title: {
            text: 'Energy consumption linked to the temperature'
        },
        xAxis: {
            type: 'datetime',
            title: {
                text: null
            }
        },
        yAxis: [{
            title: {
                text: 'Temperature'
            },
            min: 0
        }, {
            title: {
                text: 'Energy consumed (in KWh)'
            },
            opposite: true,
            min: 0
        }],
        tooltip: {
            crosshairs: true,
            shared: true
        },
        series: [{
            name: 'Temperature',
            pointInterval: 24 * 3600 * 1000,
            pointStart: Date.UTC(2013, 0, 01),
            data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
            tooltip: {
                valueSuffix: ' °C'
            },
            yAxis: 0
        }, {
            name: 'Electricity consumption',
            data: [
                [Date.UTC(2013, 0, 01), 8.1],
                [Date.UTC(2013, 0, 02), 6.2],
                [Date.UTC(2013, 0, 03), 7.3],
                [Date.UTC(2013, 0, 05), 7.1],
                [Date.UTC(2013, 0, 06), 12.3],
                [Date.UTC(2013, 0, 07), 10.2]
            ],
            tooltip: {
                valueSuffix: ' KWh'
            },
            yAxis: 1
        }]
    });
});
```

Loading the page should now display the expected result: a line chart with a time axis, two Y axes, and a shared crosshair tooltip.

Summary

In this article, we created a line chart with a time axis and two Y axes, exercising some of the most important Highcharts features along the way: datetime axes, multiple Y axes, shared tooltips, and both regular-interval and point-by-point series data.

Resources for Article:

Further resources on this subject:

- Converting tables into graphs (Advanced) [Article]
- Line, Area, and Scatter Charts [Article]
- Data sources for the Charts [Article]


Tips for Deploying Sakai

Packt
19 Jul 2011
10 min read
Sakai CLE Courseware Management: The Official Guide

The benefits of knowing that frameworks exist

Sakai is built on top of numerous third-party open source libraries and frameworks. Why write code for converting XML text files to Java objects, or for connecting to and managing databases, when others have specialized in these technical problems and found appropriate and consistent solutions? This reuse of code saves effort and decreases the complexity of creating new functionality. Using third-party frameworks has other benefits as well: you can choose the best from a series of external libraries, increasing the quality of your own product, and the external frameworks have their own communities who test them actively. Outsourcing generic requirements, such as the rudiments of generating indexes for searching, allows the Sakai community to concentrate on higher-level goals, such as building new tools.

For developers, but also for course instructors and system administrators, it is useful background to know, roughly, what the underlying frameworks do:

- For a developer, it makes sense to look at reuse first. Why re-invent the wheel? Why write your own code for manipulating XML files when other developers have already extensively tried, tested, and deployed an existing framework? Knowing what others have done saves time. This knowledge is especially handy for new-to-Sakai developers who could be tempted to write from scratch.
- For the system administrator, each framework has its own strengths, weaknesses, and terminology. Understanding the terminology and technologies gives you a head start in debugging glitches and communicating with the developers.
- For a manager, knowing that Sakai has chosen solid and well-respected open source libraries should help influence buying decisions in favor of this platform.
- For the course instructor, knowing which frameworks exist and what their potential is helps inform the debate about adding interesting new features. Knowing what Sakai uses and what is possible sharpens the instructor's focus and ability to define realistic requirements.
- For the software engineering student, Sakai represents a collection of best practices and frameworks that will make the student more saleable in the labor market.

Using the third-party frameworks

This section details frameworks that Sakai is heavily dependent on: Spring (http://www.springsource.org/), Hibernate (http://www.hibernate.org/), and numerous Apache projects (http://www.apache.org/). Generally, Java application builders understand these frameworks, which makes it relatively easy to hire programmers with experience. All of these projects are open source, and their use does not clash with Sakai's open source license (http://www.opensource.org/licenses/ecl2.php).

The benefit of using Spring

Spring is a tightly architected set of frameworks designed to support the main goals of building modern business applications. Spring has a broad set of abilities, from connecting to databases, to transaction management, business logic, validation, security, and remote access, and it fully supports the most modern architectural design patterns. The framework takes away a lot of drudgery for a programmer and enables pieces of code to be plugged in or removed by editing XML configuration files rather than refactoring the raw code base itself. You can see this for yourself in the user provider within Sakai.
When you log in, you may want to validate the user credentials using a piece of code that connects to a directory service such as LDAP, or replace that code with another piece that gets credentials from an external database, or even one that reads from a text file. This is possible thanks to Sakai's services relying on Spring: you can give (inject) the wanted code to a service manager, which then calls the code when needed.

In Sakai terminology, within a running application, a service manager manages services for a particular type of data. For example, a course service manager allows programmers to add, modify, or delete courses; a user service manager does the same for users. Spring is responsible for deciding which pieces of code it injects into which service manager, and developers do not need to program the heavy lifting, only the configuration. The advantage is that later, as part of adapting Sakai to a specific organization, system administrators can also reconfigure authentication or many other services to suit local preferences, without recompilation.

Spring also abstracts away the underlying differences between databases. This allows you to program once and run against MySQL, Oracle, and so on, without taking the databases' differences into account. Spring can sit on top of Hibernate and more limited frameworks, such as JDBC (yet another standard for connecting to databases). This adaptability gives architects more freedom to change and refactor (the process of changing the structure of the code to improve it) without affecting other parts of the code. As Sakai grows in code size, Spring and good architectural design patterns diminish the chance of breaking older code.

To sum up, the Spring framework makes programming more efficient, and Sakai relies on it as its main framework. Many tasks that programmers would previously have hard coded are now delegated to XML configuration files.

Hibernate for database coupling

Hibernate is all about coupling databases to code. Hibernate is a powerful, high-performance object/relational persistence and query service. That is to say, a designer describes Java objects in a specific structure within XML files; after reading these files, Hibernate gains the ability to save or load instances of the objects from the database. Hibernate supports complex data structures, such as Java collections and arrays of objects. Again, it is an external framework that does the programmer's drudge work, mostly via XML configuration.

The many Apache frameworks

Sakai is rightfully biased towards projects associated with the Apache Software Foundation (ASF) (http://www.apache.org/). Sakai instances run within a Tomcat server, and many institutes place an Apache web server in front of the Tomcat server to deal with dishing out static content (content that does not change, such as an ordinary web page), SSL/TLS, ease of configuration, and log parsing. Further, individual internal and external frameworks make use of the Apache Commons frameworks (http://commons.apache.org/), which have reusable libraries for all kinds of specific needs, such as validation, encoding, e-mailing, uploading files, and so on. Even if a developer does not use the Commons libraries directly, they are often called by other frameworks and have a significant impact on the wellbeing, for example the security, of a Sakai instance.
To ensure look-and-feel consistency, designers used common technologies, such as Apache Velocity, Apache Wicket, Apache MyFaces (an implementation of JavaServer Faces), Reasonable Server Faces (RSF), and plain old JavaServer Pages (JSP). Apache Velocity places much of the look and feel in text templates that non-programmers can manipulate with text editors. The use of Velocity has mostly been superseded by JSF; however, as Sakai moves forward, technologies such as RSF and Wicket (http://wicket.apache.org/) are playing a predominant role.

Sakai uses XML as the format of choice to support much of its functionality: configuration files, the backing up of sites, the storage of internal data representations, RSS feeds, and so on. Considerable runtime effort therefore goes into converting to and from XML and translating XML into other formats. Here are the gory technical details. There are two main methods for parsing XML:

- You can parse XML into a Document Object Model (DOM) in memory, which you can later traverse and manipulate programmatically.
- XML can also be parsed via an event-driven mechanism, where Java methods are called, for example, when an XML tag begins or ends, or when a tag has a body. The Simple API for XML (SAX) libraries support this second approach in Java.

Generally, it is easier to program with DOM than SAX, but because DOM needs a model of the XML in memory, it is by its nature more memory intensive. Why does that matter? In large-scale deployments, the amount of memory tends to limit a Sakai instance's performance, rather than the computational power of the servers. Therefore, as Sakai uses XML heavily, whenever possible a developer should consider using SAX and avoid keeping the whole model of an XML document in memory (a short SAX sketch appears at the end of this article).

Looking at dependencies

As Sakai adapts and expands its feature set, expect the range of external libraries to expand. The following table mentions the libraries used, links to the relevant home pages, and gives a very brief description of their functionality.

| Name | Homepage | Description |
| --- | --- | --- |
| Apache-Axis | http://ws.apache.org/axis/ | SOAP web services |
| Apache-Axis2 | http://ws.apache.org/axis2 | SOAP and REST web services; a total rewrite of Apache-Axis. Not currently used within Entity Broker, a Sakai-specific component. |
| Apache Commons | http://commons.apache.org | Lower-level utilities |
| Batik | http://xmlgraphics.apache.org/batik/ | A Java-based toolkit for applications or applets that want to use images in the Scalable Vector Graphics (SVG) format |
| Commons-beanutils | http://commons.apache.org/beanutils/ | Methods for Java bean manipulation |
| Commons-codec | http://commons.apache.org/codec | Implementations of common encoders and decoders, such as Base64, Hex, Phonetic, and URLs |
| Commons-digester | http://commons.apache.org/digester | Common methods for initializing objects from XML configuration |
| Commons-httpclient | http://hc.apache.org/httpcomponents-client | Supports HTTP-based standards with the client side in mind |
| Commons-logging | http://commons.apache.org/logging/ | Logging support |
| Commons-validator | http://commons.apache.org/validator | Support for verifying the integrity of received data |
| Excalibur | http://excalibur.apache.org | Utilities |
| FOP | http://xmlgraphics.apache.org/fop | Print formatting ready for conversion to PDF and a number of other formats |
| Hibernate | http://www.hibernate.org | ORM database framework |
| Log4j | http://logging.apache.org/log4j | Logging |
| Jackrabbit | http://jackrabbit.apache.org | Content repository (see http://jcp.org/en/jsr/detail?id=170): a hierarchical content store with support for structured and unstructured content, full-text search, versioning, transactions, observation, and more |
| James | http://james.apache.org | A mail server |
| Java Server Faces | http://java.sun.com/javaee/javaserverfaces | Simplifies building user interfaces for JavaServer applications |
| Lucene | http://lucene.apache.org | Indexing |
| MyFaces | http://myfaces.apache.org | JSF implementation with implementation-specific widgets |
| Pluto | http://portals.apache.org/pluto | The reference implementation of the Java Portlet Specification |
| Quartz | http://www.opensymphony.com/quartz | Scheduling |
| Reasonable Server Faces (RSF) | http://www2.caret.cam.ac.uk/rsfwiki | Built on the Spring framework; simplifies the building of views via XHTML |
| ROME | https://rome.dev.java.net | A set of open source Java tools for parsing, generating, and publishing RSS and Atom feeds |
| SAX | http://www.saxproject.org | Event-based XML parser |
| STRUTS | http://struts.apache.org/ | Heavyweight MVC framework; not used in the core of Sakai, but some components are used as part of the occasional tool |
| Spring | http://www.springsource.org | Used extensively within the code base of Sakai; a broad framework designed to make building business applications simpler |
| Tomcat | http://tomcat.apache.org | Servlet container |
| Velocity | http://velocity.apache.org | Templating |
| Wicket | http://wicket.apache.org | Web app development framework |
| Xalan | http://xml.apache.org/xalan-j | An XSLT (Extensible Stylesheet Language Transformations) processor for transforming XML documents into HTML, text, or other XML document types |
| Xerces | http://xerces.apache.org/xerces-j | XML parser |

For the reader who has downloaded and built Sakai from source code, you can automatically generate a list of current external dependencies via Maven. First build the binary version, then print out the dependency report. To achieve this from within the top-level directory of the source code, run the following commands:

```
mvn -Ppack-demo install
mvn dependency:list
```

The table is based on an abbreviated version of the dependency list, generated from the source code in March 2009. For those of you wishing to dive into the depths of Sakai, you can search the home pages mentioned in the table.

In summary, Spring is the most important underlying third-party framework, and Sakai spends a lot of its time manipulating XML.
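The DOM-versus-SAX point above is easiest to see in code. Here is a minimal SAX sketch using the standard javax.xml.parsers API (the element name and file name are arbitrary illustrations, not Sakai internals); note that nothing beyond the current element is ever held in memory:

```
import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Counts <site> elements without ever building an in-memory document tree.
public class CountSitesHandler extends DefaultHandler {

    private int siteCount;

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attributes) {
        // Called once per opening tag as the parser streams through the file.
        if ("site".equals(qName)) {
            siteCount++;
        }
    }

    public static void main(String[] args) throws Exception {
        CountSitesHandler handler = new CountSitesHandler();
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new File("sites.xml"), handler);
        System.out.println("Sites found: " + handler.siteCount);
    }
}
```

The equivalent DOM approach would first load the entire document into memory before any counting could begin, which is exactly the cost the article warns against for large deployments.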

Creating a real-time widget

Packt
22 Apr 2014
11 min read
(For more resources related to this topic, see here.)

The configuration options and well-thought-out methods of socket.io make for a highly versatile library. Let's explore the dexterity of socket.io by creating a real-time widget that can be placed on any website and instantly interfaced with a remote Socket.IO server. It will provide a constantly updated total of all users currently on the site; we'll name it the live online counter (loc for short).

Our widget is for public consumption and should require only basic knowledge to use, so we want a very simple interface. Loading our widget through a script tag and then initializing it with a prefabricated init method would be ideal (this allows us to predefine properties before initialization if necessary).

Getting ready

We'll need to create a new folder with some new files: widget_server.js, widget_client.js, server.js, and index.html.

How to do it...

Let's create the index.html file to define the kind of interface we want, as follows:

```
<html>
<head>
  <style>
    #_loc {color:blue;} /* widget customization */
  </style>
</head>
<body>
  <h1> My Web Page </h1>
  <script src="http://localhost:8081"></script>
  <script>
    locWidget.init();
  </script>
</body>
</html>
```

The localhost:8081 domain is where we'll be serving a concatenated script of both the client-side socket.io code and our own widget code. By default, Socket.IO hosts its client-side library over HTTP while simultaneously providing a WebSocket server at the same address, in this case localhost:8081. See the There's more… section for tips on how to configure this behavior.

Let's create our widget code, saving it as widget_client.js:

```
;(function() {
  window.locWidget = {
    style: 'position:absolute;bottom:0;right:0;font-size:3em',
    init: function () {
      var socket = io.connect('http://localhost:8081'),
          style = this.style;
      socket.on('connect', function () {
        var head = document.head,
            body = document.body,
            loc = document.getElementById('_lo_count');
        if (!loc) {
          head.innerHTML += '<style>#_loc{' + style + '}</style>';
          loc = document.createElement('div');
          loc.id = '_loc';
          loc.innerHTML = '<span id=_lo_count></span>';
          body.appendChild(loc);
        }
        socket.on('total', function (total) {
          loc.innerHTML = total;
        });
      });
    }
  };
}());
```

We need to test our widget from multiple domains.
We'll just implement a quick HTTP server (server.js) to serve index.html, so we can access it at http://127.0.0.1:8080 and http://localhost:8080, as shown in the following code:

```
var http = require('http');
var fs = require('fs');
var clientHtml = fs.readFileSync('index.html');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-type': 'text/html'});
  response.end(clientHtml);
}).listen(8080);
```

Finally, for the server side of our widget, we write the following code in the widget_server.js file (note that the HTTP handler serves the concatenated clientScript defined at the top):

```
var io = require('socket.io')(),
    totals = {},
    clientScript = Buffer.concat([
      require('socket.io/node_modules/socket.io-client').source,
      require('fs').readFileSync('widget_client.js')
    ]);

io.static(false);
io.attach(require('http').createServer(function (req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

io.on('connection', function (socket) {
  var origin = socket.request.socket.domain || 'local';
  totals[origin] = totals[origin] || 0;
  totals[origin] += 1;
  socket.join(origin);
  io.sockets.to(origin).emit('total', totals[origin]);

  socket.on('disconnect', function () {
    totals[origin] -= 1;
    io.sockets.to(origin).emit('total', totals[origin]);
  });
});
```

To test it, we need two terminals. In the first one, we execute the following command:

```
node widget_server.js
```

In the other terminal, we execute the following command:

```
node server.js
```

We point our browser to http://localhost:8080, then open a new tab or window and navigate to http://localhost:8080 again; the counter rises by one. If we close either window, it drops by one. We can also navigate to http://127.0.0.1:8080 to emulate a separate origin; the counter at this address is independent from the counter at http://localhost:8080.
All requests that use the http:// protocol will be handled by the server we pass to io.attach, and all ws:// protocols will be handled by socket.io (whether or not the browser supports the ws:// protocol). We're only using the http module once, so we require it within the io.attach call; we use its createServer method to serve all requests with our clientScript variable.

Now, the stage is set for the actual socket action. We wait for a connection by listening for the connection event on io.sockets. Inside the event handler, we use a few as yet undiscussed socket.io qualities. A WebSocket is formed when a client initiates a handshake request over HTTP and the server responds affirmatively. We can access the original request object with socket.request. The request object itself has a socket (this is the underlying HTTP socket, not our socket.io socket), which we can access via socket.request.socket. The socket contains the domain a client request came from. We load socket.request.socket.domain into our origin object unless it's null or undefined, in which case we say the origin is 'local'. We extract (and simplify) the origin object because it allows us to distinguish between websites that use the widget, enabling site-specific counts.

To keep count, we use our totals object and add a property for every new origin, with an initial value of 0. On each connection, we add 1 to totals[origin]; for the disconnect event, we subtract 1 from totals[origin]. If these values were exclusively for server use, our solution would be complete. However, we need a way to communicate the total connections to the client, on a site-by-site basis.

Since version 0.7, Socket.IO has had a handy feature that allows us to group sockets into rooms by using the socket.join method. We cause each socket to join a room named after its origin, then we use the io.sockets.to(origin).emit method to instruct socket.io to emit only to sockets that belong to the originating site's room. In both the io.sockets connection and socket disconnect events, we emit our specific totals to the corresponding sockets, updating each client with the total number of connections to the site the user is on.

The widget_client.js file simply creates a div element called #_loc and updates it with any new totals it receives from widget_server.js.

There's more...
Let's look at how our app could be made more scalable, as well as looking at another use for WebSockets.

Preparing for scalability
If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed along with the redis module. We'll alter our totals variable so it contains a Redis client instead of a JavaScript object:

var io = require('socket.io')(),
    totals = require('redis').createClient(),
    url = require('url'), // core module; needed for url.parse below
    //other variables

Now, we modify our connection event handler as shown in the following code:

io.sockets.on('connection', function (socket) {
  var origin = (socket.handshake.xdomain)
    ? url.parse(socket.handshake.headers.origin).hostname
    : 'local';
  socket.join(origin);
  totals.incr(origin, function (err, total) {
    io.sockets.to(origin).emit('total', total);
  });
  socket.on('disconnect', function () {
    totals.decr(origin, function (err, total) {
      io.sockets.to(origin).emit('total', total);
    });
  });
});

Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin.
Redis automatically creates the key if it doesn't exist. When a client disconnects, we do the reverse and readjust totals using DECR.

WebSockets as a development tool
When developing a website, we often change something small in our editor, upload our file (if necessary), refresh the browser, and wait to see the results. What if the browser would refresh automatically whenever we saved any file relevant to our site? We can achieve this with the fs.watch method and WebSockets. The fs.watch method monitors a directory, executing a callback whenever a change to any file in the folder occurs (but it doesn't monitor subfolders).

The fs.watch method is dependent on the operating system. To date, fs.watch has also been historically buggy (mostly under Mac OS X). Therefore, until further advancements, fs.watch is suited purely to development environments rather than production (you can monitor how fs.watch is doing by viewing the open and closed issues at https://github.com/joyent/node/search?q=fs.watch&ref=cmdform&state=open&type=Issues).

Our development tool could be used alongside any framework, from PHP to static files. For the server counterpart of our tool, we'll configure watcher.js:

var io = require('socket.io')(),
    fs = require('fs'),
    watcher = function () {
      var socket = io.connect('ws://localhost:8081');
      socket.on('update', function () {
        location.reload();
      });
    },
    clientScript = Buffer.concat([
      require('socket.io/node_modules/socket.io-client').source,
      Buffer(';(' + watcher + '());')
    ]);

io.static(false);
io.attach(require('http').createServer(function(req, res){
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

fs.watch('content', function (e, f) {
  if (f[0] !== '.') {
    io.sockets.emit('update');
  }
});

Most of this code is familiar. We make a socket.io server (on a different port to avoid clashing), generate a concatenated socket.io.js plus client-side watcher code file, and deliver it via our attached server. Since this is a quick tool for our own development uses, our client-side code is written as a normal JavaScript function (our watcher variable), converted to a string while being wrapped in self-calling function code, and then changed to a Buffer so it's compatible with Buffer.concat.

The last piece of code calls the fs.watch method, where the callback receives the event name (e) and the filename (f). We check that the filename isn't a hidden dotfile. During a save event, some filesystems or editors will change the hidden files in the directory, thus triggering multiple callbacks and sending several messages at high speed, which can cause issues for the browser.

To use it, we simply place it as a script within every page that is served (probably using server-side templating). However, for demonstration purposes, we simply place the following code into content/index.html:

<script src="http://localhost:8081/socket.io/watcher.js"></script>

Once we fire up server.js and watcher.js, we can point our browser to http://localhost:8080 and see the familiar excited Yay!. Any changes we make and save (either to index.html, styles.css, script.js, or the addition of new files) will be almost instantly reflected in the browser. The first change we can make is to get rid of the alert box in the script.js file so that the changes can be seen fluidly.
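If an editor still triggers several events for a single save, a small server-side debounce keeps the browser from being flooded with reload messages. The following is a sketch of one possible tweak to the fs.watch callback; it is not part of the original recipe, and the 100 ms delay is an arbitrary choice:

var debounce; // shared timer handle
fs.watch('content', function (e, f) {
  if (f && f[0] !== '.') {
    clearTimeout(debounce); // collapse bursts of rapid-fire events
    debounce = setTimeout(function () {
      io.sockets.emit('update'); // emit once per burst of changes
    }, 100);
  }
});

The extra f && guard also covers platforms where fs.watch passes a null filename to the callback.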
Summary
We saw how to create a real-time widget in this article. We also used some third-party modules to explore some of the potential of the powerful combination of Node and WebSockets.

Resources for Article:
Further resources on this subject:
Understanding and Developing Node Modules [Article]
So, what is Node.js? [Article]
Setting up Node [Article]


Setting up Node

Packt
07 Aug 2013
10 min read
(For more resources related to this topic, see here.)

System requirements
Node runs on POSIX-like operating systems, the various UNIX derivatives (Solaris, and so on) or workalikes (Linux, Mac OS X, and so on), as well as on Microsoft Windows, thanks to extensive assistance from Microsoft. Indeed, many of the Node built-in functions are direct corollaries to POSIX system calls. It can run on machines both large and small, including the tiny ARM devices such as the Raspberry Pi microscale embeddable computer for DIY software/hardware projects.

Node is now available via package management systems, limiting the need to compile and install from source. Installing from source requires having a C compiler (such as GCC) and Python 2.7 (or later). If you plan to use encryption in your networking code, you will also need the OpenSSL cryptographic library. The modern UNIX derivatives almost certainly come with these, and Node's configure script (see later, when we download and configure the source) will detect their presence. If you should have to install them, Python is available at http://python.org and OpenSSL is available at http://openssl.org.

Installing Node using package managers
The preferred method for installing Node, now, is to use the versions available in package managers, such as apt-get or MacPorts. Package managers simplify your life by helping to maintain the current version of the software on your computer, and by ensuring that dependent packages are updated as necessary, all by typing a simple command such as apt-get update. Let's go over this first.

Installing on Mac OS X with MacPorts
The MacPorts project (http://www.macports.org/) has for years been packaging a long list of open source software packages for Mac OS X, and they have packaged Node. After you have installed MacPorts using the installer on their website, installing Node is pretty much this simple:

$ sudo port search nodejs
nodejs @0.10.6 (devel, net)
    Evented I/O for V8 JavaScript
nodejs-devel @0.11.2 (devel, net)
    Evented I/O for V8 JavaScript
Found 2 ports.
--
npm @1.2.21 (devel)
    node package manager
$ sudo port install nodejs npm
.. long log of downloading and installing prerequisites and Node

Installing on Mac OS X with Homebrew
Homebrew is another open source software package manager for Mac OS X, which some say is the perfect replacement for MacPorts. It is available through their home page at http://mxcl.github.com/homebrew/. After installing Homebrew using the instructions on their website, using it to install Node is as simple as this:

$ brew search node
leafnode    node
$ brew install node
==> Downloading http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
######################################################################## 100.0%
==> ./configure --prefix=/usr/local/Cellar/node/0.10.7
==> make install
==> Caveats
Homebrew installed npm.
We recommend prepending the following path to your PATH environment
variable to have npm-installed binaries picked up:
  /usr/local/share/npm/bin
==> Summary
/usr/local/Cellar/node/0.10.7: 870 files, 16M, built in 21.9 minutes

Installing on Linux from package management systems
While it's still premature for Linux distributions or other operating systems to prepackage Node with their OS, that doesn't mean you cannot install it using the package managers. Instructions on the Node wiki currently list packaged versions of Node for Debian, Ubuntu, OpenSUSE, and Arch Linux.
See https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager. For example, on Debian sid (unstable):

# apt-get update
# apt-get install nodejs # Documentation is great.

And on Ubuntu:

# sudo apt-get install python-software-properties
# sudo add-apt-repository ppa:chris-lea/node.js
# sudo apt-get update
# sudo apt-get install nodejs npm

We can expect in due course that the Linux distros and other operating systems will routinely bundle Node into the OS, as they do with other languages today.

Installing the Node distribution from nodejs.org
The nodejs.org website offers prebuilt binaries for Windows, Mac OS X, Linux, and Solaris. You simply go to the website, click on the Install button, and run the installer. For systems with package managers, such as the ones we've just discussed, it's preferable to use that installation method, because you'll find it easier to stay up-to-date with the latest version. However, on Windows this method may be preferred.

For Mac OS X, the installer is a PKG file giving the typical installation process. For Windows, the installer simply takes you through the typical install wizard process. Once finished with the installer, you have a command-line tool with which to run Node programs. The pre-packaged installers are the simplest way to install Node, on those systems for which they're available.

Installing Node on Windows using Chocolatey Gallery
Chocolatey Gallery is a package management system built on top of NuGet. Using it requires a Windows machine modern enough to support PowerShell and the .NET Framework 4.0. Once you have Chocolatey Gallery (http://chocolatey.org/) installed, installing Node is as simple as this:

C:> cinst nodejs

Installing the StrongLoop Node distribution
StrongLoop (http://strongloop.com) has put together a supported version of Node that is prepackaged with several useful tools. This is a Node distribution in the same sense in which Fedora or Ubuntu are Linux distributions. StrongLoop brings together several useful packages, some of which were written by StrongLoop. StrongLoop tests the packages together, and distributes installable bundles through their website. The packages in the distribution include Express, Passport, Mongoose, Socket.IO, Engine.IO, Async, and Request. We will use all of those modules in this book.

To install, navigate to the company home page and click on the Products link. They offer downloads of precompiled packages for both RPM and Debian Linux systems, as well as Mac OS X and Windows. Simply download the appropriate bundle for your system. For the RPM bundle, type the following:

$ sudo rpm -i bundle-file-name

For the Debian bundle, type the following:

$ sudo dpkg -i bundle-file-name

The Windows or Mac bundles are the usual sort of installable packages for each system. Simply double-click on the installer bundle, and follow the instructions in the install wizard. Once StrongLoop Node is installed, it provides not only the node and npm commands (we'll go over these in a few pages), but also the slnode command. That command offers a superset of the npm commands, such as boilerplate code for modules, web applications, or command-line applications.

Installing from source on POSIX-like systems
Installing the pre-packaged Node distributions is currently the preferred installation method.
However, installing Node from source is desirable in a few situations:

It could let you optimize the compiler settings as desired
It could let you cross-compile, say for an embedded ARM system
You might need to keep multiple Node builds for testing
You might be working on Node itself

Now that you have the high-level view, let's get our hands dirty mucking around in some build scripts. The general process follows the usual configure, make, and make install routine that you may already have performed with other open source software packages. If not, don't worry, we'll guide you through the process. The official installation instructions are in the Node wiki at https://github.com/joyent/node/wiki/Installation.

Installing prerequisites
As noted a minute ago, there are three prerequisites: a C compiler, Python, and the OpenSSL libraries. The Node installation process checks for their presence and will fail if the C compiler or Python is not present. The specific method of installing these is dependent on your operating system. These commands will check for their presence:

$ cc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ python
Python 2.6.6 (r266:84292, Feb 15 2011, 01:35:25)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Installing developer tools on Mac OS X
The developer tools (such as GCC) are an optional installation on Mac OS X. There are two ways to get those tools, both of which are free. On the OS X installation DVD is a directory labeled Optional Installs, in which there is a package installer for, among other things, the developer tools, including Xcode. The other method is to download the latest copy of Xcode (for free) from http://developer.apple.com/xcode/. Most other POSIX-like systems, such as Linux, include a C compiler with the base system.

Installing from source for all POSIX-like systems
First, download the source from http://nodejs.org/download. One way to do this is with your browser, and another way is as follows:

$ mkdir src
$ cd src
$ wget http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
$ tar xvfz node-v0.10.7.tar.gz
$ cd node-v0.10.7

The next step is to configure the source so that it can be built. It is done with the typical sort of configure script, and you can see its long list of options by running the following:

$ ./configure --help

To cause the installation to land in your home directory, run it this way:

$ ./configure --prefix=$HOME/node/0.10.7
..output from configure

If you want to install Node in a system-wide directory, simply leave off the --prefix option, and it will default to installing in /usr/local. After a moment it'll stop, having most likely configured the source tree for installation in your chosen directory. If this doesn't succeed, it will print a message about something that needs to be fixed. Once the configure script is satisfied, compile the software:

$ make
..
a long log of compiler output is printed
$ make install

If you are installing into a system-wide directory, do the last step this way instead:

$ make
$ sudo make install

Once installed, you should make sure to add the installation directory to your PATH variable, as follows:

$ echo 'export PATH=$HOME/node/0.10.7/bin:${PATH}' >>~/.bashrc
$ . ~/.bashrc

For csh users, use this syntax to make an exported environment variable:

$ echo 'setenv PATH $HOME/node/0.10.7/bin:${PATH}' >>~/.cshrc
$ source ~/.cshrc

This should result in some directories like this:

$ ls ~/node/0.10.7/
bin include lib share
$ ls ~/node/0.10.7/bin
node node-waf npm

Maintaining multiple Node installs simultaneously
Normally you won't have multiple versions of Node installed, and doing so adds complexity to your system. But if you are hacking on Node itself, or are testing against different Node releases, or any of several similar situations, you may want to have multiple Node installations. The method to do so is a simple variation on what we've already discussed. If you noticed during the instructions discussed earlier, the --prefix option was used in a way that directly supports installing several Node versions side-by-side in the same directory:

$ ./configure --prefix=$HOME/node/0.10.7

And:

$ ./configure --prefix=/usr/local/node/0.10.7

This initial step determines the install directory. Clearly, when a new version is released, you can change the install prefix to have the new version installed side-by-side with the previous versions. Switching between Node versions is then simply a matter of changing the PATH variable (on POSIX systems), as follows:

$ export PATH=/usr/local/node/0.10.7/bin:${PATH}

It starts to be a little tedious to maintain this after a while. For each release, you have to set up Node, npm, and any third-party modules you desire in your Node install; also, the command shown to change your PATH is not quite optimal. Inventive programmers have created several version managers to make this easier, by automatically setting up not only Node but npm also, and providing commands to change your PATH the smart way:

Node version manager: https://github.com/visionmedia/n
Nodefront, aids in rapid frontend development: http://karthikv.github.io/nodefront/
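Whichever installation method you choose, a quick smoke test confirms that the node and npm binaries are on your PATH and working. The version numbers shown here simply match the release used throughout this article; yours may differ:

$ node --version
v0.10.7
$ npm --version
1.2.21
$ node -e "console.log('Hello from Node ' + process.version)"
Hello from Node v0.10.7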


Agile Works Best in PHP Projects

Packt
30 Sep 2009
8 min read
What is agility
Agility means an effective, that is, a rapid and adaptive, response to change. This requires effective communication among all of the stakeholders. Stakeholders are those who are going to benefit from the project in some form or another. The key stakeholders of the project include the developers and the users. Leaders of the customer organization, as well as the leaders of the software development organization, are also among the stakeholders.

Rather than keeping the customer away, drawing the customer into the team helps the team to be more effective. There are various types of customers: some are annoying, some tend to forget what they once said, and some will help steer the project in the right direction. The idea of drawing the customer into the team is not to let them micromanage the team; rather, it is for them to help the team understand the user requirements better. This needs to be explained to the customers up front if they seem to hinder the project rather than help it. After all, it is the team that consists of the technical experts, so the customer should understand this. Organizing a team in such a manner that it is in control of the work performed is also an important part of being able to adapt to change effectively. Good team dynamics help us respond to changes in a short period of time without any major friction.

Agile processes are based on three key assumptions. These assumptions are as follows:

It is difficult to predict in advance which requirements or customer priorities will change and which will not.
For many types of software, design and construction activities are interwoven. We can use construction to prove the design.
Analysis, design, and testing are not as predictable, from a planning perspective, as we software developers would like them to be.

To manage this unpredictability, the agile process must be adapted incrementally by the project team. Incremental adaptation requires customer feedback, based on the evaluation of delivered software increments or executable prototypes, over short time periods. The length of these periods should be selected based on the nature of the user requirements; it is ideal to restrict each delivery increment to two or three weeks. Agility yields rapid, incremental delivery of software, which makes sure that the client gets to see real, up-and-running software in quick time.

Characteristics of an agile process
An agile process is driven by customer demand. In other words, what is delivered is based on the users' descriptions of what is required. What the project team builds is based on user-given scenarios. The agile process also recognizes that plans are short-lived. What is more important is meeting the users' requirements. Because the real world keeps changing, plans have little meaning; still, we cannot eliminate the need for planning. Constant planning makes sure that we are always sensitive to where we are going, compared to where we are.

Developing software iteratively, with a greater emphasis on construction activities, is another characteristic of the agile process. Construction activities make sure that we have something working all of the time. Activities such as requirements gathering and system modeling are not construction activities: even though they are useful, they do not deliver anything tangible to the users.
On the other hand, activities such as design, design prototyping, implementation, unit testing, and system testing are activities that deliver useful working software to the users. When our focus is on construction activities, it is good practice to deliver the software in multiple increments. This gives us more time to incorporate user feedback as we go deeper into implementing the product, and it ensures that the team delivers a high-quality product at the end of the project's life cycle, because the later increments are based on clearly understood requirements, as opposed to the earlier increments, which may have been delivered with partially understood requirements. As we go deeper into the project's life cycle, we can adapt the project team, as well as the designs and the PHP code that we implement, as changes occur.

Principles of agility
Our highest priority is to satisfy the customer through early and continuous delivery of useful and valuable software. To meet this requirement, we need to be able to embrace change. We welcome changing requirements, even late in the development life cycle; agile processes leverage change for the customer's competitive advantage. In order to attain and sustain a competitive advantage over competitors, the customer needs to be able to change, at will, the software system that he or she uses for the business. If the software is too rigid, there is no way that we can accommodate agility in the software that we develop. Therefore, not only the process, but also the product, needs to be equipped with agile characteristics.

In addition, the customer will need new features of the software within a short period of time, in order to beat competitors with a state-of-the-art software system that facilitates the latest business trends. Therefore, deliver working software as soon as possible; a couple of weeks to a couple of months is always welcome. For example, the customer might want to improve the reports that are generated at the presentation layer based on the business data, and some of this business data may not have been captured in the data model in the initial design. Still, as the software development team, we need to be able to upgrade the design and implement the new set of reports using PHP in a very short period of time. We cannot afford to take months to improve the reports, and our process should be such that we can accommodate this change and deliver it within a short period of time.

In order to make sure that we can understand these types of changes, business people and developers must work together daily throughout the project. When these two parties work together, it becomes very easy for them to understand each other.

The team members are the most important resource in a software project. The motivation and attitude of these team members can be considered the most important aspects that determine the success of the project. If we build the project around motivated individuals, give them the environment and support they need, and trust them to get the job done, the project will be a definite success. Obviously, the individual team members need to work with each other in order to make the project a success. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Even though various electronic forms of communication, such as instant messaging, emails, and forums, make effective communication possible, there is nothing comparable to face-to-face communication.

When it comes to evaluating progress, working software should be the primary measure of progress. We need to make sure that we clearly communicate this to all of the team members: they should always focus on keeping the software they develop in a working state. It is not a bad idea to tie performance reviews and evaluations to this goal, in order to make sure that whatever software the team delivers is working all of the time.

An agile process promotes sustainable development. This means that people are not overworked, and they are not under stress in any condition. The sponsors, managers, developers, and users should be able to maintain a constant pace of development, testing, evaluation, and evolution, indefinitely.

The team should pay continuous attention to technical excellence, because good design enhances agility. Technical reviews with peers and non-technical reviews with users allow corrective action to be taken on any deviation from the expected result. Aggressively seeking technical excellence makes sure that the team stays open-minded and ready to adopt corrective action based on feedback.

With PHP, simplicity is paramount. Simplicity is the art of maximizing the amount of work not done; in other words, it is essential that we prevent unwanted, wasteful work, as well as rework, at all costs. PHP is a very good vehicle for achieving this.

The team members should be smart and capable. If we can get them to reflect, at regular intervals, on how to become more effective, the team can tune and adjust its behavior to enhance the process over time. The best architectures, requirements, and designs emerge from self-organizing teams; therefore, the formation of the team has a direct impact on the quality of the product.


EJB 3.1: Working with Interceptors

Packt
06 Jul 2011
3 min read
EJB 3.1 Cookbook Build real world EJB solutions with a collection of simple but incredibly effective recipes with this book and eBook

The recipes in this article are based largely around a conference registration application, as developed in the first recipe of the previous article on Introduction to Interceptors. It will be necessary to create this application before the other recipes in this article can be demonstrated.

Using interceptors to enforce security
While security is an important aspect of many applications, the use of programmatic security can clutter up business logic. The use of declarative annotations has come a long way in making security easier to use and less intrusive. However, there are still times when programmatic security is necessary. When it is, the use of interceptors can help remove the security code from the business logic.

Getting ready
The process for using an interceptor to enforce security involves:

Configuring and enabling security for the application server
Adding a @DeclareRoles annotation to the target class and the interceptor class
Creating a security interceptor

How to do it...
Configure the application to handle security as detailed in the Configuring the server to handle security recipe.
Add @DeclareRoles("employee") to the RegistrationManager class.
Add a SecurityInterceptor class to the packt package. Inject a SessionContext object into the class; we will use this object to perform programmatic security. Also use the @DeclareRoles annotation.
Next, add an interceptor method, verifyAccess, to the class. Use the SessionContext object and its isCallerInRole method to determine whether the user is in the "employee" role. If so, invoke the proceed method and display a message to that effect. Otherwise, throw an EJBAccessException.

@DeclareRoles("employee")
public class SecurityInterceptor {
    @Resource
    private SessionContext sessionContext;

    @AroundInvoke
    public Object verifyAccess(InvocationContext context) throws Exception {
        System.out.println("SecurityInterceptor: Invoking method: " +
            context.getMethod().getName());
        if (sessionContext.isCallerInRole("employee")) {
            Object result = context.proceed();
            System.out.println("SecurityInterceptor: Returned from method: " +
                context.getMethod().getName());
            return result;
        } else {
            throw new EJBAccessException();
        }
    }
}

Execute the application. The user should be prompted for a username and password, as shown in the following screenshot. Provide a user in the employee role. The application should execute to completion. Depending on the interceptors in place, you will see console output similar to the following:

INFO: Default Interceptor: Invoking method: register
INFO: SimpleInterceptor entered: register
INFO: SecurityInterceptor: Invoking method: register
INFO: InternalMethod: Invoking method: register
INFO: register
INFO: Default Interceptor: Invoking method: create
INFO: Default Interceptor: Returned from method: create
INFO: InternalMethod: Returned from method: register
INFO: SecurityInterceptor: Returned from method: register
INFO: SimpleInterceptor exited: register
INFO: Default Interceptor: Returned from method: register

How it works...
The @DeclareRoles annotation was used to specify that users in the employee role are associated with the class. The isCallerInRole method checked whether the current user is in the employee role. When the target method is called, if the user is authorized, then the InvocationContext's proceed method is executed. If the user is not authorized, the target method is not invoked and an exception is thrown.
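Note that the interceptor does not fire on its own; it must be bound to the target bean. A minimal sketch of a class-level binding is shown below, assuming RegistrationManager is a session bean in the packt package as in the earlier recipes (shown here as stateless for illustration; interceptors can also be bound per method, or declared as default interceptors in ejb-jar.xml):

package packt;

import javax.annotation.security.DeclareRoles;
import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
@DeclareRoles("employee")
@Interceptors(SecurityInterceptor.class) // applies verifyAccess to every business method
public class RegistrationManager {
    public void register(String attendee) {
        // business logic as developed in the conference registration application
    }
}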
See also
EJB 3.1: Controlling Security Programmatically Using JAAS

Looking into Apache Axis2

Packt
09 Oct 2009
11 min read
(For more resources on Axis2, see here.)

Axis2 Architecture
Axis2 is built upon a modular architecture that consists of core modules and non-core modules. The core engine is said to be a pure SOAP processing engine (there is no JAX-RPC concept burnt into the core). Every message coming into the system has to be transformed into a SOAP message before it is handed over to the core engine. An incoming message can either be a SOAP message or a non-SOAP message (such as REST or JSON), but at the transport level it will be converted into a SOAP message. When Axis2 was designed, the following key rules were incorporated into the architecture. These rules were mainly applied to achieve a highly flexible and extensible SOAP processing engine:

Separation of logic and state, to provide a stateless processing mechanism (this is because Web Services are stateless).
A single information model, in order to enable the system to suspend and resume.
Ability to extend support to newer Web Service specifications, with minimal changes made to the core architecture.

The figure below shows all the key components in the Axis2 architecture (including core components as well as non-core components).

Core Modules
XML Processing Model: Managing or processing the SOAP message is the most difficult part of the execution of a message. The efficiency of message processing is the single most important factor that decides the performance of the entire system. Axis 1.x uses DOM as its message representation mechanism. However, Axis2 introduced a fresh XML InfoSet-based representation for SOAP messages, known as AXIOM (AXIs Object Model). AXIOM encapsulates the complexities of efficient XML processing within the implementation.

SOAP Processing Model: This model involves the processing of an incoming SOAP message. The model defines the different stages (phases) that the execution will walk through. The user can then extend the processing model in specific places.

Information Model: This keeps both static and dynamic state, and has the logic to process them. The information model consists of two hierarchies, to keep static and run-time information separate. Service life cycle and service session management are two objectives of the information model.

Deployment Model: The deployment model allows the user to easily deploy services, configure the transports, and extend the SOAP Processing Model. It also introduces newer deployment mechanisms, in order to handle hot deployment, hot updates, and J2EE-style deployment.

Client API: This provides a convenient API for users to interact with Web Services using Axis2. The API consists of two sub-APIs, for average and advanced users. Axis2's default implementation supports all eight MEPs (Message Exchange Patterns) defined in WSDL 2.0. The API also allows easy extension to support custom MEPs.

Transports: Axis2 defines a transport framework that allows the user to use and expose the same service over multiple transports. The transports fit into specific places in the SOAP processing model. The implementation, by default, provides a few common transports (HTTP, SMTP, JMS, TCP, and so on). However, the user can write or plug in custom transports if needed.

XML Processing Model
Axis2 is built on a completely new architecture as compared to Axis 1.x. One of the key reasons for introducing Axis2 was to have a better and more efficient XML processing model.
Axis 1.x used DOM as its XML representation mechanism, which required the complete object hierarchy (corresponding to the incoming message) to be kept in memory. This is not a problem for a message of small size, but for a message of large size it becomes an issue. To overcome this problem, Axis2 has introduced a new XML representation.

AXIOM (AXIs Object Model) forms the basis of the XML representation for every SOAP-based message in Axis2. The advantage of AXIOM over other XML InfoSet representations is that it is based on the PULL parser technique, whereas most others are based on the PUSH parser technique. The main advantage of PULL over PUSH is that in the PULL technique, the invoker has full control over the parser and can request the next event and act upon it, whereas in the case of PUSH, the parser has limited control and delegates most of the functionality to handlers that respond to the events fired during its processing of the document.

Since AXIOM is based on the PULL parser technique, it has on-demand building capability, whereby it will build an object model only if it is asked to do so. If required, one can directly access the underlying PULL parser from AXIOM and use that, rather than build an OM (Object Model).
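To make the on-demand building concrete, the following is a minimal sketch of typical AXIOM usage, assuming the classic StAXOMBuilder API from the Axiom 1.2 line; the file name is just an example. Because the builder wraps a StAX pull parser, the object model is only constructed as far as it is navigated:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;

public class AxiomSketch {
    public static void main(String[] args) throws Exception {
        // Wrap a pull parser around the raw XML stream
        XMLStreamReader parser = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("soap-message.xml"));
        // The builder constructs the tree lazily, as it is traversed
        OMElement root = new StAXOMBuilder(parser).getDocumentElement();
        System.out.println(root.getLocalName());
    }
}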
SOAP Processing Model
Sending and receiving SOAP messages can be considered two of the key jobs of the SOAP-processing engine. The architecture in Axis2 provides two Pipes ('Flows') in order to perform these two basic actions. The AxisEngine, or driver of Axis2, defines two methods, send() and receive(), to implement these two Pipes. The two pipes are named InFlow and OutFlow. The complex Message Exchange Patterns (MEPs) are constructed by combining these two types of pipes. It should be noted that, in addition to these two pipes, there are two others, which help in handling incoming Fault messages and sending Fault messages.

Extensibility of the SOAP processing model is provided through handlers. When a SOAP message is being processed, the handlers that are registered will be executed. The handlers can be registered in global, service, or operation scope, and the final handler chain is calculated by combining the handlers from all the scopes. The handlers act as interceptors: they process parts of the SOAP message and provide quality-of-service features (good examples of quality of service are security and reliability). Usually, handlers work on the SOAP headers, but they may access or change the SOAP body as well.

The concept of a flow is very simple: it constitutes a series of phases, wherein a phase refers to a collection of handlers. Depending on the MEP for a given method invocation, the number of flows associated with it may vary. In the case of an in-only MEP, the corresponding method invocation has only one pipe; that is, the message will only go through the in pipe (InFlow). In the case of an in-out MEP, the message will go through two pipes: the in pipe (InFlow) and the out pipe (OutFlow).

When a SOAP message is being sent, an OutFlow begins. The OutFlow invokes the handlers and ends with a Transport Sender that sends the SOAP message to the target endpoint. The SOAP message is received by a Transport Receiver at the target endpoint, which reads the SOAP message and starts the InFlow. The InFlow consists of handlers and ends with the Message Receiver, which handles the actual business logic invocation.

A phase is a logical collection of one or more handlers, and sometimes a phase itself acts as a handler. Axis2 introduced the phase concept as an easy way of extending core functionalities. In Axis 1.x, we needed to change the global configuration files if we wanted to add a handler into a handler chain, but Axis2 makes this easier by using the concepts of phases and phase rules. Phase rules specify how a given set of handlers, inside a particular phase, is ordered. The figure below illustrates a flow and its phases.

If the message has gone through the execution chain without any problem, then the engine will hand the message over to the message receiver in order to do the business logic invocation. After this, it is up to the message receiver to invoke the service and send the response, if necessary. The figure below shows how the Message Receiver fits into the execution chain.

The two pipes do not differentiate between the server and the client. The SOAP processing model handles the complexity and provides two abstract pipes to the user. The different areas or stages of the pipes are named 'phases' in Axis2. A handler always runs inside a phase, and the phase provides a mechanism to specify the ordering of handlers. Both pipes have built-in phases, and both define areas for User Phases, which can be defined by the user as well.

Information Model
As shown in the figure below, the information model consists of two hierarchies: the Description hierarchy and the Context hierarchy. The Description hierarchy represents the static data that may come from different deployment descriptors. If hot deployment is turned off, then the description hierarchy is not likely to change. If hot deployment is turned on, then we can deploy services while the system is up and running, in which case the description hierarchy is updated with the corresponding data of each service. The Context hierarchy keeps run-time data. Unlike the description hierarchy, the context hierarchy keeps changing once the server starts receiving messages.

These two hierarchies create a model that provides the ability to search for key-value pairs. When values are searched for at a given level, the search moves up the hierarchy until a match is found. In the resulting model, the lower levels override the values present in the upper levels. For example, when a value has been searched for in the Message Context and is not found, it will be searched for in the Operation Context, and so on. The search is first done up the hierarchy, and if the starting point is a Context, the Description hierarchy is searched as well. This allows the user to declare and override values, with the result being a very flexible configuration model. The flexibility could be the Achilles' heel of the system, though, as the search is expensive, especially for something that does not exist.

Deployment Model
The previous versions of Axis failed to address the usability factor involved in the deployment of a Web Service. This was due to the fact that Axis 1.x was released mainly to prove the Web Service concepts. In Axis 1.x, the user had to manually invoke the admin client, update the server classpath, and then restart the server in order to apply the changes. This burdensome deployment model was a definite barrier for beginners. Axis2 is engineered to overcome this drawback and provide a flexible, user-friendly, easily configurable deployment model.
Axis2 deployment introduced a J2EE-like deployment mechanism, wherein the developer can bundle all the class files, library files, resource files, and configuration files together as an archive file, and drop it in a specified location in the file system.

The concept of hot deployment and hot update is not a new technical paradigm, particularly for the Web Service platform. But in the case of Apache Axis, it is a new feature, so when Axis2 was developed, hot deployment features were added to the feature list.

Hot deployment: This refers to the capability to deploy services while the system is up and running. In a real-time system or a business environment, the availability of the system is very important; if the system is unavailable, even for a moment, the loss might be substantial and may affect the viability of the business. At the same time, it may be necessary to add new services to the system. If this can be done without shutting down the servers, it is a great advantage. Axis2 addresses this issue and provides Web Service hot deployment, wherein we need not shut down the system to deploy a new Web Service. All that needs to be done is to drop the required Web Service archive into the services directory in the repository; the deployment model will automatically deploy the service and make it available.

Hot update: This refers to the ability to make changes to an existing Web Service without shutting down the system. This is an essential feature, best suited to a testing environment. It is not advisable to use hot updates in a real-time system, because a hot update could lead the system into an unknown state. Additionally, there is the possibility of losing the existing service data of that service. To prevent this, Axis2 comes with the hot update parameter set to FALSE by default.
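To see how little is needed for a deployable service, here is a sketch of a minimal service archive. The service and class names are invented for illustration; the message receiver shown is the stock RPC receiver that ships with Axis2. A .aar file is simply a JAR-style archive whose META-INF/services.xml describes the service:

HelloService.aar
├── demo/HelloService.class
└── META-INF/services.xml

<!-- META-INF/services.xml -->
<service name="HelloService">
    <description>Deployed by dropping HelloService.aar into the
        services directory of the repository.</description>
    <parameter name="ServiceClass">demo.HelloService</parameter>
    <operation name="sayHello">
        <messageReceiver
            class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
    </operation>
</service>

Dropping this archive into the repository's services directory is all that hot deployment requires; the deployment engine picks it up and exposes the service without a restart.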


Snap – The Code Snippet Sharing Application

Packt
24 Sep 2015
8 min read
In this article by Joel Perras, author of the book Flask Blueprints, we will build our first fully functional, database-backed application. This application, with the codename Snap, will allow users to create an account with a username and password. In this account, users will be allowed to add, update, and delete so-called semiprivate snaps of text (with a focus on lines of code) that can be shared with others.

For this, you should be familiar with at least one of the following relational database systems: PostgreSQL, MySQL, or SQLite. Additionally, some knowledge of the SQLAlchemy Python library, which acts as an abstraction layer and object-relational mapper for these (and several other) databases, will be an asset. If you are not well versed in the usage of SQLAlchemy, fear not. We will have a gentle introduction to the library that will bring new developers up to speed and serve as a refresher for the more experienced folks.

The SQLite database will be our relational database of choice, due to its very simple installation and operation. The other database systems that we listed are all client/server-based, with a multitude of configuration options that may need adjustment depending on the system they are installed in, while SQLite's default mode of operation is self-contained, serverless, and zero-configuration. Any major relational database supported by SQLAlchemy as a first-class citizen will do.

(For more resources related to this topic, see here.)

Diving In
To make sure things start correctly, let's create a folder where this project will exist and a virtual environment to encapsulate any dependencies that we will require:

$ mkdir -p ~/src/snap && cd ~/src/snap
$ mkvirtualenv snap -i flask

This will create a folder called snap at the given path and take us to this newly created folder. It will then create the snap virtual environment and install Flask in this environment. Remember that the mkvirtualenv tool will create the virtual environment, which will be the default set of locations to install packages from pip, but the mkvirtualenv command does not create the project folder for you. This is why we run a command to create the project folder first and then create the virtual environment. Virtual environments, by virtue of the $PATH manipulation performed once they are activated, are completely independent of where in your file system your project files exist.

We will then create our basic blueprint-based project layout with an empty users blueprint:

application
├── __init__.py
├── run.py
└── users
    ├── __init__.py
    ├── models.py
    └── views.py

Flask-SQLAlchemy
Once this has been established, we need to install the next important set of dependencies: SQLAlchemy, and the Flask extension that makes interacting with this library a bit more Flask-like, Flask-SQLAlchemy:

$ pip install flask-sqlalchemy

This will install the Flask extension to SQLAlchemy, along with the base distribution of the latter and several other necessary dependencies in case they are not already present. Now, if we were using a relational database system other than SQLite, this is the point where we would create the database entity in, say, PostgreSQL, along with the proper users and permissions, so that our application could create tables and modify the contents of these tables. SQLite, however, does not require any of that. Instead, it assumes that any user that has access to the filesystem location that the database is stored in should also have permission to modify the contents of this database.
For the sake of completeness, however, here is how one would create an empty database in the current folder of your filesystem:

$ sqlite3 snap.db
# hit control-D to escape out of the interactive SQL console if necessary.

As mentioned previously, we will be using SQLite as the database for our example applications, and the directions given will assume that SQLite is being used; the exact name of the binary may differ on your system. You can substitute the equivalent commands to create and administer the database of your choice if anything other than SQLite is being used.

Now, we can begin the basic configuration of the Flask-SQLAlchemy extension.

Configuring Flask-SQLAlchemy
First, we must register the Flask-SQLAlchemy extension with the application object in application/__init__.py:

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///../snap.db'
db = SQLAlchemy(app)

The value of app.config['SQLALCHEMY_DATABASE_URI'] is the escaped relative path to the snap.db SQLite database that we created previously. Once this simple configuration is in place, we will be able to create the SQLite database automatically via the db.create_all() method, which can be invoked in an interactive Python shell:

$ python
>>> from application import db
>>> db.create_all()

This should be an idempotent operation, which means that nothing would change even if the database already exists. If the local database file did not exist, however, it would be created. This also applies to adding new data models: running db.create_all() will add their definitions to the database, ensuring that the relevant tables have been created and are accessible. It does not, however, take into account the modification of an existing model/table definition that already exists in the database. For this, you will need to use the relevant tools (for example, the sqlite CLI) to modify the corresponding table definitions to match those that have been updated in your models, or use a more general schema tracking and updating tool, such as Alembic, to do the majority of the heavy lifting for you.

SQLAlchemy basics
SQLAlchemy is, first and foremost, a toolkit for interacting with relational databases in Python. While it provides an incredible number of features—including SQL connection handling and pooling for various database engines, the ability to handle custom datatypes, and a comprehensive SQL expression API—the one feature that most developers are familiar with is the Object Relational Mapper. This mapper allows a developer to connect a Python object definition to a SQL table in the database of their choice, thus allowing them the flexibility to control the domain models in their own application, requiring only minimal coupling to the database product and the engine-specific SQLisms that each of them exposes.

While debating the usefulness (or the lack thereof) of an object relational mapper is outside the scope of this article, for those who are unfamiliar with SQLAlchemy we will provide a list of benefits that using this tool brings to the table, as follows:

Your domain models are written to interface with one of the most well-respected, tested, and deployed Python packages ever created—SQLAlchemy.
Onboarding new developers to a project becomes an order of magnitude easier, due to the extensive documentation, tutorials, books, and articles that have been written about using SQLAlchemy.
Import-time validation of queries written using the SQLAlchemy expression language, instead of having to execute each query string against the database to determine whether a syntax error is present. The expression language is in Python, and can thus be validated with your usual set of tools and IDE.
Thanks to the implementation of design patterns such as the Unit of Work, the Identity Map, and various lazy loading features, the developer can often be saved from performing more database/network roundtrips than necessary. Considering that the majority of a request/response cycle in a typical web application can easily be attributed to network latency of one form or another, minimizing the number of database queries in a typical response is a net performance win on many fronts.
While many successful, performant applications can be built entirely on the ORM, SQLAlchemy does not force it upon you. If, for some reason, it is preferable to write raw SQL query strings or to use the SQLAlchemy expression language directly, then you can do that and still benefit from the connection pooling and the Python DBAPI abstraction functionality that is the core of SQLAlchemy itself.

Now that we've given you several reasons why you should be using this database query and domain data abstraction layer, let's look at how we would go about defining a basic data model.
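As a preview, the following is a minimal sketch of what such a model might look like in application/users/models.py. The field names and lengths here are assumptions for illustration, not the final schema:

from application import db

class User(db.Model):
    # An auto-incrementing surrogate primary key
    id = db.Column(db.Integer, primary_key=True)
    # The credentials our users will sign up with
    username = db.Column(db.String(60), unique=True, nullable=False)
    password = db.Column(db.String(255), nullable=False)

    def __repr__(self):
        return '<User %r>' % self.username

Once this module is imported, a subsequent db.create_all() will create the corresponding table in snap.db.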
Summary
After having gone through this article, we have seen several facets of how Flask may be augmented with the use of extensions. While Flask itself is relatively spartan, the ecology of extensions that are available makes it such that building a fully fledged user-authenticated application may be done quickly and relatively painlessly.

Resources for Article:
Further resources on this subject:
Creating Controllers with Blueprints [article]
Deployment and Post Deployment [article]
Man, Do I Like Templates! [article]


Moodle 1.9: Working with Mind Maps

Packt
30 Jun 2010
7 min read
In this virtual classroom, we are going to enrich students' use of vocabulary, because creating these techniques requires keywords, which then have to be used in a piece of writing. The mind maps will be designed according to the facilities that the different software packages provide us to exploit them.

Pictures in mind maps—using Buzan's iMindMap V4
In this task, we are going to use the software of the inventor of mind maps: Buzan's iMindMap V4. We are going to work on the topic of robots, and afterwards students are going to write an article about them. We are going to provide students with images of different robots, taking into account that a robot is not a silver rectangular human look-alike; robots may have several shapes and can be used for different purposes. Read the next screenshot, taken from Buzan's iMindMap V4 software, about inserting images in a mind map:

Getting ready
Let's create a mind map related to robots with pictures. After creating the mind map, students are going to look at it and write an article about the topic. In this case, the mind map will be designed with images only, so as to "trigger associations within the brain" of our students. You can download a free trial of this software from the following webpage: http://www.thinkbuzan.com/uk/.

How to do it...
After downloading the free trial (you may also buy the software), create a new file. Then follow these steps to create a mind map with images using the previously mentioned software:

Choose a central image in order to write the name of the topic in the middle, as shown in the next screenshot:
In Enter some text for your central idea, enter Robots, as shown in the previous screenshot, and click on Create.
Click on Draw and select Organic, and draw the lines of the mind map, as shown in the following screenshot:
To add images to the mind map, click on Insert and select Floating image, as shown in the next screenshot:
Click on View, select Image Library, and search for images, as shown in the next screenshot:
Another option is to look for an image in Microsoft Word and copy and paste the images into the mind map.
Save the file.

How it works...
We are going to select the Weekly outline section where we want to insert the activity. Then we are going to create a link to a file. Later, we will ask students to upload a single file in order to carry out the writing activity. Follow these steps:

Click on Add a resource and select Link to a file or website.
Complete the Name block.
Complete the Summary block.
Click on Choose or upload a file.
Click on Upload a file.
Click on Browse and search for the file, then click on Open.
Click on Upload this file and then select Choose.
In the Target block, select New window.
Click on Save and return to course.

The mind map appears as shown in the following screenshot:

There's more...
We saw how to create a mind map related to robots previously; now we will see how to upload this mind map as an image in your course.

Uploading the mind map as a .png file
If your students do not have this software and cannot open this file, you may upload the mind map to the Moodle course as an image. These are the steps that you have to follow:

Open the file and fit the mind map to the screen.
Press the Prt Scr key.
Paste (Ctrl + V) the image into Paint or Inkscape (or any similar software).
Select the section of the mind map only, as shown in the next screenshot:
Save the image as .png so that you can upload the image of the mind map to the Moodle course.
Drawing pictures using a pen sketch

It is also possible to use a digital pen, also known as a pen sketch, to draw elements for the mind map. For example, as we are dealing with robots in this mind map, you can draw a robot's face and add it to the mind map, as shown in the next screenshot:

Creating a writing activity

You may add the mind map as a resource in the Moodle course, or you may insert an image in it. In both cases, students can write an article about robots. If you upload the mind map in the Moodle course, you can do it in the Description block of Upload a single file, and you do not have to split the activity in two.

Adding data to pictures—creating a mind map using MindMeister

In this recipe, we are going to work with MindMeister software, which is free and open source. We are going to create a mind map, inserting links to websites that contain information as well as pictures. Why? Because if we include more information in the mind map, we guide our students on how to write. Apart from that, they are going to read more before writing, so we are also exercising reading comprehension in a way. However, they may also summarize information if we create a link to a website. So let's get ready!

Getting ready

We are going to enter http://www.mindmeister.com/ and then Sign up for free. One version is free to use, or you may choose either of the other two, which are commercial. After signing up, we are going to develop a mind map for our students to work with. There is a tutorial video that explains, in a very simple and easy way, how to design a mind map using this software, so it is worth watching.

How to do it...

We are going to enter the previously mentioned website and start working on this new mind map. In this case, I have chosen the topic "Special days around the world". Follow these steps:

1. Click on My New Mind Map and write the name of the topic in the block in the middle.
2. Click on Connect and draw arrows, adding as many New node blocks as you wish.
3. Add a website giving information for each special occasion. Click on the Node, then click on Extras–Links | Links and complete the URL block, as shown in the next screenshot:
4. Then click on the checkmark icon.
5. Repeat the same process for each occasion. You can add icons or images to the nodes of the mind map.
6. Click on Share Map at the bottom of the page, as shown in the next screenshot:
7. Click on Publish and change the button to ON, as shown in the next screenshot:
8. Select Allow edit for everybody (WikiMap), as shown in the previous screenshot.
9. You can also embed the mind map. When you click on Embed map, the next screenshot will appear:
10. Copy the Embed code and click on Close.
11. Click on OK.

How it works...

After creating the mind map about special occasions around the world, we will either embed it or create a link to a website for our students to work on a writing activity. Here the proposal is to work through a Wiki, because in Map Properties we have clicked on Allow edit for everybody (WikiMap) so that students can modify the mind map with their ideas. Select the Weekly outline section where you want to insert the activity; these are the steps you have to follow:

1. Click on Add an activity and select Wiki.
2. Complete the Name block.
3. Complete the Summary block. You may either embed the mind map or create a link to a website, as shown in the next screenshot (an illustrative embed snippet follows these steps):
4. Click on Save and return to course.
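For reference, the Embed code copied in the steps above is just a small HTML snippet that can be pasted into any HTML-capable block in Moodle, such as the Summary block. The sketch below is purely illustrative: the URL, map ID, and dimensions are placeholders of my own, so always paste the exact code provided by MindMeister's Embed map dialog instead.

<!-- Illustrative only: a generic embed snippet with a placeholder URL and map ID. -->
<!-- Always use the exact code copied from MindMeister's Embed map dialog. -->
<iframe src="http://www.mindmeister.com/maps/YOUR-MAP-ID"
        width="600" height="400" frameborder="0">
</iframe>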

Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux just using HTML, CSS, and some JavaScript. These are the exact same languages you use to build any web app. You basically get your very own frameless Webkit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser.

As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch.

You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app.

Getting started

First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project.

Since we are going to be building two apps (a desktop app and a mobile app), it's better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:

- index.html   // This is the starting point for our desktop app
- css          // Our desktop app styles
- js           // This is where the magic happens
- remote       // This is where the magic happens (Part 2)
- libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player       // Our youtube player
- Gruntfile.js // Build scripts
- run.bat      // run.bat runs the app on Windows
- run.sh       // sh run.sh runs the app on Mac

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install.

On Mac or Linux:

sudo npm install node-gyp -g
sudo npm install grunt-cli -g

On Windows:

npm install node-gyp -g
npm install grunt-cli -g

Leave the Terminal open. We'll be using it again in a bit.

All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:

{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",
  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",
  "//": "This is the first html the app will load. Just leave it this way",
  "main": "app://host/index.html",
  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",

  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The Window Title for the app",
    "title": "Remote",
    "//": "The Icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

npm install
grunt nodewebkitbuild

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies mentioned in package.json, both the regular ones and the development ones, such as grunt and grunt-node-webkit-builder. grunt nodewebkitbuild then downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started.

Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.

Building the desktop app

All web apps (or websites for that matter) start with an index.html file. We are going to create just that to get our app to run:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Youtube TV</title>

  <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css' />
  <link href="css/normalize.css" rel="stylesheet" type="text/css" />
  <link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>

  <div id="serverInfo">
    <h1>Youtube TV</h1>
  </div>

  <div id="videoPlayer">
  </div>

  <script src="js/jquery-1.11.1.min.js"></script>
  <script src="js/youtube.js"></script>
  <script src="js/app.js"></script>

</body>
</html>

As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and offer some play/pause controls so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment it to open it automatically
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on Port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));

With those 7 lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:

// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

  // When a remote connects to the app, let it know immediately the current status of the video (play/pause)
  socket.emit('statusChange', Youtube.status);

  // This is what happens when we receive the watchVideo command (picking a video from the list)
  socket.on('watchVideo', function (video) {
    // video contains a bit of info about our video (id, title, thumbnail)
    // Order our Youtube Player to watch that video
    Youtube.watchVideo(video);
  });

  // These are playback controls. They receive the "play" and "pause" events from the remote
  socket.on('play', function () {
    Youtube.playVideo();
  });
  socket.on('pause', function () {
    Youtube.pauseVideo();
  });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
  io.emit('statusChange', status);
};

That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect, so they can update their UI with the correct buttons (if it's playing, show the pause button and vice versa).

Now we just need to build the remote.
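Before building the remote, it can be useful to check that the server answers events correctly. The following is a minimal smoke-test sketch that is not part of the original project: it assumes you install the socket.io-client package (npm install socket.io-client), and the file name test-remote.js is my own choice. Run it with node test-remote.js while the app is open.

// test-remote.js: a quick, optional check of the app's websocket events
// (a sketch of mine, not part of the youtube-tv project)
var io = require('socket.io-client');

// Connect to the desktop app's web server
var socket = io('http://127.0.0.1:8080');

socket.on('connect', function () {
  console.log('Connected to the Youtube TV app');
  // Ask the app to start playback, just like the remote's play button does
  socket.emit('play');
});

// The app emits the player status ('play', 'pause' or 'stop') on connection
// and whenever it changes, so we should see at least one line logged here
socket.on('statusChange', function (status) {
  console.log('Player status is now: ' + status);
});

If the status shows up in your terminal, the plumbing between Express, socket.io, and the player is working.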
Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app. In remote/index.html, add the following HTML:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>TV Remote</title>

  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

  <link rel="stylesheet" href="/css/normalize.css" />
  <link rel="stylesheet" href="/css/styles.css" />
</head>
<body>

  <div class="controls">
    <div class="search">
      <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..." />
    </div>
    <div class="playback">
      <button class="play">&gt;</button>
      <button class="pause">||</button>
    </div>
  </div>

  <div id="results" class="video-list">
  </div>

  <div class="__templates" style="display:none;">
    <article class="video">
      <figure><img src="" alt="" /></figure>
      <div class="info">
        <h2></h2>
      </div>
    </article>
  </div>

  <script src="/socket.io/socket.io.js"></script>
  <script src="/js/jquery-1.11.1.min.js"></script>

  <script src="/js/search.js"></script>
  <script src="/js/remote.js"></script>

</body>
</html>

Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want, and a sketch of what it might contain follows the remote.js walkthrough below), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search youtube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(function () {
    searchYoutube($('#searchQuery').val());
  }, 500);
});

// When we click on a video, watch it on the App
$('#results').on('click', '.video', function (event) {
  // Send an event to notify the server we want to watch this video
  socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause), alter the playback controls
socket.on('statusChange', function (status) {
  if (status === 'play') {
    $('.playback .pause').show();
    $('.playback .play').hide();
  } else if (status === 'pause' || status === 'stop') {
    $('.playback .pause').hide();
    $('.playback .play').show();
  }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
  socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
  socket.emit('pause');
});

This is very similar to our server, except that we use socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls.
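A note on search.js, which ships with the project's assets and is not reproduced in the article: the remote's keyup handler above calls searchYoutube, and a helper along the following lines would do the job. This is an illustrative sketch of mine rather than the project's actual code. It assumes the YouTube Data API v3, which requires a real API key in place of the YOUR_API_KEY placeholder; the original 2014 project may rely on an older version of the API.

// A hypothetical stand-in for search.js: query the YouTube Data API v3
// and render each result into the #results list using the hidden template.
// YOUR_API_KEY is a placeholder; v3 requires a real key from the Google Developers Console.
function searchYoutube(query) {
  $.getJSON('https://www.googleapis.com/youtube/v3/search', {
    part: 'snippet',
    type: 'video',
    maxResults: 10,
    q: query,
    key: 'YOUR_API_KEY'
  }, function (response) {
    var $results = $('#results').empty();

    $.each(response.items, function (i, item) {
      // Clone the hidden template and fill it in with this video's data
      var $video = $('.__templates .video').clone();
      $video.find('h2').text(item.snippet.title);
      $video.find('img').attr('src', item.snippet.thumbnails.medium.url);

      // Store the data that remote.js sends to the app via socket.emit('watchVideo', ...)
      $video.data({
        id: item.id.videoId,
        title: item.snippet.title,
        thumbnail: item.snippet.thumbnails.medium.url
      });

      $results.append($video);
    });
  });
}

Each rendered result then carries the id, title, and thumbnail that remote.js sends to the app with socket.emit('watchVideo', $(this).data()).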
The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

sh run.sh

If you are on Windows:

run.bat

If everything worked properly, you should be seeing the app, and if you open a web browser at http://127.0.0.1:8080 the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone). On js/app.js add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(require('os').hostname(), function (err, add, fam) {
    typeof callback == 'function' ? callback(add) : null;
  });
}

// To make things easier, find out the machine's ip and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort + '</strong><br/>to open the remote');
});

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app in fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.


Moodle 1.9: Exploring Design Portfolios

Packt
27 May 2010
9 min read
(For more resources on Moodle 1.9, see here.)

Exploring the Exabis portfolio

The Exabis portfolio is a third-party add-on that can be placed in your courses to allow students to store and organize their work, and to share it with others, for example, external verifiers. The code can be downloaded from the Modules and plugins section at the Moodle website (http://moodle.org/mod/data/view.php?d=13&rid=1142&filter=1). Once the code has been installed, the site administrator will need to check the settings of the block for all users.

Site-wide settings

The first job, for an administrator, is to make sure the settings meet the institution's needs. These settings are available on the administration panel. You may need your site administrator to adjust these for you if you do not have these permissions. The following screenshot shows the two options available:

The settings will be determined by which version you have installed on your system; in this case, the options relate to how the portfolio looks. The key feature of recent portfolios is the ability to create views, which are customized web pages. Most students will be familiar with this activity through social networking sites.

Installing the Exabis block into a course

To use the Exabis block, you first need to enable editing within the course you are responsible for. To do this, you need to click on the Turn editing on button, as shown in the following screenshot:

This will change the view of your course, and a block will now be visible in the right-hand column to add further blocks to your course. The Add button, as shown in the previous screenshot, is a drop-down list and will list all available blocks in alphabetical order. You need to scroll down until you find the Exabis E-Portfolio listing and then click to add this block. Once the block has been added to your course area, you can make some more localized adjustments.

In the staff view, there are three options. However, the two lower options merely point to different tabs on the same menu as the MyPortfolio link. Once you open the portfolio, you can see the layout of the block and the functions that it supports, as shown in the following screenshot:

The personal information tab

The first tab allows students to build up some personal information so that they have a sort of limited resume or CV. Once students click on the Information tab, they will see one button (Edit), which will open an edit window to allow them to add some notes and details.

The Categories tab

After students have entered some basic information about themselves, they need to organize their material. This is achieved initially by establishing some categories under which the information they gather can be structured. In this example, using the Product Design course, the student may need to create categories for each section they are working with. In the UK, for example, this would be: Materials and Components, Design and Market Influence, and Process and Manufacture.

By clicking on the Categories tab, there will, as with the Information tab, be an Edit button visible. Clicking on this button will open a window to create the required categories, as shown in the following screenshot:

By clicking on the New button, as shown in the previous screenshot, the category will be created, and you will then have the choice to add sub-categories or new categories as required.
The layout of this edit window is as shown in the following screenshot:

These can be further broken down into sub-categories that match the course specification. The process is the same as creating categories, and with each new category created, an additional field appears for adding sub-categories, as seen in the previous screenshot. The resulting structure could look similar to the following screenshot, where each part of the specification has a corresponding category and sub-category.

These categories will now be available in drop-down menus for the students to use when adding various resources, such as files and notes, as shown in the following screenshot:

In the previous screenshot, you can see that students have a drop-down box under Categories, which lists categories and sub-categories to link their resources to.

Building up the portfolio content

Students can now build up their portfolio of evidence and can share this information, if they need to, with staff, other students, or external examiners. The information is organized through the main My Portfolio tab, as shown in the following screenshot:

Under this tab, there are sub-tabs that allow the students to link to websites, upload files, and also make notes about some of the material they have gathered. Each of these can now be associated with a category or sub-category to give some clear definition to their research work. The following screenshot shows a student adding some files to a sub-category related to design:

In the previous screenshot, students could attach a file, which may be some notes they made on a factory visit and have scanned. Gradually, they can start building up a detailed folder of information and links to other useful resources. The following screenshot shows the MyPortfolio view as a student builds up some of their reference material and notes. Each of the resources is clearly categorized and time stamped, and the type of each resource is easy to see.

Creating views

In the release under discussion here (version 3.2.3, release 168) there is a tab to create views. This is still under development and not fully functional, but may well be functional by the time you install it. Clicking on the Views tab will show a button to add a view. Clicking on the Add View button will open an edit window to allow the student to organize their views, as shown in the following screenshot:

The views are quite basic at present, but will allow students to build up a portfolio of evidence in an easy and organized way.

Sharing their work and thoughts

If students would like to share some of their work with each other, then they can, via the Views tab. This tab, in the latest version, has a link to allow sharing. Once students enable the sharing function by clicking on the Change link, they can then choose what type of sharing they require and with whom. In the case shown here, the student can elect to share his/her work externally by creating a link to his/her folder from an external or an internal link.

The Internal Access option allows them to further specify who can see their portfolio. In this case, they can share it with all of the staff who teach them in the design and technology faculty, or just some of the staff. When the product design teacher logs in and checks for shared portfolios, they will see this student's work.

Importing and exporting portfolios

Increasingly with e-portfolios, there is the need for students to be able to take their entire portfolio with them to other places of study or work.
With the Exabis system, there is the ability to export the student's work in a number of formats. The two formats currently available are Sharable Content Object Reference Model (SCORM) and Extensible Markup Language (XML). Both of these are file structures used to import and export groups of files from web-based systems such as Moodle (a simplified sketch of a SCORM manifest appears at the end of this section). The import facility in Exabis will import a SCORM file, which is usually in a zipped format. The options for Export/Import are shown in the following screenshot:

In both cases shown here, the export will allow students to save their work as a ZIP file, and depending on how they have structured their portfolio, they will have a range of choices regarding what to include in the export. The following screenshot shows the options for a SCORM export.

The student, as shown in the previous screenshot, has chosen to save his/her Product Development material in a SCORM file. Clicking on the Create SCORM-File button will open a download dialog window where the student can choose where on his/her computer to save the zipped file.

An additional feature shown in the previous Export your portfolio screenshot is the ability to include Moodle assignments in the portfolio of evidence. This would be useful if students take the portfolio to a new job. Clicking on the Import from Moodle-Assignments link results in a screen where students can add their assignments, as shown in the following screenshot:

Under the Action column shown in this screenshot, the student can click on the add this file link. Clicking this link will open the MyPortfolio: Add window, and the student can link this assignment to a category. The resulting link will then appear in their MyPortfolio: Files view. The assignment itself will be a hyperlink, which will open the word-processed assignment when clicked. Opening the assignment link will create a full URL for where the assignment can be located, so that external examiners or employers can also view the work. It allows additional notes to be added by the student, such as follow-up comments, as shown in the following screenshot:

The additional commentary shows how the student has used the portfolio to track their learning process and to reflect on their earlier work. The whole process is therefore contained in an organized structure that the student controls and that can be modified as their greater understanding dictates.

Future developments in Exabis

As mentioned, the views in this portfolio are not yet fully developed, but the current version is very usable. In order to have more flexibility and functionality, it is necessary to install a more fully featured e-portfolio, such as MyStuff, which we will investigate in the next section.
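As a rough illustration of what sits inside such an export, a SCORM package is a ZIP file whose root contains an imsmanifest.xml describing the content. The sketch below is heavily simplified and hypothetical: the identifiers, titles, and file names are invented for illustration, and a real manifest (including the one Exabis generates) carries additional namespace declarations, schema references, and metadata.

<?xml version="1.0" encoding="UTF-8"?>
<!-- A simplified, hypothetical manifest; real SCORM manifests include
     further namespace declarations, schema references, and metadata. -->
<manifest identifier="portfolio.export" version="1.1"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2">
  <organizations default="portfolio">
    <organization identifier="portfolio">
      <title>My Portfolio</title>
      <item identifier="item1" identifierref="res1">
        <title>Product Development notes</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent" href="notes.html">
      <file href="notes.html"/>
    </resource>
  </resources>
</manifest>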