
How-To Tutorials - Server-Side Web Development

406 Articles

Roles and Permissions in Moodle Administration: Part1

Packt
23 Oct 2009
1 min read
Let's get started.

Moodle's predefined roles

Moodle comes with a number of predefined roles. These standard roles are suitable for some educational setups, but most institutions require modifications to the roles system in order to tailor Moodle to their specific needs.

Each role has permissions for a number of actions that can be carried out in Moodle. For example, an administrator and a course creator are able to create new courses, whereas all other roles are denied this right. Likewise, a teacher is allowed to moderate forums, whereas students are only allowed to contribute to them. Before we can actually do anything with roles, we need to understand the concept of contexts, which is dealt with next.

Contexts

Contexts are the areas in Moodle where roles can be assigned to users. A role can be assigned within different contexts. A user has a role in any given context, where a context can be a course, an activity module, a user, a block, or Moodle itself. Moodle comes with seven contexts, which you will come across a lot in this article.


Adding Feedback to the Moodle Quiz Questions

Packt
08 Apr 2013
4 min read
Getting ready

Any learner taking a quiz may want to know how well he or she has answered the questions posed. Often, when working with Moodle, the instructor is at a distance from the learner, and providing feedback is a great way of enhancing communication between learner and instructor. Learner feedback can be provided at multiple levels using Moodle Quiz: you can create feedback at various levels in both the questions and the overall quiz. Here we will examine feedback at the question level.

General feedback

When we add general feedback to a question, every student sees the feedback, regardless of their answer to the question. This is a good opportunity to provide clarification for the learner who guessed a correct answer, as well as for the learner whose response was incorrect.

Individual response feedback

We can create feedback tailored to each possible response in a multiple choice question. This feedback can be more focused in nature. Often, a carefully crafted distracter in a multiple choice question can reveal misconceptions, and the feedback can provide the required correction as soon as the learner completes the quiz. Feedback given while the question is fresh in the learner's mind is very effective.

How to do it...

Let's create some learner feedback for some of the questions that we have created in the question bank:

First of all, let's add general feedback to a question. Returning to our True/False question on texture, we can see that general feedback is effective when there are only two choices. Remember that this type of feedback will appear for all learners, regardless of the answer they submitted. The intention of this feedback is to reflect the correct solution and also give more background information to enhance the teaching opportunity.

Next, let's look at how to create specific feedback for each possible response that a learner may submit. This is done by adding individual response feedback. Returning to our multiple choice question on the application of the element line, a specific feedback response tailored to each possible choice will provide helpful clarification for the student. This type of feedback is entered after each possible choice, with one feedback string to reinforce a correct response and another for an incorrect response. In this way, the feedback the learner receives is tailored to the response they have submitted, which makes it much more specific to the learner's choice.

For the embedded question (Cloze), feedback is easy to add in Moodle 2.0; the original article's screenshots show the question we created with feedback added, and what that feedback looks like to the student.

How it works...

We have now improved the questions in our exam bank by providing feedback for the learner. We have created both general feedback that all learners will see and specific feedback for each response the learner may choose. As we think about the learning experience, we can see that immediate feedback within our questions is an effective way to reinforce learning. This is another feature that makes Moodle Quiz such a powerful tool.

There's more...

As we think about the type of feedback we want for the learner, we can combine feedback for individual responses with general feedback. There are also options for feedback for any correct response, for any partially correct response, and for any incorrect response. Feedback serves to engage the learners and personalize the experience.
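As a rough illustration (not from the original recipe), the same kinds of question-level feedback can be expressed in Moodle's GIFT import format, where # attaches feedback to an individual answer and #### supplies the general feedback shown to every learner. The question text and feedback strings below are hypothetical:

// GIFT sketch of a multiple choice question with per-answer and general feedback
::Element line::Which element is used to suggest movement in a composition? {
=Line#Correct. Line leads the viewer's eye through the work.
~Texture#Not quite. Texture describes surface quality, not movement.
~Value#No. Value refers to lightness and darkness.
####Line is one of the basic elements of art; review the section on line before retrying.
}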
We created question categories, organized our questions into categories, and learned how to add learner feedback at various levels inside the questions. We are now ready to configure a quiz.

Summary

In this article, we have seen how to add feedback to the questions of the Moodle Quiz.

Resources for Article:

Further resources on this subject: Integrating Moodle 2.0 with Mahara and GoogleDocs for Business [Article], What's New in Moodle 2.0 [Article], Moodle 2.0 FAQs [Article]


IRC-style chat with TCP server and event bus

Packt
27 Aug 2013
6 min read
Step 1 – fresh start

In a new folder called, for example, 1_PubSub_Chat, let's open our editor of choice and create a file called pubsub_chat.js. Also, make sure that you have a terminal window open and have moved into the newly created project directory.

Step 2 – creating the TCP server

TCP servers are called net servers in Vert.x. Creating and using a net server is really similar to HTTP servers:

var vertx = require('vertx');            /* 1 */
var netServer = vertx.createNetServer(); /* 2 */
netServer.listen(1234);                  /* 3 */

1. Obtain the vertx bridge object to access the framework features.
2. Ask Vert.x to create a TCP server (called NetServer in Vert.x).
3. Actually start the server by telling it to listen on TCP port 1234.

Let's test whether this works. This time we need another terminal to run the telnet command:

$ telnet localhost 1234

The terminal should now be connected and waiting to send/receive characters. If you get "connection refused" errors, make sure the server is running.

Step 3 – adding a connect handler

Now, we need to place a block of code to be executed as soon as a client connects. Define a handler function; this function will be called every time a client connects to the server:

var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
}).listen(1234);

A NetServer connect handler accepts the socket object as a parameter; this object is our gateway to reading, writing, or closing the connection to a client. We use the socket object to write a greeting to the newly connected clients. If we test this as in Step 2, we see that the server now welcomes us with a message containing an identifier of the client: its origin host and origin port.

Step 4 – adding a data handler

We just learned how to execute a block of code at the moment a client connects. Now we are interested in doing something else when we receive new data from a client connection. The socket object we used in the previous step for writing data back to the client accepts a handler function too: the data handler. Let's add one. It is going to be called every time the client sends a new string of data:

var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    socket.write(msg);
  });
}).listen(1234);

Here we react to the new data event by writing the same data back to the socket (plus a prefix). What we have now is a sort of echo server, which returns to the sender the same message with a prefix string.

Step 5 – adding the event bus magic

The base requirement of a chat server is that every time a client sends a message, the rest of the connected clients should receive it. We will use the event bus, the messaging service provided by the framework, to send (publish) received messages to a broadcast address.
Each client will subscribe to the address upon connection and receive other clients' messages from there:

var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  vertx.eventBus.registerHandler('broadcast_address', function(event) {
    socket.write(event);
  });
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    vertx.eventBus.publish('broadcast_address', msg);
  });
}).listen(1234);

As soon as a client connects, it listens on the event bus for new data published on the address broadcast_address. When a client sends a string of characters to the server, this data is published to the broadcast address, triggering a handler function that writes the string to every client's socket. The chat server is now complete! To try it out, just open three terminals:

Terminal 1: $ vertx run pubsub_chat.js
Terminal 2: $ telnet localhost 1234
Terminal 3: $ telnet localhost 1234

Now we have a server and two clients running and connected. Type something in terminal 2 or 3 and see the message being broadcast to both the other windows:

$ telnet localhost 1234
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Hello from terminal two!
13:6:56 <0:0:0:0:0:0:0:155991> Hello from terminal two!
13:7:24 <0:0:0:0:0:0:0:155992> Hi there, here's terminal three!
13:7:56 <0:0:0:0:0:0:0:155992> Great weather today!

Step 6 – organizing a more complex project

Since Vert.x is a polyglot platform, we can choose to write an application (or a part of it) in any of the many supported languages. The granularity of the language choice is at the verticle level. It's important to give a good architecture to a non-trivial project from the beginning. Follow this list of generic principles to avoid performance bottlenecks or the need for massive refactoring in the future (a sketch of the startup verticle follows at the end of this article):

- Wrap synchronous libraries or legacy code inside a worker verticle (or a module). This will keep the blocking code away from the event loop threads.
- Divide the problem into isolated domains and write a verticle to handle each of them (for example, a database persistor verticle, a web server verticle, an authenticator verticle, and a cache manager verticle).
- Use a startup verticle. This will be the single entry point to the application. Its responsibilities will be to: validate the configuration file; programmatically deploy other verticles in the correct order; decide how many instances of a verticle to create (the decision might depend on the environment, for example, the number of available processors); and register periodic tasks.

Summary

In this article, we learned, step by step, how to create an IRC-style chat using the TCP server, interconnect the server and the clients using the event bus, and enable different types of communication between them.

Resources for Article:

Further resources on this subject: Getting Started with Zombie.js [Article], Building a Chat Application [Article], Accessing and using the RDF data in Stanbol [Article]
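As promised in Step 6, here is a minimal sketch of a startup verticle, assuming the Vert.x 2.x JavaScript container API; the verticle file names are hypothetical, and the exact signatures may vary between Vert.x versions:

// startup.js - single entry point that deploys the rest of the application
var container = require('vertx/container');
var console = require('vertx/console');

// Keep blocking or legacy code off the event loop by deploying it as a worker.
container.deployWorkerVerticle('db_persistor.js');

// Deploy the event-loop verticles; the handler reports success or failure.
container.deployVerticle('pubsub_chat.js', {}, 1, function(err, deployId) {
  if (!err) {
    console.log('Chat verticle deployed as ' + deployId);
  } else {
    console.log('Deployment failed: ' + err);
  }
});

Saved as startup.js, this would be launched with vertx run startup.js and would bring up the other verticles in order.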


Overview of TDD

Packt
06 Nov 2015
11 min read
In this article, Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, explain that testing is one of the most important phases in the development of any project. In the traditional software development model, testing is usually executed after the code for the functionality is written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn about the following:

- Complexity of web pages
- Understanding TDD
- Benefits of TDD and common myths

Complexity of web pages

When Tim Berners-Lee wrote the first ever web browser around 1990, it was only meant to handle HTML; there was neither CSS nor JavaScript. Who knew that the WWW would become the most powerful communication medium? Since then, a number of technologies and tools have emerged to help us write code and run it for our needs. We do a lot these days with the help of the Internet. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not just some CSS styles, not some basic JavaScript tweaks. That time has passed. Pick any site you visit daily, view the source by opening the developer tools of the browser, and look at the source code of the site. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript and CSS code is too large to keep inline, so we need to keep it in different files, sometimes even different folders, to keep it organized.

Now, what happens before you publish all that code live? You test it. You test each line and see if it works correctly. Well, that's a programmer's job. Zero defects: that's what every organization tries to achieve. With that in focus, testing comes into the picture; more importantly, a development style that is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding test-driven development

TDD, short for test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to this as "rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development.

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD."

If you try to find references to TDD, you will even come across a few from 1968. It is not a new technique, though for a long time it did not get much attention. Recently, interest in TDD has been growing, and as a result, there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among the popular tools and frameworks.
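Before going further, here is a minimal sketch of the test-first rhythm in Jasmine-style JavaScript; the add function and its spec are hypothetical, not from the book:

// Step 1: write the spec first. Run it: it fails, because add() does not exist yet.
describe('add', function () {
  it('returns the sum of two numbers', function () {
    expect(add(2, 3)).toEqual(5);
  });
});

// Step 2: write just enough production code to make the failing spec pass.
function add(a, b) {
  return a + b;
}

// Step 3: re-run the spec (green), then refactor and repeat.

The point is the order: the spec exists and fails before a single line of production code is written.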
More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process that requires a developer to write a test before the production code. A developer writes a test, expects a behavior, and writes code to make the test pass. Needless to say, the test will always fail at the start.

The need for testing

To err is human. As developers, it's not easy to find defects in our own code, and we often think that our code is perfect. But there is always some chance that a defect is present. Every organization or individual wants to deliver the best software they can. This is one major reason that every piece of software is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed:

- To check whether the software functions as per the requirements
- There will not be just one device or one platform on which your software runs
- The end user will perform actions that you, as a programmer, never expected

A study conducted by the National Institute of Standards and Technology (NIST) in 2002 reported that software bugs cost the U.S. economy around $60 billion annually, and that with better testing more than one-third of that cost could be avoided. The earlier a defect is found, the cheaper it is to fix: a defect found post release costs 10-100 times more to fix than one detected and fixed earlier. The report of the NIST study can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. Plotted over time, the cost curve is roughly exponential: the cost of fixing a defect grows as the project matures. Sometimes it is not possible to fix a defect without making changes to the architecture; in those cases the cost can be so high that developing the software from scratch seems like the better option.

Benefits of TDD and common myths

Every methodology has its own benefits and myths among people. The following sections analyze the key benefits and the most common myths of TDD.

Benefits

TDD has its own advantages over regular development approaches. There are a number of benefits that help in deciding to use TDD over the traditional approach:

- Automated testing: If you have seen a website's code, you know that it's not easy to maintain and test all the scripts manually and keep them working. A tester may miss a few checks, but automated tests won't. Manual testing is error prone and slow.
- Lower cost of overall development: With TDD, the amount of debugging is significantly decreased. You develop some code and run the tests; if they fail, redoing the development is significantly faster than debugging and fixing later. TDD aims at detecting defects and correcting them at an early stage, which costs much less than detecting and correcting them at a later stage or post release. Also, with the help of tools and test runners such as Karma, JSTestDriver, and so on, running every JavaScript test in a browser is not needed, which saves significant time in validation and verification while development goes on.
- Increased productivity: Apart from the time and financial benefits, TDD helps to increase productivity, since the developer becomes more focused and tends to write quality code that passes the tests and fulfills the requirements.
- Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything fails with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there will be thousands of test cases which guarantee that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of the system, since these tests act as examples of how the code works.
- Improved quality and reduced bugs: Complex code invites bugs. When developers change anything in neat and simple code, they tend to introduce few or no bugs at all. They tend to focus on purpose and write code to fulfill the requirement.
- Keeps technical debt to a minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big part of what increases technical debt for a software project. Since TDD encourages you to write tests first, and since well-written tests can act as documentation, TDD keeps the technical debt for these to a minimum. As Wikipedia puts it, technical debt can be defined as tasks to be performed before a unit can be called complete; if the debt is not repaid, interest adds up and makes it harder to make changes at a later stage. More about technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt.

Myths

Along with the benefits, TDD has some myths as well. Let's check a few of them:

- Complete code coverage: TDD enforces writing tests first, developers write the minimum amount of code to pass the tests, and almost 100% code coverage is achieved. But that does not guarantee that nothing is missed and the code is bug free. Code coverage tools do not cover all the paths; there can be infinite possibilities in loops. Of course, it's neither possible nor feasible to check all the paths, but a developer is supposed to take care of the major and critical paths. A developer is expected to cover business logic, flow, and process code most of the time. There is no need to test integration points, setter/getter methods for properties, configurations, the UI, and so on; mocking and stubbing are to be used for integrations.
- No need to debug the code: Though test-first development makes one think that debugging is not needed, that's not always true. You need to know the state of the system when a test fails. That will help you to correct and extend the code.
- No need for QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects and integration defects are more likely to be caught by QA. Even excellent developers make errors. QA will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects.
- I can code faster without tests and still validate for zero defects: This may hold true for very small software and websites, where the code base is small and writing test cases may increase the overall time of development and delivery of the product. But for bigger products, TDD helps a lot to identify defects at a very early stage and gives a chance to correct them at very low cost. As discussed earlier for the cost of fixing defects across phases and testing types, the cost of correcting a defect increases with time.
Ultimately, whether TDD is required for a project depends on context.

- TDD ensures a good design and architecture: TDD encourages developers to write quality code, but it is not a replacement for good design practice. Will a team of developers alone be enough to ensure a stable and scalable architecture? Design should still be done by following standard practices.
- You need to write all tests first: Another myth says that you need to write all the tests first and then the actual production code. In practice, an iterative approach is used: write some tests, then some code, run the tests, fix the code, run the tests, write more tests, and so on. With TDD, you always test parts of the software as you keep developing.

There are many myths, and covering all of them is not possible. The point is, TDD offers developers a better opportunity to deliver quality code, and it helps organizations deliver close to zero-defect products.

Summary

In this article, you learned what TDD is, along with its benefits and myths.

Resources for Article:

Further resources on this subject: Understanding outside-in [article], Jenkins Continuous Integration [article], Understanding TDD [article]


The Importance of Securing Web Services

Packt
23 Jul 2014
10 min read
In the upcoming sections of this article we are going to briefly explain several concepts about the importance of securing web services.

The importance of security

The management of security is one of the main aspects to consider when designing applications. No matter what, neither the functionality nor the information of an organization can be exposed to all users without any kind of restriction. Consider the case of a human resources application that allows you to consult the wages of employees. If the company manager needs to know the salary of one of the employees, that is not a problem. But in the same context, imagine that one of the employees wants to know the salaries of their colleagues; if access to this information is completely open, it could generate problems among employees with varied salaries.

Security management options

Java provides some options for security management. We will explain some of them here and demonstrate how to implement them. All authentication methods are practically based on the delivery of credentials from the client to the server. There are several methods to perform this:

- BASIC authentication
- DIGEST authentication
- CLIENT CERT authentication
- Using API keys

Security management in applications built with Java, including those with RESTful web services, always relies on JAAS.

Basic authentication by providing user credentials

This is possibly one of the most used techniques in all kinds of applications. Before gaining access to functionality in the application, the user is asked to enter a username and password; both are validated in order to verify that the credentials are correct (that they belong to an application user). We are 99 percent sure you have used this technique at least once, maybe through a customized mechanism, or, if you used the JEE platform, probably through JAAS. This kind of control is known as basic authentication.

In order to have a working example, let's start our application server, JBoss AS 7. Then go to the bin directory and execute the file add-user.bat (the .sh file for UNIX users) to create a new user. As a result, we will have a new user in the JBOSS_HOME/standalone/configuration/application-users.properties file.

JBoss already ships with a default security domain called other, which uses the information stored in the file we just mentioned in order to authenticate. Right now we are going to configure the application to use this security domain. Inside the WEB-INF folder of the resteasy-examples project, let's create a file named jboss-web.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>other</security-domain>
</jboss-web>

All right, now let's configure the web.xml file in order to add the security constraints.
The following block of code shows what you should add to web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
    http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <!-- Roles -->
  <security-role>
    <description>Any role</description>
    <role-name>*</role-name>
  </security-role>
  <!-- Resource / Role Mapping -->
  <security-constraint>
    <display-name>Area secured</display-name>
    <web-resource-collection>
      <web-resource-name>protected_resources</web-resource-name>
      <url-pattern>/services/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <description>User with any role</description>
      <role-name>*</role-name>
    </auth-constraint>
  </security-constraint>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

From a terminal, let's go to the home folder of the resteasy-examples project and execute mvn jboss-as:redeploy. Now we are going to test our web service as we did earlier, using SoapUI, by performing a request with the POST method. SoapUI shows us an HTTP 401 error; this means that the request wasn't authorized, because we performed the request without delivering the credentials to the server.

Digest access authentication

This authentication method makes use of a hash function to encrypt the password entered by the user before sending it to the server. This obviously makes it much safer than the BASIC authentication method, in which the user's password travels as plain text that can easily be read by whoever intercepts it. To overcome such drawbacks, digest MD5 authentication applies a hash function to the combination of the username, the realm of the application's security, and the password. The result is an encrypted string that can hardly be interpreted by an intruder.

In order to do this, we need to generate a password for our example user using the parameters mentioned above: username, realm, and password. Let's go into the JBOSS_HOME/modules/org/picketbox/main/ directory from a terminal and type the following:

java -cp picketbox-4.0.7.Final.jar org.jboss.security.auth.callback.RFC2617Digest username MyRealmName password

We will obtain the following result:

RFC2617 A1 hash: 8355c2bc1aab3025c8522bd53639c168

Through this process we obtain the encrypted password to use in our password storage file (JBOSS_HOME/standalone/configuration/application-users.properties). We must replace the password for the user username in the file; we have to replace it because the old password doesn't contain the realm name information of the application. Next, we have to modify the web.xml file, change the value of the auth-method tag to DIGEST, and set the application realm name, this way:

<login-config>
  <auth-method>DIGEST</auth-method>
  <realm-name>MyRealmName</realm-name>
</login-config>

Now, let's create a new security domain in JBoss, so we can manage the DIGEST authentication mechanism.
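Before wiring up the new security domain, it may help to see roughly what the two schemes put on the wire. BASIC authentication sends the credentials merely Base64-encoded (not encrypted) in the Authorization header, while DIGEST sends a hash; RFC 2617 defines the A1 hash as MD5(username:realm:password). Here is a sketch of the BASIC header and of computing A1, assuming Node.js's crypto module rather than the picketbox tool used above:

// BASIC: for the credentials admin/admin, the client sends
// Authorization: Basic YWRtaW46YWRtaW4=   (Base64 of "admin:admin")

// DIGEST: the A1 hash as defined by RFC 2617
var crypto = require('crypto');

function a1Hash(username, realm, password) {
  return crypto.createHash('md5')
               .update(username + ':' + realm + ':' + password)
               .digest('hex');
}

console.log(a1Hash('username', 'MyRealmName', 'password'));

This is also why the stored password had to be regenerated: the hash bakes in the realm name, so a password stored without it cannot be verified under DIGEST.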
In the file JBOSS_HOME/standalone/configuration/standalone.xml, in the <security-domains> section, let's add the following entry:

<security-domain name="domainDigest" cache-type="default">
  <authentication>
    <login-module code="UsersRoles" flag="required">
      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
      <module-option name="hashAlgorithm" value="MD5"/>
      <module-option name="hashEncoding" value="RFC2617"/>
      <module-option name="hashUserPassword" value="false"/>
      <module-option name="hashStorePassword" value="true"/>
      <module-option name="passwordIsA1Hash" value="true"/>
      <module-option name="storeDigestCallback" value="org.jboss.security.auth.callback.RFC2617Digest"/>
    </login-module>
  </authentication>
</security-domain>

Finally, in the application, change the security domain name in the jboss-web.xml file as shown in the following snippet:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>java:/jaas/domainDigest</security-domain>
</jboss-web>

Now, restart the application server and redeploy the application on JBoss. To do this, execute the following command in the terminal:

mvn jboss-as:redeploy

Authentication through certificates

This is a mechanism in which a trust agreement is established between the server and the client through certificates. They must be signed by an agency established to ensure that the certificate presented for authentication is legitimate; this agency is known as a certificate authority (CA). This security mechanism requires that our application server use HTTPS as the communication protocol, so we must enable HTTPS. Let's add a connector in the standalone.xml file; look for the following line:

<connector name="http"

Then add the following block of code:

<connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" secure="true">
  <ssl password="changeit"
    certificate-key-file="${jboss.server.config.dir}/server.keystore"
    verify-client="want"
    ca-certificate-file="${jboss.server.config.dir}/server.truststore"/>
</connector>

Next we add the security domain:

<security-domain name="RequireCertificateDomain">
  <authentication>
    <login-module code="CertificateRoles" flag="required">
      <module-option name="securityDomain" value="RequireCertificateDomain"/>
      <module-option name="verifier" value="org.jboss.security.auth.certs.AnyCertVerifier"/>
      <module-option name="usersProperties" value="${jboss.server.config.dir}/my-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/my-roles.properties"/>
    </login-module>
  </authentication>
  <jsse keystore-password="changeit"
    keystore-url="file:${jboss.server.config.dir}/server.keystore"
    truststore-password="changeit"
    truststore-url="file:${jboss.server.config.dir}/server.truststore"/>
</security-domain>

As you can see, we need two files: my-users.properties and my-roles.properties; both are empty and located in the JBOSS_HOME/standalone/configuration path.
We are going to add the <user-data-constraint> tag to the web.xml this way:

<security-constraint>
  ...
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Then, change the authentication method to CLIENT-CERT:

<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>

And finally change the security domain in the jboss-web.xml file in the following way:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>RequireCertificateDomain</security-domain>
</jboss-web>

Now, restart the application server and redeploy the application with Maven:

mvn jboss-as:redeploy

API keys

With the advent of cloud computing, it is not difficult to think of applications that integrate with many others available in the cloud. Right now, it's easy to see how applications interact with Flickr, Facebook, Twitter, Tumblr, and so on through the use of API keys. This authentication method is used primarily when we need to authenticate from another application but do not want to access the private user data hosted in that application; on the contrary, if you want to access that information, you must use OAuth. Today it is very easy to get an API key. You simply sign up with one of the many cloud providers and obtain credentials, consisting of a key and a secret, which are needed to interact with the authenticating service provider. Keep in mind that when creating an API key, you accept the terms of the supplier, which clearly state what we can and cannot do, protecting the provider against abusive users trying to affect its services.

Summary

In this article, we went through some models of authentication. We can apply them to any web service functionality we created. As you have seen, it is important to choose the correct security management; otherwise, information is exposed and can easily be intercepted and used by third parties. Therefore, tread carefully.

Resources for Article:

Further resources on this subject: RESTful Java Web Services Design [Article], Debugging REST Web Services [Article], RESTful Services JAX-RS 2.0 [Article]


Moodle for Online Communities

Packt
14 Apr 2014
9 min read
Now that you're familiar with the ways to use Moodle for different types of courses, it is time to take a look at how groups of people can come together as an online community and use Moodle to achieve their goals. For example, individuals who have the same interests and want to discuss and share information in order to transfer knowledge can do so very easily in a Moodle course that has been set up for that purpose.

There are many practical uses of Moodle for online communities. For example, members of an association or employees of a company can come together to achieve a goal and finish a task. In this case, Moodle provides a perfect place to interact, collaborate, and create a final project or achieve a task. Online communities can also be focused on learning and achievement, and Moodle can be a perfect vehicle for encouraging online communities to support each other to learn, take assessments, and display their certificates and badges. Moodle is also a good platform for a Massive Open Online Course (MOOC). In this article, we'll create flexible Moodle courses that are ideal for online communities and that can be modified easily to create opportunities to harness the power of individuals in many different locations to teach and learn new knowledge and skills.

In this article, we'll show you the benefits of Moodle and how to use it for the following online communities and purposes:

- Knowledge-transfer-focused communities
- Task-focused communities
- Communities focused on learning and achievement

Moodle and online communities

It is often easy to think of Moodle as a learning management system used primarily by organizations for their students or employees. The community tends to be well defined, as it usually consists of students pursuing a common end, employees of a company, or members of an association or society. However, there are many informal groups and communities that come together because they share interests, the desire to gain knowledge and skills, the need to work together to accomplish tasks, and the wish to let people know that they've reached milestones and acquired marketable abilities.

For example, an online community may form around the topic of climate change. The group, which may use social media to communicate, would like to share information and get in touch with like-minded individuals. While it's true that they can connect via Facebook, Twitter, and other social media formats, they may lack a platform that gives a one-stop-shopping solution. Moodle makes it easy to share documents, videos, maps, graphics, audio files, and presentations. It also allows the users to interact with each other via discussion forums. Because we can use but not control social networks, it's important to be mindful of security issues. For that reason, Moodle administrators may wish to consider ways to back up or duplicate key posts or insights within the Moodle installation so they can be preserved and stored.

In another example, individuals may come together to accomplish a specific task. For example, a group of volunteers may come together to organize a 5K run fundraiser for epilepsy awareness. For such a case, Moodle has an array of activities and resources that make it possible to collaborate in the planning and publicity of the event, and even in the creation of post-event summary reports and press releases.
Finally, let's consider a person who may wish to ensure that potential employers know the kinds of skills they possess. They can display the certificates they've earned by completing online courses, as well as their badges, digital certificates, mentions in high-achievers lists, and other gamified evidence of achievement. There are also MOOCs, which bring together instructional materials, guided group discussions, and automated assessments. With its features and flexibility, Moodle is a perfect platform for MOOCs.

Building a knowledge-based online community

For our knowledge-based online community, let's consider a group of individuals who would like to know more about climate change and its impact. To build a knowledge-based online community, these are the steps we need to perform:

1. Choose a mobile-friendly theme.
2. Customize the appearance of your site.
3. Select resources and activities.

Moodle makes it possible for people from all locations and affiliations to come together and share information in order to achieve a common objective. We will see how to do this in the following sections.

Choosing the best theme for your knowledge-based Moodle online communities

As many of the users in the community access Moodle using smartphones, tablets, laptops, and desktops, it is a good idea to select a theme that is responsive, which means that it will be automatically formatted to display properly on all devices. You can learn more about themes for Moodle, review them, find out about the developers, read comments, and then download them at https://moodle.org/plugins/browse.php?list=category&id=3.

There are many good responsive themes, such as the popular Buckle theme and the Clean theme, that also allow you to customize them. These are core and contributed themes, which is to say that they were created by developers and are either part of the Moodle installation or available for free download. If you have Moodle 2.5 or a later version installed, your installation of Moodle includes many responsive themes. If it does not, you will need to download and install a theme. In order to select an installed theme, perform the following steps:

1. In the Site administration menu, click on the Appearance menu.
2. Click on Themes.
3. Click on Theme selector.
4. Click on the Change theme button.
5. Review all the themes.
6. Click on the Use theme button next to the theme you want to choose, and then click on Continue.

Using the best settings for knowledge-based Moodle online communities

There are a number of things you can do to customize the appearance of your site so that it is very functional for knowledge-transfer-based Moodle online communities. The following is a brief checklist of items:

- Select Topics format under the Course format section in the Course default settings window. By selecting topics, you'll be able to organize your content around subjects.
- Use the General section, which is included as the first topic in all courses. It has the News forum link. You can use this for announcements highlighting resources shared by the community.
- Include the name of the main contact along with his/her photograph and a brief biographical sketch in News forum. You'll create the sense that there is a real go-to person who is helping guide the endeavor.
- Incorporate social media to encourage sharing and dissemination of new information. Brief updates are very effective, so you may consider including a Twitter feed by adding your Twitter account as one of your social media sites.
Even though your main topic of discussion may contain hundreds of subtopics that are of great interest, when you create your Moodle course it's best to limit the number of subtopics to four or five. If you have too many choices, your users will be too scattered and will not have a chance to connect with each other. Think of your Moodle site as a meeting point. Do you want to have too many breakout sessions and rooms, or do you want to have a main networking site? Think of how you would like to encourage users to mingle and interact.

Selecting resources and activities for a knowledge-based Moodle online community

The following are the items to include if you want to configure Moodle so that it is ideal for individuals who have come together to gain knowledge on a specific topic or problem:

- Resources: Be sure to include multiple types of files: documents, videos, audio files, and presentations.
- Activities: Include Quiz and other such activities that allow individuals to test their knowledge.
- Communication-focused activities: Set up a discussion forum to enable community members to post their thoughts and respond to each other.

The key to creating an effective Moodle course for knowledge-transfer-based communities is to give the individual members a chance to post critical and useful information, no matter what the format or the size, and to accommodate social networks.

Building a task-based online community

Let's consider a group of individuals who are getting together to plan a fundraising event. They need to plan activities, develop materials, and prepare a final report. Moodle can make it fairly easy for people to work together to plan events, collaborate on the development of materials, and share information for a final report.

Choosing the best theme for your task-based Moodle online communities

If you're working with volunteers or people who are using Moodle just for the completion of tasks, you may have quite a few Moodle newbies. Since these people will be unfamiliar with navigating Moodle and finding the places they need to go, you'll need a theme that is clear, attention-grabbing, and includes easy-to-follow directions. There are a few themes that are ideal for collaborations and multiple functional groups. We highly recommend the Formal white theme because it is highly customizable from the Theme settings page. You can easily customize the background, text colors, logos, font size, font weight, block size, and more, enabling you to create a clear, friendly, and brand-recognizable site. Formal white is a standard theme, kept up to date, and can be used on many versions of Moodle. You can learn more about the Formal white theme and download it from the Moodle themes directory mentioned earlier.

In order to customize the appearance of your entire site, perform the following steps:

1. In the Site administration menu, click on Appearance.
2. Click on Themes.
3. Click on Theme settings.
4. Review all the theme settings.
5. Enter the custom information in each box.

Moodle 1.9 Theme Design: Customizing the Header and Footer (Part 2)

Packt
23 Apr 2010
6 min read
Customizing the footer

Obviously, the second thing that we are going to do after making changes to our Moodle header file is to carry on and change the footer.html file. The following tasks will be slightly easier than changing the header logo and title text within our Moodle site, as there is much less code and subsequently much less to change.

Removing the Moodle logo

The first thing that we will notice about the footer in Moodle is that it has the Moodle logo on the front page of your Moodle site and a Home button on all other pages. In addition to this, there is the login info text that shows who is logged in and a link to log out. More often than not, Moodle themers will want to remove the Moodle logo so that they can give their Moodle site its own branding. So let's get stuck in with the next exercise, but don't forget that this logo credits the Moodle community.

Time for action – deleting the Moodle logo

1. Navigate to your mytheme directory, right-click on the footer.html file, and choose Open With | WordPad.
2. Find the following two lines of code:

echo $loggedinas;
echo $homelink;

3. Comment out the second line using a PHP comment:

echo $loggedinas;
/* echo $homelink; */

4. Save the footer.html file and refresh your browser window. You should now see the footer without the Moodle logo.

What just happened?

In this exercise, we learned which part of the PHP code in the footer.html file controls where the Moodle logo appears in the Moodle footer. We also learned how to comment out the PHP code that controls the rendering of the Moodle logo so that it does not appear. You could try to put the Moodle logo back if you want.

Removing the login info text and link

Now that we have removed the Moodle logo, which of course is completely up to you, you might also want to remove the login info link. This link is used exactly like the one in the top right-hand corner of your Moodle site, insofar as it acts as a place where you can log in and log out, and it provides details of who you are logged in as. The only thing to consider here is that if you decide to remove the login info link from the header.html file and also remove it from the footer, you will have no easy way of logging in or out of Moodle. So it is always wise to leave it in either the header or the footer. You might also consider the advantages of having it here, as some Moodle pages, such as large courses, are very long; once the user has scrolled way down the page, he/she has a place to log out if needed.

The following task is very simple and requires you to go through similar steps as the "deleting the logo" exercise. The only difference is that you will comment out a different line of code.

Time for action – deleting the login info text

1. Navigate to your mytheme directory, right-click on the footer.html file, and choose Open With | WordPad (or an editor of your choice).
2. Find the following two lines of code:

echo $loggedinas;
echo $homelink;

3. Comment out the first line using a PHP comment, as shown below:

/* echo $loggedinas; */
echo $homelink;

4. Save the footer.html file and refresh your browser window. You will see the footer without the Moodle logo or the login info link.

What just happened?

In this task, we learned which parts of the PHP code in footer.html control whether the Moodle login info text appears in the Moodle footer, similar to the Moodle logo in the previous exercise. We also learned how to comment out the code that controls the rendering of the login info text so that it does not appear.
Have a go hero – adding your own copyright or footer text

The next thing that we are going to do in this article is add some custom footer text where the Moodle logo and the login info text were before we removed them. It's completely up to you what to add in the next exercises. If you would just like to add some text to the footer, then please do. However, as part of the following tasks we are going to add some copyright text and format it using some very basic HTML.

Time for action – adding your own footer text

1. Navigate to your mytheme directory, right-click on the footer.html file, and choose Open With | WordPad.
2. At the very top of the file, paste the following text or choose your own footer text to include:

My School © 2009/10 All rights reserved.

3. Save footer.html and refresh your browser. You will see that your footer text is at the bottom of the page; however, it is aligned to the left, as all text in a browser would be.
4. Open the footer.html file again (if it isn't open already) and wrap the following code around the footer text that you have just added:

<div align="right">My School &copy; 2009/10 All rights reserved</div>

5. Save your footer.html file and refresh your browser. You will see that the text is now aligned to the right.

What just happened?

We just added some very basic footer text to our footer.html file, saved it, and viewed it in our web browser. We have demonstrated here that it is very easy to add our own text to the footer.html file. We have also added some basic HTML formatting to move the text from the left to the right-hand side of the footer. There are other ways to do this, which involve the use of CSS. For instance, we could have given the <div> tag a CSS class and used a CSS selector to align the text to the right.

Have a go hero – adding your own footer logo

Now try to see if you can edit footer.html and add the same logo as you have in header.html into the footer. Remember that you can put the logo code anywhere outside of a PHP code block. So try to copy the header logo code and paste it into footer.html. Finally, based on what we have learned, try to align the logo to the right, as we did with the footer text.
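One possible solution sketch for this hero exercise, reusing the alignment trick from the previous task; the image path below is hypothetical and assumes the logo was copied into an images folder inside the mytheme directory of a standard Moodle 1.9 install:

<div align="right">
  <img src="http://yoursite/theme/mytheme/images/logo.jpg" alt="My School logo" />
</div>

As with the footer text, the surrounding <div> handles the right alignment, and because this is plain HTML it can sit anywhere in footer.html outside the PHP code blocks.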


An overview of web services in Sakai

Packt
06 Jul 2011
16 min read
Connecting to Sakai is straightforward, and simple tasks, such as automatic course creation, take only a few lines of programming effort. There are significant advantages to having web services in the enterprise. If a developer writes an application that calls a number of web services, the application does not need to know the hidden details behind the services; it just needs to agree on what data to send. This loosely couples the application to the services. Later, if you replace one web service with another, programmers do not need to change the code on the application side.

SOAP works well with most organizations' firewalls, as SOAP uses the same protocol as web browsers. System administrators have a tendency to protect an organization's network by closing unused ports to the outside world. This means that most of the time there is no extra network configuration effort required to enable web services. Another simplifying factor is that a programmer does not need to know the details of SOAP or REST, as there are libraries and frameworks that hide the underlying magic. For the Sakai implementation of SOAP, adding a new service is as simple as writing a small amount of Java code within a text file, which is then compiled automatically and run the first time the service is called. This is great for rapid application development and deployment, as the system administrator does not need to restart Sakai for each change. Just as importantly, the Sakai services use the well-known libraries from the Apache Axis project.

SOAP is an XML message-passing protocol that, in the case of Sakai, sits on top of the Hyper Text Transfer Protocol (HTTP). HTTP is the protocol used by web browsers to obtain web pages from a server. The client sends messages in XML format to a service, including the information that the service needs; the service then returns a message with the results or an error message.

The architects introduced SOAP-based web services to Sakai first, adding RESTful services later. Unlike SOAP, instead of sending XML via HTTP posts to one URL that points to a service, REST sends to a URL that includes information about the entity, such as a user, with which the client wishes to interact. For example, a REST URL for viewing an address book item could look similar to http://host/direct/addressbook_item/15. Applying URLs in this way makes for understandable, human-readable address spaces, and this more intuitive approach simplifies coding. Further, SOAP XML passing requires that the client and the server parse the XML, and at times the parsing effort is expensive in CPU cycles and response times.

The Entity Broker is an internal service that makes life easier for programmers and helps them manipulate entities. Entities in Sakai are managed pieces of data such as representations of courses, users, grade books, and so on. In the newer versions of Sakai, the Entity Broker has the power to expose entities as RESTful services. In contrast, for SOAP services, if you want a new service, you need to write it yourself. Over time, the Entity Broker exposes more and more entities RESTfully, delivering more hooks for free to integrate with other enterprise systems. Both SOAP and REST services sit on top of the HTTP protocol.

Protocols

This section explains how web browsers talk to servers in order to gather web pages.
It also explains how to use the telnet command and a visual tool called TCPMON (http://ws.apache.org/commons/tcpmon/tcpmontutorial.html) to gain insight into how web services and Web 2.0 technologies work.

Playing with Telnet

It turns out that message passing occurs via text commands between the browser and the server. Web browsers use HTTP to get web pages and embedded content from the server, and to send form information to the server. HTTP talks between the client and server via text (7-bit ASCII) commands. When humans talk with each other, they have a wide vocabulary; HTTP uses fewer than twenty words. You can experiment with HTTP directly by using a Telnet client to send your commands to a web server. For example, if your demonstration Sakai instance is running on port 8080, the following commands will get you the login page:

telnet localhost 8080
GET /portal/login

The GET command does what it sounds like and gets a web page. Forms can use the GET verb to send data at the end of the URL. For example, GET /portal/login?name=alan&age=15 sends the variables name=alan and age=15 to the server.

Installing TCPMON

You can use the TCPMON tool to view requests and responses from a web browser such as Firefox. One of TCPMON's abilities is that it can act as an invisible man in the middle, recording the messages between the web browser and the server. Once set up, the requests sent from the browser go to TCPMON, which passes each request on to the server. The server passes back a response, and then TCPMON, a transparent proxy, returns the response to the web browser. This allows us to look at all requests and responses graphically.

First, you set up TCPMON to listen on a given port number (by convention, normally port 8888), and then you configure your web browser to send its requests through the proxy. When you type the address of a given page into the web browser, instead of going directly to the relevant server, the browser sends the request to the proxy, which passes it on and relays the response back. TCPMON displays both the requests and the responses in a window.

You can download TCPMON from the URL given at the start of this section. After downloading and unpacking it, you can run, from within the build directory, either tcpmon.bat for the Windows environment or tcpmon.sh for the UNIX/Linux environment. To configure a proxy, click on the Admin tab, set the Listen Port to 8888, and select the Proxy radio button. After that, clicking on Add will create a new tab, where the requests and responses will be displayed later.

Your favorite web browser now has to recognize the newly set-up proxy. For Firefox 3, you can do this by selecting the menu option Edit/Preferences, choosing the Advanced tab and then the Network tab, and setting the proxy options: HTTP proxy to 127.0.0.1 and the port number to 8888. If you do this, you will need to ensure that the No proxies text input is blank. Clicking on the OK button enables the new settings. To use the proxy from within Internet Explorer 7 for a Local Area Network (LAN), you can edit the dialog box found under Tools | Internet Options | Connections | LAN settings.

Once the proxy is working, typing http://localhost:8080/portal/login in the address bar will seamlessly return the login page of your local Sakai instance. Otherwise, you will see an error message similar to Proxy Server Refused Connection in Firefox or Internet Explorer cannot display the webpage.
To turn off the proxy settings, simply select the No Proxies radio box and click on OK for Firefox 3; for Internet Explorer 7, unselect the Use a proxy server for the LAN tick box and click on OK.

Requests and returned status codes

When TCPMON is running a proxy on port 8888, it allows you to view the requests from the browser and the response in an extra tab, as shown in the following screenshot. Notice the extra information that the browser sends as part of the request. HTTP/1.1 defines the protocol and version level, and the lines below GET are the header variables. The User-Agent defines which client sends the request. The Accept headers tell the server what the capabilities of the browser are, and the Cookie header defines the value stored in a cookie.

HTTP is stateless, in principle; each response is based only on the current request. However, to get around this, persistent information can be stored in cookies. Web browsers normally store their representation of a cookie as a little text file or in a small database on the end user's computer. Sakai uses the supporting features of a servlet container, such as Tomcat, to maintain state in cookies. A cookie stores a session ID, and when the server sees the session ID, it can look up the request's server-side state. This state contains information such as whether the user is logged in, or what he or she has ordered. The web browser deletes the local representation of the cookie each time the browser closes. A cookie that is deleted when a web browser closes is known as a session cookie.

The server response starts with the protocol followed by a status number. HTTP/1.1 200 OK tells the web browser that the server is using HTTP version 1.1 and was able to return the requested web page successfully. 2xx status codes imply success. 3xx status codes imply some form of redirection and tell the web browser where to try to pick up the requested resource. 4xx status codes are for client errors, such as malformed requests or lack of permission to obtain the resource. 4xx states are fertile grounds for security managers to look in log files for attempted hacking. 5xx status codes mostly have to do with a failure of the server itself and are mostly of interest to system administrators and programmers during the debugging cycle. In most cases, 5xx status numbers are about either high server load or a broken piece of code. Sakai is changing rapidly, and even with the most vigorous testing, there are bound to be occasional hiccups. You will find accurate details of the full range of status codes at http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.

Another important part of the response is Content-Type, which tells the web browser which type of material the response is returning, so the browser knows how to handle it. For example, the web browser may want to run a plug-in for video types and display text natively. Content-Length in characters is normally also given. After the header information is finished, there is a newline followed by the content itself.

Web browsers interpret any redirects that are returned by sending extra requests. Web browsers also interpret any HTML pages and make multiple requests for resources such as JavaScript files and images. Modern browsers do not wait until the server returns all the requests, but render the HTML page live as the server returns the parts. The GET verb is not very efficient for posting a large amount of data, as the URL has a length limit of around 2000 characters.
Further, the end user can see the form data, and the browser may encode entities such as spaces to make the URL unreadable. There is also a security aspect: if you type passwords in forms using GET, others may see your password or other details. This is not a good idea, especially at Internet cafés, where the next user who logs on can see the password in the browsing history. The POST verb is a better choice.

Let us take as an example the Sakai demonstration login page (http://localhost:8080/portal/login). The login page itself contains a form tag that points to the relogin page with the POST method:

<form method="post" action="http://localhost:8080/portal/relogin" enctype="application/x-www-form-urlencoded">

Note that the HTML tag also defines the content type. Key features of the POST request compared to GET are:

- The form values are stored as content after the header values
- There is a newline between the end of the header and the data
- The request declares the amount of data it carries by the use of the Content-Length header value

The essential POST values for a login form with user admin (eid=admin) and password admin (pw=admin) will look like:

POST http://localhost:8080/portal/relogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

eid=admin&pw=admin&submit=Login

POST requests can contain much more information than GET requests, and the requests hide the values from the address bar of the web browser. This is not secure. The header is just as visible as the URL, so POST values are neither hidden nor secure. The only viable solution is for your web browser to encrypt your transactions using SSL/TLS (http://www.ietf.org/rfc/rfc2246.txt) for security, and this occurs every time you connect to a server using an HTTPS URL.

SOAP

Sakai uses the Apache Axis framework, which the developers have configured to accept SOAP calls via POST. SOAP sends messages in a specific XML format with the Content-Type, otherwise known as MIME type, application/soap+xml. A programmer does not need to know more than that, as the client libraries take care of the majority of the excruciating low-level details. An example SOAP message generated by the Perl module SOAP::Lite (http://www.soaplite.com/) for creating a login session in Sakai will look like the following POST data:

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <soap:Body>
    <login>
      <c-gensym3 xsi:type="xsd:string">admin</c-gensym3>
      <c-gensym5 xsi:type="xsd:string">admin</c-gensym5>
    </login>
  </soap:Body>
</soap:Envelope>

There is an envelope with a body containing data for the service to consume. The important point to remember is that both the client and the server have to be able to parse the specific XML schema. SOAP messages can include extra security features, but Sakai does not require these. The architects expect organizations to encrypt web services using SSL/TLS. The last extra SOAP-related complexity is the Web Service Description Language (http://www.w3.org/TR/wsdl). Web services may change location or exist in multiple locations for redundancy. The service writer can define the location of the services and the data types involved with those services in another file, in XML format.

JSON

Also worth mentioning is JavaScript Object Notation (JSON), which is another popular format passed using HTTP.
When web developers realized that they could force browsers to load parts of a web page at a time, it significantly improved the quality of the web browsing experience for the end user. This asynchronous loading enables all kinds of whiz-bang features, such as typing in a search term and choosing from a set of search term completions before pressing the Submit button. Asynchronous loading delivers more responsive and richer web pages that feel more like traditional desktop applications than plain old web pages. JSON is one of the formats of choice for passing asynchronous requests and responses. The asynchronous communication normally occurs through HTTP GET or POST, but with a specific content structure that is designed to be human readable and script-language parser-friendly. JSON calls have the file extension .json as part of the URL. As mentioned in RFC 4627, an example image object communicated in JSON looks like:

{
    "Image": {
        "Width": 800,
        "Height": 600,
        "Title": "View from 15th Floor",
        "Thumbnail": {
            "Url": "http://www.example.com/image/481989943",
            "Height": 125,
            "Width": 100
        },
        "IDs": [116, 943, 234, 38793]
    }
}

By blurring the boundaries between client and server, a lot of the presentation and business logic is moved to the client side in scripting languages such as JavaScript. The scripting language orchestrates the loading of parts of pages and the generation of widget sets. Frameworks such as jQuery (http://jquery.com/) and MyFaces (http://myfaces.apache.org/) significantly ease the client-side programming burden.

REST

To understand REST, you need to understand the other verbs in HTTP (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html). The full HTTP set is OPTIONS, GET, HEAD, POST, PUT, DELETE, and TRACE. The HEAD verb returns from the server only the headers of the response, without the content, and is useful for clients that want to see whether the content has changed since the last request. PUT requests that the content in the request be stored at the particular location mentioned in the request. DELETE is for deleting the entity. REST uses the URL of the request to route to the resource, and maps the HTTP verbs to actions: in general, a POST request is for creating an item, PUT for updating an item, DELETE for deleting an item, and GET for returning information on the item. In SOAP, you point directly towards the service the client calls, or indirectly via the web service description. However, in REST, part of the URL describes the resource or resources you wish to work with. For example, a hypothetical address book application that lists all e-mail addresses in HTML format would look similar to the following:

GET /email

To list the addresses in XML format or JSON format:

GET /email.xml
GET /email.json

To get the first e-mail address in the list:

GET /email/1

To create a new e-mail address, of course remembering to add the rest of the e-mail details to the body of the request:

POST /email

In addition, to delete address 5 from the list, use the following command:

DELETE /email/5

To obtain address 5 in other formats such as JSON or XML, use file extensions at the end of the URL, for example:

GET /email/5.json
GET /email/5.xml

RESTful services are intuitively more descriptive than SOAP services, and they enable easy switching of the format from HTML to JSON to fuel the dynamic and asynchronous loading of websites.
Due to the direct use of HTTP verbs by REST, this methodology also fits well with the most common application type: CRUD (Create, Read, Update, and Delete) applications, such as the site or user tools within Sakai. Now that we have discussed the theory, in the next section we shall discuss which Sakai-related SOAP services already exist.
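Before moving on, a short sketch may help make the REST style concrete. The following Java fragment fetches a single entity over plain HTTP, in the spirit of the address book example above; the localhost URL and the .json entity path are assumptions for illustration, not a definitive description of a particular Sakai release:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestGetExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical Entity Broker style URL; adjust host, entity, and ID
        URL url = new URL("http://localhost:8080/direct/addressbook_item/15.json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");               // REST verb: read the resource
        conn.setRequestProperty("Accept", "application/json");

        System.out.println("Status: " + conn.getResponseCode()); // for example, 200
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);           // raw JSON body of the entity
            }
        }
        conn.disconnect();
    }
}

Switching the extension from .json to .xml in the URL would request the same entity in XML, which is exactly the kind of format switching described above.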

Developing a Web Service with CXF

Packt
07 Jan 2010
9 min read
In this article we will study the sample Order Processing Application and discuss the following points:

- Developing a service
- Developing a client

The Order Processing Application

The objective of the Order Processing Application is to process a customer order. The order process functionality will generate the customer order, thereby making the order valid and approved. A typical scenario will be a customer making an order request to buy a particular item. The purchase department will receive the order request from the customer and prepare a formal purchase order. The purchase order will hold the details of the customer, the name of the item to be purchased, the quantity, and the price. Once the order is prepared, it will be sent to the Order Processing department for the necessary approval. If the order is valid and approved, then the department will generate a unique order ID and send it back to the Purchase department. The Purchase department will communicate the order ID back to the customer. For simplicity, we will look at the following use cases:

- Prepare an order
- Process the order

The client application will prepare an order and send it to the server application through a business method call. The server application will contain a web service that will process the order and generate a unique order ID. The generation of the unique order ID will signify order approval. In real-world applications, a unique order ID is always accompanied by the date the order was approved. However, in this example we chose to keep it simple by only generating the order ID.

Developing a service

Let's look specifically at how to create an Order Processing web service and then register it as a Spring bean using a JAX-WS frontend. The Sun-based JAX-WS specification can be found at the following URL: http://jcp.org/aboutJava/communityprocess/final/jsr224/index.html

The JAX-WS frontend offers two ways of developing a web service—Code-first and Contract-first. We will use the Code-first approach; that is, we will first create a Java class and convert this into a web service component. The first set of tasks will be to create the server-side components. In web service terminology, Code-first is termed the Bottoms Up approach, and Contract-first is referred to as the Top Down approach. To achieve this, we typically perform the following steps:

- Create a Service Endpoint Interface (SEI) and define a business method to be used with the web service.
- Create the implementation class and annotate it as a web service.
- Create beans.xml and define the service class as a Spring bean using a JAX-WS frontend.

Creating a Service Endpoint Interface (SEI)

Let's first create the SEI for our Order Processing Application. We will name our SEI OrderProcess. The following code illustrates the OrderProcess SEI:

package demo.order;

import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public interface OrderProcess {
    @WebMethod
    String processOrder(Order order);
}

As you can see from the preceding code, we created a Service Endpoint Interface named OrderProcess. The SEI is just like any other Java interface. It defines an abstract business method processOrder. The method takes an Order bean as a parameter and returns an order ID String value. The goal of the processOrder method is to process the order placed by the customer and return the unique order ID. One significant thing to observe is the @WebService annotation. The annotation is placed right above the interface definition.
It signifies that this interface is not an ordinary interface but a web service interface. This interface is known as a Service Endpoint Interface and will have a business method exposed as a service method to be invoked by the client. The @WebService annotation is part of the JAX-WS annotation library. JAX-WS provides a library of annotations to turn Plain Old Java classes into web services and specifies detailed mapping from a service defined in WSDL to the Java classes that will implement that service. The javax.jws.WebService annotation also comes with attributes that completely define a web service. For the moment we will ignore these attributes and proceed with our development. The javax.jws.@WebMethod annotation is optional and is used for customizing the web service operation. The @WebMethod annotation provides the operation name and the action elements, which are used to customize the name attribute of the operation and the SOAP action element in the WSDL document. The following code shows the Order class:

package demo.order;

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Order")
public class Order {
    private String customerID;
    private String itemID;
    private int qty;
    private double price;

    // Constructor
    public Order() {
    }

    public String getCustomerID() {
        return customerID;
    }

    public void setCustomerID(String customerID) {
        this.customerID = customerID;
    }

    public String getItemID() {
        return itemID;
    }

    public void setItemID(String itemID) {
        this.itemID = itemID;
    }

    public int getQty() {
        return qty;
    }

    public void setQty(int qty) {
        this.qty = qty;
    }

    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }
}

As you can see, we have added an @XmlRootElement annotation to the Order class. The @XmlRootElement is part of the Java Architecture for XML Binding (JAXB) annotation library. JAXB provides data binding capabilities by providing a convenient way to map an XML schema to a representation in Java code. JAXB shields developers from the conversion between the XML schema messages in SOAP messages and Java code, so they do not need to know about XML and SOAP parsing. CXF uses JAXB as the default data binding component. The @XmlRootElement annotation associated with the Order class maps the Order class to the XML root element. The attributes contained within the Order object are by default mapped to @XmlElement. The @XmlElement annotations are used to define elements within the XML. The @XmlRootElement and @XmlElement annotations allow you to customize the namespace and name of the XML element. If no customizations are provided, then the JAXB runtime by default uses the same name as the attribute for the XML element. CXF handles this mapping of Java objects to XML.

Developing a service implementation class

We will now develop the implementation class that will realize our OrderProcess SEI. We will name this implementation class OrderProcessImpl.
The following code illustrates the service implementation class OrderProcessImpl:

@WebService
public class OrderProcessImpl implements OrderProcess {

    public String processOrder(Order order) {
        String orderID = validate(order);
        return orderID;
    }

    /**
     * Validates the order and returns the order ID
     */
    private String validate(Order order) {
        String custID = order.getCustomerID();
        String itemID = order.getItemID();
        int qty = order.getQty();
        double price = order.getPrice();
        if (custID != null && itemID != null && !custID.equals("")
                && !itemID.equals("") && qty > 0 && price > 0.0) {
            return "ORD1234";
        }
        return null;
    }
}

As we can see from the preceding code, our implementation class OrderProcessImpl is pretty straightforward. It also has the @WebService annotation defined above the class declaration. The class OrderProcessImpl implements the OrderProcess SEI. The class implements the processOrder method. The processOrder method checks the validity of the order by invoking the validate method. The validate method checks whether the Order bean has all the relevant properties valid and not null. It is recommended that developers explicitly implement the OrderProcess SEI, though it may not be necessary. This can minimize coding errors by ensuring that the methods are implemented as defined. Next we will look at how to publish the OrderProcess JAX-WS web service using Spring configuration.

Spring-based server bean

What makes CXF the obvious choice as a web service framework is its use of Spring-based configuration files to publish web service endpoints. It is the use of such configuration files that makes the development of web services convenient and easy with CXF. Spring provides a lightweight container which works on the concept of Inversion of Control (IoC) or Dependency Injection (DI) architecture; it does so through the implementation of a configuration file that defines Java beans and their dependencies. By using Spring you can abstract and wire all the class dependencies in a single configuration file. The configuration file is often referred to as an Application Context or Bean Context file. We will create a server-side Spring-based configuration file and name it beans.xml. The following code illustrates the beans.xml configuration file:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jaxws="http://cxf.apache.org/jaxws"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://cxf.apache.org/jaxws
        http://cxf.apache.org/schemas/jaxws.xsd">

    <import resource="classpath:META-INF/cxf/cxf.xml" />
    <import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" />
    <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

    <jaxws:endpoint id="orderProcess"
        implementor="demo.order.OrderProcessImpl"
        address="/OrderProcess" />
</beans>

Let's examine the previous code and understand what it really means. It first defines the necessary namespaces. It then defines a series of <import> statements. It imports cxf.xml, cxf-extension-soap.xml, and cxf-servlet.xml. These files are Spring-based configuration files that define core components of CXF. They are used to kick-start the CXF runtime and load the necessary infrastructure objects, such as the WSDL manager, the conduit manager, the destination factory manager, and so on. The <jaxws:endpoint> element in the beans.xml file specifies the OrderProcess web service as a JAX-WS endpoint. The element is defined with the following three attributes:

- id—specifies a unique identifier for a bean. In this case, jaxws:endpoint is a bean, and the id name is orderProcess.
- implementor—specifies the actual web service implementation class. In this case, our implementor class is OrderProcessImpl.
- address—specifies the URL address where the endpoint is to be published. The URL address must be relative to the web context. For our example, the endpoint will be published using the relative path /OrderProcess.

The <jaxws:endpoint> element signifies that CXF internally uses the JAX-WS frontend to publish the web service. This element definition provides a short and convenient way to publish a web service. A developer does not have to write any Java class to publish a web service.
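Although developing a client is covered separately, a brief sketch can show how the published endpoint might be consumed. The following uses CXF's JaxWsProxyFactoryBean; the host, port, and web context in the address are assumptions about how the application is deployed, not fixed values:

import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

import demo.order.Order;
import demo.order.OrderProcess;

public class OrderProcessClient {
    public static void main(String[] args) {
        // Build a dynamic client proxy from the SEI
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setServiceClass(OrderProcess.class);
        // Assumed deployment URL: web context plus the /OrderProcess endpoint address
        factory.setAddress("http://localhost:8080/orderapp/OrderProcess");
        OrderProcess client = (OrderProcess) factory.create();

        // Prepare an order and invoke the remote business method
        Order order = new Order();
        order.setCustomerID("C001");
        order.setItemID("I001");
        order.setQty(100);
        order.setPrice(200.0);
        System.out.println("Order ID: " + client.processOrder(order));
    }
}

Because the proxy is typed against the OrderProcess SEI, the client code reads like an ordinary local method call while CXF handles the SOAP plumbing underneath.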

FAQ on Web Services and Apache Axis2

Packt
28 Feb 2011
12 min read
Apache Axis2 Web Services, 2nd Edition. Create secure, reliable, and easy-to-use web services using Apache Axis2:

- Extensive and detailed coverage of the enterprise-ready Apache Axis2 web services / SOAP / WSDL engine
- Attain a more flexible and extensible framework with the world-class Axis2 architecture
- Learn all about AXIOM - the complete XML processing framework, which you can also use outside Axis2
- Covers advanced topics like security, messaging, REST, and asynchronous web services
- Written by Deepal Jayasinghe, a key architect and developer of the Apache Axis2 web service project, and Afkham Azeez, an elected ASF and PMC member

Q: How did SOA change the world view?

A: The era of isolated computers is over. Now "connected we stand, isolated we fall" is becoming the motto of computing. Networking and communication facilities have connected the world as never before. The world has hardware that could support the systems that connect thousands of computers, and these systems have the capacity to wield power that was once only dreamed of. Yet computer science lacked the technologies and abstractions to utilize the established communication networks. The goal of distributed computing is to provide such abstractions. RPC, RMI, IIOP, and CORBA are a few proposals that provide abstractions over the network for developers to build upon. These proposals fail to consider one critical aspect of the problem: systems are a composition of numerous heterogeneous subsystems, but these proposals require all the participants to share a programming language or a few languages. Service Oriented Architecture (SOA) provides the answer by defining a set of concepts and patterns to integrate homogeneous and heterogeneous components together. SOA provides a better way to achieve loosely coupled systems, and hence more extensibility and flexibility. In addition, similar to object-oriented programming (OOP), SOA enables a high degree of reusability. There are three main ways one can enable SOA capabilities in their systems and applications:

- Existing messaging systems: for example, JMS, IBM MQSeries, Tibco, and so on
- Plain Old XML (POX): for example, REST, XML/HTTP, and so on
- Web services: for example, SOAP, WSDL, WS-*

Q: What are the shortcomings of Java Messaging Service (JMS)?

A: Among the commonly used messaging systems, Java Messaging Service (JMS) plays a major role in the industry and has become a common API for messaging systems. We can find a number of different message types in JMS, such as Text, Bytes, Name-Value pair, Stream, and Object. One of the main disadvantages of these types of messaging systems is that they do not have a single wire format (serialization format). As a result, interoperability is a big issue: if two applications are using JMS to communicate, then they must be on the same implementation. Sonic, Tibco, and IBM are the leaders in the commercial markets, and JBoss, Manta, and ActiveMQ are the commonly used open source implementations.

Q: What is POX and how does it serve the web?

A: Plain Old XML or POX is another way of exposing functionality and enabling SOA in a system. With the widespread use of the Web, the POX approach has become more popular. Most web applications expose XML APIs, with which we can develop components and communicate. Google Maps, Auto complete, and Amazon services are a few examples of applications that heavily use XML APIs to expose their functionality.
In most cases, POX is used in combination with REST (Representational State Transfer). REST is a model of the underlying architecture of the Web, and it is based on the concept that every URL identifies a resource. GET, PUT, POST, and DELETE are the verbs that are used in the REST architecture. REST is often associated with theoretical standpoints, and for this reason, REST is generally not used for complex interactions.

Q: What are web services?

A: The fundamental concept behind web services is the SOA, where an application is no longer a large monolithic program, but is divided into smaller, loosely coupled programs. The provided services are loosely coupled together with standardized and well-defined interfaces. These loosely coupled programs make the architecture very extensible, due to the possibility of adding or removing services with limited cost. Therefore, new services can be created by combining existing services. To understand loose coupling clearly, it is better to understand the opposite, which is tight coupling, and its problems:

- Errors, delays, and downtime spread through the system
- The resilience of the whole system is based on the weakest part
- The cost of upgrading or migrating spreads
- It's hard to separate the useful parts from the dead weight

The benefits a web service provides are listed below:

- Increased interoperability, resulting in lower maintenance costs
- Increased reusability and composability (for example, use publicly available services and reuse them or integrate them to provide new services)
- Increased competition among vendors, resulting in lower product costs
- Easy transition from one product to another, resulting in lower training costs
- Greater degree of adoption and longevity for a standard; a large degree of usage from vendors and users leads to a higher degree of acceptance

Q: What contributes to the popularity of web services?

A: Among the three commonly used methods to enable SOA, a web service can be considered the most standard and flexible way. Web services extend the idea of POX and add additional standards to make the communication more organized and standardized. There are several reasons behind web services being the most popular SOA-enabling mechanism, as stated here:

- Web services are described using WSDL, and WSDL can capture any complex application and the required quality of services.
- Web services use SOAP as the message transmission mechanism; as SOAP is a special type of XML, it gains all the extensibility features of XML.
- There are a number of standard bodies to create and enforce the standards for web services.
- There are multiple open source and commercial web service implementations.

By using these standards and procedures, web services provide an application- and programming-language-independent mechanism to integrate and communicate. Different programming languages may define different implementations for web services, yet they interoperate because they all agree on the format of the information they share.

Q: What are the standard bodies for web services?

A: In web services, there are three main standard bodies that helped to improve the interoperability, quality of service, and base standards:

- WS-I
- OASIS
- W3C

Q: How do organizations move into web services?

A: There are three ways an organization could possibly use to move into web services, listed next:

1. Create a new web service from scratch. The developer creates the functionalities of the service as well as the description (that is, the WSDL).
2. Expose the existing functionality through a web service. Here the functionalities of the service already exist. Only the service description needs to be implemented.
3. Integrate web services from other vendors or business partners. There are occasions when using a service implemented by another is more cost effective than building it from scratch. On these occasions, the organization will need to integrate others' or even business partners' web services.

The real usage of web service concepts is in the second and third methods, which enable other web services and applications to use existing applications. Web services describe a new model for using the web; the model allows publication of business functions to the Web and provides universal access to those business functions. Both developers and end users benefit from web services. The web service model simplifies business application development and interoperation.

Q: What does the web services model look like?

A: The web service model consists of a set of basic functionalities such as describe, publish, discover, bind, invoke, update, and unpublish. In the meantime, the model also consists of three actors—service provider, service broker, and service requester. Both the functionalities and the actors are shown in the next figure.

Service provider is the individual (organization) that provides the service. The service provider's job is to create, publish, maintain, and unpublish their services. From a business point of view, a service provider is the owner of the service. From an architectural view, a service provider is the platform that holds the implementation of the service. Google API, Yahoo! Financial services, Amazon Services, and Weather services are some examples of service providers.

Service broker provides a repository of service descriptions (WSDL). These descriptions are published by the service provider. Service requesters will search the repository to identify the required service and obtain the binding information for these services. A service broker can be either public, where the services are universally accessible, or private, where only a specified set of service requesters are able to access the service.

Service requester is the party that is looking for a service to fulfill its requirements. A requester could be a human accessing the service or an application program (a program could also be a service). From a business view, this is the business that wants to fulfill a particular service. From an architectural view, this is the application that is looking for and invoking a service.

Q: What are web services standards?

A: So far we have discussed SOA, the standard bodies of web services, and the web service model. Here, we are going to discuss more about the standards, which make web services more usable and flexible. In the past few years, there has been significant growth in the usage of web services as an application integration mechanism. As mentioned earlier, a web service is different from the other SOA-exposing mechanisms because it consists of various standards to address issues encountered in the other two mechanisms. The growing collection of WS-* standards (for example, Web Service security, Web Service reliable messaging, Web Service addressing, and others), supervised by the web services governing bodies, defines the web service protocol stack shown in the following figure. Here we will be looking at the standards that have been specified in the most basic layers: messaging and description, and discovery.
The messaging standards are intended to give the framework for exchanging information in a distributed environment. These standards have to be reliable so that a message will be sent only once and only the intended receiver will receive it. This is one of the primary areas where research is being conducted, as everything depends on the messaging ability.

Q: Describe the web services standards XML-RPC and SOAP.

A: The web services standards XML-RPC and SOAP are described below.

XML-RPC: The XML-RPC standard was created by Dave Winer in 1998 with Microsoft. At that time, the existing RPC systems were very bulky. Therefore, to create a lightweight system, the developers simplified the design by specifying only the essentials and defined only a handful of data types and commands. This protocol uses XML to encode its calls and HTTP as a transport mechanism. The message is sent as a POST request in which the body of the request is in XML. A procedure is executed on the server, and the value it returns is also formatted in XML. The parameters can be scalars, numbers, strings, and dates, as well as complex record and list structures. As new functionalities were introduced, XML-RPC evolved into what is now known as SOAP, which is discussed next. Still, some people prefer using XML-RPC because of its simplicity, minimalism, and ease of use.

SOAP: The concept of SOAP is a stateless, one-way message exchange. However, applications can create more complex interaction patterns—such as request-response, request-multiple responses, and so on—by combining such one-way exchanges with features provided by an underlying protocol and application-specific information. SOAP is silent on the semantics of any application-specific data it conveys, as it is on issues such as the routing of SOAP messages, reliable data transfer, firewall traversal, and so on. However, SOAP provides the framework by which application-specific information may be conveyed in an extensible manner. The developers chose XML as the standard message format because of its widespread use by major organizations and open source initiatives. Also, there is a wide variety of freely available tools that ease the transition to a SOAP-based implementation.

Q: Define the scope of Web Services Addressing (WS-Addressing).

A: The standard provides transport-independent mechanisms to address messages and identify web services, corresponding to the concepts of address and message correlation described in the web services architecture. The standard defines XML elements to identify web service endpoints and to secure end-to-end endpoint identification in messages. This enables messaging systems to support message transmission through networks that include processing nodes such as endpoint managers, firewalls, and gateways in a transport-neutral manner. Thus, WS-Addressing enables organizations to build reliable and interoperable web service applications by defining a standard mechanism for identifying and exchanging Web Services messages between multiple endpoints.

Q: What is Web Services Description Language (WSDL)?

A: WSDL, developed by IBM, Ariba, and Microsoft, is an XML-based language that provides a model for describing web services. The standard defines services as network endpoints or ports. WSDL is normally used in combination with SOAP and XML Schema to provide web services over networks. A service requester who connects to a web service can read the WSDL to determine what functions are available in the web service.
Special data types are embedded in the WSDL file in the form of XML Schema. The client can then use SOAP to call functions listed in the WSDL. The standard enables one to separate the description of the abstract functionality offered by a service from the concrete details of a service description such as how and where that functionality is offered. This specification defines a language for describing the abstract functionality of a service as well as a framework for describing the concrete details of a service description. The abstract definition of ports and messages is separated from their concrete use, allowing the reuse of these definitions.
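As a rough illustration of how a requester can move from a WSDL to an invocable service, here is a minimal sketch using the standard JAX-WS client API. The WSDL URL, namespace, and service name are hypothetical placeholders rather than references to a real service:

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class WsdlClientSketch {
    public static void main(String[] args) throws Exception {
        // The requester first fetches the WSDL that describes the service
        URL wsdlUrl = new URL("http://localhost:8080/orders?wsdl");

        // The namespace and service name must match the definitions element of the WSDL
        QName serviceName = new QName("http://example.com/orders", "OrderService");
        Service service = Service.create(wsdlUrl, serviceName);
        System.out.println("Service created: " + service.getServiceName());

        // A typed port would then be obtained against a Java interface generated
        // from the WSDL (for example, by a tool such as wsimport):
        // OrderPort port = service.getPort(OrderPort.class);
        // String orderId = port.processOrder(...);
    }
}

This separation mirrors the point made above: the abstract functionality is read from the WSDL, while the concrete binding and address details stay out of the client's business logic.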

PHP Magic Features

Packt
12 Oct 2009
5 min read
In this article by Jani Hartikainen, we'll look at PHP's "magic" features:

- Magic methods, which are class methods with specific names, are used to perform various specialized tasks. They are grouped into two: overloading methods and non-overloading methods. Overloading magic methods are used when your code attempts to access a method or a property which does not exist. Non-overloading methods perform other tasks.
- Magic functions, which are similar to magic methods, but are just plain functions outside any class. Currently there is only one magic function in PHP.
- Magic constants, which are similar to constants in notation, but act more like "dynamic" constants: their value depends on where you use them.

We'll also look at some practical examples of using some of these, and lastly we'll check out what new features PHP 5.3 is going to add.

Magic methods

For starters, let's take a look at the magic methods PHP provides. We will first go over the non-overloading methods.

__construct and __destruct

class SomeClass {
    public function __construct() {
    }

    public function __destruct() {
    }
}

The most common magic method in PHP is __construct. In fact, you might not even have thought of it as a magic method at all, as it's so common. __construct is the class constructor method, which gets called when you instantiate a new object using the new keyword, and any parameters used will get passed to __construct.

$obj = new SomeClass();

__destruct is __construct's "pair". It is a class destructor, which is rarely used in PHP, but still it is good to know about its existence. It gets called when your object falls out of scope or is garbage collected.

function someFunc() {
    $obj = new SomeClass();
    // when the function ends, $obj falls out of scope and SomeClass __destruct is called
}
someFunc();

If you make the constructor private or protected, it means that the class cannot be instantiated, except inside a method of the same class. You can use this to your advantage, for example to create a singleton.

__clone

class SomeClass {
    public $someValue;

    public function __clone() {
        $clone = new SomeClass();
        $clone->someValue = $this->someValue;
        return $clone;
    }
}

The __clone method is called when you use PHP's clone keyword, and is used to create a clone of the object. The purpose is that by implementing __clone, you can define a way to copy objects.

$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = clone $obj1;
echo $obj2->someValue;
// echoes 1

Important: __clone is not the same as =. If you use = to assign an object to another variable, the other variable will still refer to the same object as the first one! If you use the clone keyword, the purpose is to return a new object with a similar state as the original. Consider the following:

$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = $obj1;
$obj3 = clone $obj1;
$obj1->someValue = 2;

What are the values of the someValue property in $obj2 and $obj3 now? As we have used the assignment operator to create $obj2, it refers to the same object as $obj1, thus $obj2->someValue is 2. When creating $obj3, we have used the clone keyword, so the __clone method was called. As __clone creates a new instance, $obj3->someValue is still the same as it was when we cloned $obj1: 1. If you want to disable cloning, you can make __clone private or protected.
__toString

class SomeClass {
    public function __toString() {
        return 'someclass';
    }
}

The __toString method is called when PHP needs to convert class instances into strings, for example when echoing:

$obj = new SomeClass();
echo $obj;
// will output 'someclass'

This can be a useful example to help you identify objects or when creating lists. If we have a user object, we could define a __toString method which outputs the user's first and last names, and when we want to create a list of users, we could simply echo the objects themselves.

__sleep and __wakeup

class SomeClass {
    private $_someVar;

    public function __sleep() {
        return array('_someVar');
    }

    public function __wakeup() {
    }
}

These two methods are used with PHP's serializer: __sleep is called with serialize(), __wakeup is called with unserialize(). Note that you will need to return an array of the class variables you want to save from __sleep. That's why the example class returns an array with _someVar in it: without it, the variable will not get serialized.

$obj = new SomeClass();
$serialized = serialize($obj);
// __sleep was called
unserialize($serialized);
// __wakeup was called

You typically won't need to implement __sleep and __wakeup, as the default implementation will serialize classes correctly. However, in some special cases it can be useful. For example, if your class stores a reference to a PDO object, you will need to implement __sleep, as PDO objects cannot be serialized. As with most other methods, you can make __sleep private or protected to stop serialization. Alternatively, you can throw an exception, which may be a better idea as you can provide a more meaningful error message. An alternative to __sleep and __wakeup is the Serializable interface. However, as its behavior is different from these two methods, the interface is outside the scope of this article. You can find info on it in the PHP manual.

__set_state

class SomeClass {
    public $someVar;

    public static function __set_state($state) {
        $obj = new SomeClass();
        $obj->someVar = $state['someVar'];
        return $obj;
    }
}

This method is called in code created by var_export. It gets an array as its parameter, which contains a key and value for each of the class variables, and it must return an instance of the class.

$obj = new SomeClass();
$obj->someVar = 'my value';
var_export($obj);

This code will output something along the lines of:

SomeClass::__set_state(array('someVar' => 'my value'));

Note that var_export will also export private and protected variables of the class, so they too will be in the array.

Moodle: Developing an Interactive Timeline Widget

Packt
21 Oct 2011
5 min read
(For more resources on Moodle, see here.)

Introducing the SIMILE timeline widget

The Massachusetts Institute of Technology (MIT) has developed various visualization and data manipulation tools as part of the SIMILE project. One of these is a free/open source timeline JavaScript widget, which takes time-based data as input and creates an interactive timeline that scrolls from left to right and contains popup panes and links. A timeline for the life and career of Monet is as follows:

You can view more examples on the website, http://simile-widgets.org/timeline/. In order to use the timeline widget we need these components:

- The Moodle timeline filter, containing the SIMILE timeline JavaScript libraries
- A timeline data file, in XML or JSON (JavaScript Object Notation) format
- Photographs to show in the popup panes
- A web page to host the timeline

We will deal with installing the filter later, but first we must decide on the subject for our timeline. If you visit the home of SIMILE, http://simile-widgets.org/timeline/, you will be able to explore timelines for the assassination of John F. Kennedy, the life of Claude Monet, and other examples. Timelines can be granular to the minute or hour, as in the case of the assassination, or they can be spread over centuries or millennia—this is currently the limit for the widget. A suitable subject for our young audience would be significant or important inventions. This can encompass subjects as diverse as printing, paper, penicillin, steam, and the computer. And these inventions originate in different parts of the world, which adds an extra dimension to the subject. Now that we have our subject, a search on Google reveals some useful links, including a list of the top 10 inventions, http://listverse.com/2007/09/13/top-10-greatest-inventions/. We may or may not agree with a list like this, but it acts as a useful starting point and aide memoire. There are pictures here, and more information and images on other websites, including Wikipedia. To progress from ideas to our timeline, we need to create an XML data file. One of the easier tools to help us produce the XML is a syntax-highlighting text editor, which we will install now. If you already have a text editor on your computer that you think is suitable, skip this section.

Installing a text editor

We have our subject, so the next step is to install some editing software to help us create our XML timeline file. Though most operating systems, including Windows, contain simple text editors, it will be helpful to install an editor that features syntax highlighting for various computer languages, including XML. Some general-purpose text editors are:

- Notepad2 for Windows, a simple open-source editor, distributed under a BSD license (http://opensource.org/licenses/bsd-license.php). It is a small download, less than 300 kilobytes, from http://flos-freeware.ch/notepad2.html.
- Notepad++ for Windows, distributed under a GPL license. Download it from http://notepad-plus-plus.org.
- TextWrangler for Mac OS X, distributed under a Freeware license (not open-source), http://barebones.com/products/textwrangler/.
- Most flavors of Linux and Unix include the vi or vim text editor. Or use Emacs or your favorite editor.

We will carry on and install Notepad2 for the examples in this article.

Time for action – installing Notepad2

To install Notepad2 on Windows, visit the website http://flos-freeware.ch/notepad2.html in your browser and follow these steps:

1. Under the Downloads section, find the link to the notepad2.zip file.
2. Download it to your computer.
3. Open the file notepad2.zip with the unzip program available on your computer (the one built into Windows XP or later, or WinZip, 7zip, or similar).
4. Extract all the files in notepad2.zip to a directory on your hard drive, for example, C:\apps2\Notepad2.
5. In your new directory, click on the file Notepad2.exe with your right mouse button. Choose Create Shortcut from the context menu that appears.
6. Name the shortcut Notepad2, and copy it to the clipboard.
7. Now in Windows Explorer on Windows 7, go to the directory C:\Users\{USER}\SendTo (or on Windows XP, go to the directory C:\Documents and Settings\{USER}\SendTo). In each case, {USER} is your username.
8. Paste a copy of the shortcut in the directory. (Note: you will probably see a shortcut for Notepad, the basic editor included in Windows, in this directory. You may want to delete it to avoid confusion.)
9. Go to your Desktop and paste the shortcut there as well.

Whew! Notepad2 is a little fiddly to set up, but don't worry, it's easy to use.

What just happened?

We downloaded and installed a text editor for Windows called Notepad2. This has a feature called syntax highlighting, which, as we will see, helps us to write the XML file for our timeline widget. Note that the shortcuts we created are useful in different contexts. The SendTo shortcut is useful when we wish to edit a file—simply select it in Windows Explorer, right-click, and choose SendTo | Notepad2 in the context menu. The Desktop shortcut is useful for creating new files. We also found that there are many alternatives for Windows, Mac OS X, and other operating systems. These included Notepad++ for Windows, TextWrangler for Mac OS X, and vi/vim and Emacs for Linux. We have added a shortcut for the editor to the SendTo menu in Windows, as shown below:

Now that we have a text editor we can create the timeline XML.
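As a preview of where we are heading, here is a minimal sketch of what a SIMILE timeline data file for the inventions subject could look like. The event dates, titles, and descriptions below are illustrative placeholders, and the exact set of supported event attributes should be checked against the SIMILE timeline documentation:

<data>
  <!-- One event per invention; the dates and wording are examples only -->
  <event start="1440" title="Printing press">
    Johannes Gutenberg develops movable-type printing in Mainz.
  </event>
  <event start="1928" title="Penicillin">
    Alexander Fleming discovers penicillin in London.
  </event>
</data>

Each event element becomes a point on the scrolling timeline, with the element's text shown in the popup pane when the event is clicked.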

Applications of WebRTC

Packt
27 Feb 2015
20 min read
This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook. WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most of the popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications. In this article, we will cover the following recipes:

- Creating a multiuser conference using WebRTCO
- Taking a screenshot using WebRTC
- Compiling and running a demo for Android

(For more resources related to this topic, see here.)

Creating a multiuser conference using WebRTCO

In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO—an open source JavaScript framework for developing WebRTC applications.

Getting ready

For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server. To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO.

How to do it…

The following recipe is built on the framework's infrastructure. We will use the framework's signaling server. What we need to do is include the framework's code and do some initialization procedure:

Create an HTML file and add common HTML heads:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">

Add some style definitions to make the web page look nicer:

    <style type="text/css">
        video {
            width: 384px;
            height: 288px;
            border: 1px solid black;
            text-align: center;
        }
        .container {
            width: 780px;
            margin: 0 auto;
        }
    </style>

Include the framework in your project:

<script type="text/javascript" src="https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js" charset="utf-8"></script>
</head>

Define the onLoad function—it will be called after the web page is loaded. In this function, we will do some preliminary initialization work:

<body onload="onLoad();">

Define HTML containers where the local video will be placed:

<div class="container">
    <video id="localVideo"></video>
</div>

Define a place where the remote video will be added. Note that we don't create HTML video objects, and we just define a separate div.
Further, video objects will be created and added to the page by the framework automatically:

<div class="container" id="remoteVideos"></div>
<div class="container">

Create the controls for the chat area:

<div id="chat_area" style="width:100%; height:250px; overflow: auto; margin:0 auto 0 auto; border:1px solid rgb(200,200,200); background: rgb(250,250,250);"></div>
</div>
<div class="container" id="div_chat_input">
<input type="text" class="search-query" placeholder="chat here" name="msgline" id="chat_input">
<input type="submit" class="btn" id="chat_submit_btn" onclick="sendChatTxt();"/>
</div>

Initialize a few variables:

<script type="text/javascript">
    var videoCount = 0;
    var webrtco = null;
    var parent = document.getElementById('remoteVideos');
    var chatArea = document.getElementById("chat_area");
    var chatColorLocal = "#468847";
    var chatColorRemote = "#3a87ad";

Define a function that will be called by the framework when a new remote peer is connected. This function creates a new video object and puts it on the page:

    function getRemoteVideo(remPid) {
        var video = document.createElement('video');
        var id = 'remoteVideo_' + remPid;
        video.setAttribute('id', id);
        parent.appendChild(video);
        return video;
    }

Create the onLoad function. It initializes some variables and resizes the controls on the web page. Note that this is not mandatory, and we do it just to make the demo page look nicer:

    function onLoad() {
        var divChatInput = document.getElementById("div_chat_input");
        var divChatInputWidth = divChatInput.offsetWidth;
        var chatSubmitButton = document.getElementById("chat_submit_btn");
        var chatSubmitButtonWidth = chatSubmitButton.offsetWidth;
        var chatInput = document.getElementById("chat_input");
        var chatInputWidth = divChatInputWidth - chatSubmitButtonWidth - 40;
        chatInput.setAttribute("style", "width:" + chatInputWidth + "px");
        chatInput.style.width = chatInputWidth + 'px';
        var lv = document.getElementById("localVideo");

Create a new WebRTCO object and start the application. After this point, the framework will start the signaling connection, get access to the user's media, and will be ready for incoming connections from remote peers:

        webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling', lv, OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);
    };

Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework. However, you can install your own signaling server and use an appropriate URL. The second parameter is the local video object ID. Then, we supply functions to process the received room, received chat messages, and received remote video streams. The last parameter is the function that will be called when one of the remote peers has been disconnected.

The following function will be called when a remote peer has closed the connection. It will remove video objects that have become outdated:

    function OnBye(pid) {
        var video = document.getElementById("remoteVideo_" + pid);
        if (null !== video) video.remove();
    };
The following piece of code represents such a function: function OnRoomReceived(room) {addChatTxt("Now, if somebody wants to join you,should use this link: <ahref=""+window.location.href+"?room="+room+"">"+window.location.href+"?room="+room+"</a>",chatColorRemote);}; The following function prints some text in the chat area. We will also use it to print the URL to share with remote peers:     function addChatTxt(msg, msgColor) {         var txt = "<font color=" + msgColor + ">" +         getTime() + msg + "</font><br/>";         chatArea.innerHTML = chatArea.innerHTML + txt;         chatArea.scrollTop = chatArea.scrollHeight;     }; The next function is a callback that is called by the framework when a peer has sent us a message. This function will print the message in the chat area:     function onChatMsgReceived(msg) {         addChatTxt(msg, chatColorRemote);     }; To send messages to remote peers, we will create another function, which is represented in the following code:     function sendChatTxt() {         var msgline =         document.getElementById("chat_input");         var msg = msgline.value;         addChatTxt(msg, chatColorLocal);         msgline.value = '';         webrtco.API_sendPutChatMsg(msg);     }; We also want to print the time while printing messages; so we have a special function that formats time data appropriately:     function getTime() {         var d = new Date();         var c_h = d.getHours();         var c_m = d.getMinutes();         var c_s = d.getSeconds();           if (c_h < 10) { c_h = "0" + c_h; }         if (c_m < 10) { c_m = "0" + c_m; }         if (c_s < 10) { c_s = "0" + c_s; }         return c_h + ":" + c_m + ":" + c_s + ": ";     }; We have some helper code to make our life easier. We will use it while removing obsolete video objects after remote peers are disconnected:     Element.prototype.remove = function() {         this.parentElement.removeChild(this);     }     NodeList.prototype.remove =     HTMLCollection.prototype.remove = function() {         for(var i = 0, len = this.length; i < len; i++) {             if(this[i] && this[i].parentElement) {                 this[i].parentElement.removeChild(this[i]);             }         }     } </script> </body> </html> Now, save the file and put it on the web server, where it could be accessible from web browser. How it works… Open a web browser and navigate to the place where the file is located on the web server. You will see an image from the web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is good, you will see an URL in the chat area. Open this URL in a new browser window or on another machine—the framework will create a new video object for every new peer and will add it to the web page. The number of peers is not limited by the application. In the following screenshot, I have used three peers: two web browser windows on the same machine and a notebook as the third peer: Taking a screenshot using WebRTC Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature. Getting ready No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application. How to do it… Follow these steps: First of all, add image and canvas objects to the web page of the application. 
Taking a screenshot using WebRTC

Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature.

Getting ready

No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application.

How to do it…

Follow these steps:

First of all, add image and canvas objects to the web page of the application. We will use these objects to take screenshots and display them on the page:

<img id="localScreenshot" src="">
<canvas style="display:none;" id="localCanvas"></canvas>

Next, you have to add a button to the web page. After clicking on this button, the appropriate function will be called to take the screenshot from the local video stream:

<button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button>

Finally, we need to implement the screenshot-taking function:

    function btn_screenshot() {
        var v = document.getElementById("localVideo");
        var s = document.getElementById("localScreenshot");
        var c = document.getElementById("localCanvas");
        var ctx = c.getContext("2d");

Draw an image on the canvas object; the image will be taken from the video object:

        ctx.drawImage(v, 0, 0);

Now, take the content of the canvas, convert it to a data URL, and insert the value into the src option of the image object. As a result, the image object will show the taken screenshot:

        s.src = c.toDataURL('image/png');
    }

That is it. Save the file and open the application in a web browser. Now, when you click on the Make a screenshot button, you will see the screenshot in the appropriate image object on the web page. You can save the screenshot to the disk using right-click and the pop-up menu.

How it works…

We use the canvas object to take a frame of the video object. Then, we convert the canvas' data to a data URL and assign this value to the src parameter of the image object. After that, the image object refers to the video frame, which is stored in the canvas.
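If you would rather offer a one-click download instead of relying on the right-click menu, a small addition can do it with standard browser APIs. This is a sketch that goes beyond the original recipe; the button ID, function name, and file name are all arbitrary:

<button onclick="btn_download()" id="btn_download">Download screenshot</button>

    function btn_download() {
        // Hypothetical addition: take a screenshot first so the canvas has content.
        var c = document.getElementById("localCanvas");
        // Create a temporary link pointing to the canvas data and click it
        // programmatically to trigger the download.
        var link = document.createElement('a');
        link.href = c.toDataURL('image/png');
        link.download = 'screenshot.png';  // suggested file name
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);
    }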
Compiling and running a demo for Android

Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the supplied demo application from Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands during the entire building process.

Getting ready

We will need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box: Ubuntu 14.04.1 x64. So all the commands that might be OS-specific are relevant to Ubuntu. Nevertheless, using Linux is not mandatory and you can use Windows or Mac OS X. If you're using Linux, it should be 64-bit based; otherwise, you most likely won't be able to compile the Android code.

Preparing the system

First of all, you need to install the necessary system packages:

sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0 libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6

Installing Oracle JDK

By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install an Oracle JDK. Otherwise, you can face issues while building WebRTC applications for Android. Another thing that you should keep in mind is that you should probably use Oracle JDK version 1.6; other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future, but in my case, only Oracle JDK 1.6 was able to build the demo successfully.

Download the Oracle JDK from its home page at http://www.oracle.com/technetwork/java/javase/downloads/index.html. In case there is no download link for such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html. Oracle will probably ask you to sign in or register first. After that, you will be able to download anything from their archive.

Install the downloaded JDK:

sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister

Here, I assume that you downloaded the JDK package into the home directory.

Register the JDK in the system:

sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000
sudo update-alternatives --config javac
sudo update-alternatives --config java
cd /usr/lib
sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/

Test the Java version:

java -version

You should see something like Java HotSpot on the screen; this means that the correct JVM is installed.

Getting the WebRTC source code

Perform the following steps to get the WebRTC source code:

Download and prepare Google Developer Tools:

mkdir -p ~/dev && cd ~/dev
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=`pwd`/depot_tools:"$PATH"

Download the WebRTC source code:

gclient config http://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync

The last command can take several minutes (it depends on your Internet connection speed), as you will be downloading several gigabytes of source code.

Installing Android Developer Tools

To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT:

Download ADT from its home page http://developer.android.com/sdk/index.html#download.

Unpack ADT to a folder:

cd ~/dev
unzip ~/adt-bundle-linux-x86_64-20140702.zip

Set up the ANDROID_HOME environment variable:

export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk

How to do it…

After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application:

Prepare Android-specific build dependencies:

cd ~/dev/trunk
source ./build/android/envsetup.sh

Configure the build scripts:

export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"
gclient runhooks

Build the WebRTC code with the demo application:

ninja -C out/Debug -j 5 AppRTCDemo

After the last command, you can find the compiled Android package with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk.

Running on the Android simulator

Follow these steps to run the application on the Android simulator:

Run the Android SDK manager and install the necessary Android components:

$ANDROID_HOME/tools/android sdk

Choose at least Android 4.x; lower versions don't have WebRTC support. In the following screenshot, I've chosen Android SDK 4.4 and 4.2:

Create an Android virtual device:

cd $ANDROID_HOME/tools
./android avd &

The last command executes the Android SDK tool to create and maintain virtual devices. Create a new virtual device using this tool.
You can see an example in the following screenshot:

Start the emulator using the just-created virtual device:

./emulator -avd emu1 &

This can take a couple of seconds (or even minutes); after that, you should see a typical Android device home screen, like in the following screenshot:

Check whether the virtual device is simulated and running:

cd $ANDROID_HOME/platform-tools
./adb devices

You should see something like the following:

List of devices attached
emulator-5554   device

This means that your just-created virtual device is OK and running, so we can use it to test our demo application.

Install the demo application on the virtual device:

./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk

You should see something like the following:

636 KB/s (2507985 bytes in 3.848s)
pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success

This means that the application has been transferred to the virtual device and is ready to be started. Switch to the simulator window; you should see the demo application's icon. Execute it as you would on a real Android device. In the following screenshot, you can see the installed demo application AppRTC:

While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to enable the Use Host GPU option while creating the virtual device with the Android Virtual Device (AVD) tool.

Fixing a bug with GLSurfaceView

Sometimes, if you're using an Android simulator with a virtual device on the ARM architecture, you can be faced with an issue where the application says No config chosen, throws an exception, and exits. This is a known defect in the Android WebRTC code, and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209.

The following steps can help you fix this bug in the original demo application:

Go to the ~/dev/trunk/talk/examples/android/src/org/appspot/apprtc folder and edit the AppRTCDemoActivity.java file. Look for the following line of code:

vsv = new AppRTCGLView(this, displaySize);

Right after this line, add the following line of code:

vsv.setEGLConfigChooser(8,8,8,8,16,16);

You will need to recompile the application:

cd ~/dev/trunk
ninja -C out/Debug AppRTCDemo

Now you can deploy your application and the issue will not appear anymore.

Running on a physical Android device

For deploying applications on an Android device, you don't need any developer certificates (unlike in the case of iOS devices). So if you have a physical Android device, it will probably be easier to debug and run the demo application on the device rather than on the simulator.

Connect the Android device to the machine using a USB cable.

On the Android device, switch the USB debug mode on.

Check whether your machine sees your device:

cd $ANDROID_HOME/platform-tools
./adb devices

If the device is connected and the machine sees it, you should see the device's name in the output of the preceding command:

List of devices attached
QO4721C35410   device

Deploy the application onto the device:

cd $ANDROID_HOME/platform-tools
./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk

You will get the following output:

3016 KB/s (2508031 bytes in 0.812s)
pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success

After that, you should see the AppRTC demo application's icon on the device:

After you have started the application, you should see a prompt to enter a room number.
At this stage, go to http://apprtc.webrtc.org in a web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and the other machine will try to establish a peer-to-peer connection, which might take some time. In the following screenshot, you can see the image on the desktop after the connection with the Android smartphone has been established:

Here, the big image shows what is streamed from the front camera of the Android smartphone; the small image shows the picture from the notebook's web camera. So both devices have established a direct connection and are streaming audio and video to each other. The following screenshot represents what was seen on the Android device:

There's more…

The original demo doesn't contain any ready-to-use IDE project files, so you have to deal with console commands and scripts during the entire development process. You can make your life a bit easier if you use some third-party tools that simplify the building process. Such tools can be found at http://tech.pristine.io/build-android-apprtc.

Summary

In this article, we have learned how to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android.

Resources for Article:

Further resources on this subject:
WebRTC with SIP and IMS? [article]
Using the WebRTC Data API [article]
Applying WebRTC for Education and E-Learning [article]
Building Queries

Packt
12 Dec 2013
10 min read
(For more resources related to this topic, see here.)

Understanding DQL

DQL is the acronym for Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but it is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details, you can visit this website: http://www.hibernate.org/.

Learn more about domain-specific languages at: http://en.wikipedia.org/wiki/Domain-specific_language

To better understand what this means, let's run our first DQL query. Doctrine command-line tools are as versatile as a Swiss Army knife. They include a command called orm:run-dql that runs a DQL query and displays the result. Use it to retrieve the title and all the comments of the post with 1 as its identifier:

php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.body FROM Blog\Entity\Post p JOIN p.comments c WHERE p.id = 1"

It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects:

A fully qualified entity class name is used in the FROM clause as the root of the query.
All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause.

As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause.

Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language for retrieving object graphs. Internally, Doctrine parses the DQL queries, generates and executes the corresponding SQL queries through the Database Abstraction Layer (DBAL), and hydrates the data structures with the results.

Until now, we have only used Doctrine to retrieve PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated.

As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation on the following website: http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#hydration-modes

Using the entity repositories

Entity repositories are classes responsible for accessing and managing entities. Just as entities are related to database rows, entity repositories are related to database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. This hides the ORM from the other components of the application and makes it easier to reuse, refactor, and optimize the queries.

Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit the following website: http://martinfowler.com/eaaCatalog/tableDataGateway.html

A base repository, available for every entity, provides useful methods for managing entities, in the following manner:

find($id): It returns the entity with $id as an identifier, or null. It is used internally by the find() method of the Entity Manager.
findAll(): It retrieves an array that contains all the entities in this repository.
findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): It retrieves an array that contains the entities matching all the criteria passed in the first parameter, ordered by the second parameter.
findOneBy(['property1' => 'value', 'property2' => 1]): It is similar to findBy(), but retrieves only the first entity, or null if no entity matches the criteria.

Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow this pattern: findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']).

This feature uses the magic __call() PHP method. For more details, visit the following website: http://php.net/manual/en/language.oop5.overloading.php#object.call

In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them for the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and lazy loading slows down the detailed view. A solution to this is to create a custom repository with extra methods for executing our own queries. We will write a custom method that collates comments in the detailed view.

Creating custom entity repositories

Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to receive custom methods that run the DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation.

Kindly perform the following steps to create a custom entity repository:

Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation, like the following line of code:

@Entity(repositoryClass="PostRepository")

Doctrine command-line tools also provide an entity repository generator. Type the following command to use it:

php vendor/bin/doctrine.php orm:generate:repositories src/

Open this new empty custom repository, which we just generated, in the PostRepository.php file at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments:

/**
 * Finds a post with its comments
 *
 * @param int $id
 * @return Post
 */
public function findWithComments($id)
{
    return $this
        ->createQueryBuilder('p')
        ->addSelect('c')
        ->leftJoin('p.comments', 'c')
        ->where('p.id = :id')
        ->orderBy('c.publicationDate', 'ASC')
        ->setParameter('id', $id)
        ->getQuery()
        ->getOneOrNullResult()
    ;
}

Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available.

Getting started with Query Builder

QueryBuilder is an object designed to help build DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL queries through the getDQL() method (useful for debugging), or to directly use the Query object (provided by Doctrine). To increase performance, QueryBuilder caches the generated DQL queries and manages an internal state. The full API and the states of the DQL query are documented on the following website: http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/query-builder.html
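Before dissecting that repository method, here is a minimal, self-contained sketch of the fluent interface in action. It is not part of the blog application; it only assumes the Blog\Entity\Post entity from the earlier examples and a configured $entityManager, and it counts the posts using standard QueryBuilder methods:

// Build: SELECT COUNT(p.id) FROM Blog\Entity\Post p
$qb = $entityManager->createQueryBuilder();
$postCount = $qb
    ->select('COUNT(p.id)')
    ->from('Blog\Entity\Post', 'p')
    ->getQuery()
    ->getSingleScalarResult(); // returns a single scalar value

echo $postCount . ' posts found';

Each call returns the QueryBuilder instance itself, which is what makes the chained, fluent style possible.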
We will give an in-depth explanation of the findWithComments() method that we created in the PostRepository class.

Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository. The QueryBuilder instance takes a string as a parameter. This string will be used as an alias for the main entity class. By default, all the fields of the main entity class are selected and no clauses other than SELECT and FROM are populated.

The leftJoin() call creates a JOIN clause that retrieves the comments associated with the posts. Its first argument is the property to join, and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class).

Unless the SQL JOIN clause is used, the DQL query automatically fetches the entities associated with the main entity. There is no need for keywords like ON or USING. Doctrine automatically knows whether a join table or a foreign-key column must be used.

The addSelect() call appends the comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName.

You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements. They are more efficient than standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL injection attacks are automatically avoided.

SQL injection attacks are a way to execute malicious SQL queries using user input that has not been escaped. Let's take the following example of a bad DQL query that checks whether a user has a specific role:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = "' . $username . '" AND ur.role = "' . $role . '"');
$hasRole = count($query->getResult());

This DQL query will be translated into SQL by Doctrine. If someone types the following username:

" OR "a"="a

the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area.

The proper way is to use the following code:

$query = $entityManager->createQuery("SELECT ur FROM UserRole ur WHERE ur.username = :username AND ur.role = :role");
$query->setParameters([
    'username' => $username,
    'role' => $role
]);
$hasRole = count($query->getResult());

Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected.

The orderBy() call generates an ORDER BY clause that orders the results by the publication date of the comments, oldest first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be expressed in DQL; they generally have the same name.

The getQuery() call tells the Query Builder to generate the DQL query (if possible, it will take the query from its internal cache), to instantiate a Doctrine Query object, and to populate it with the generated DQL query.
This generated DQL query will be as follows:

SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHERE p.id = :id ORDER BY c.publicationDate ASC

The Query object exposes another useful method for the purpose of debugging: getSql(). As its name implies, getSql() returns the SQL query corresponding to the DQL query, which Doctrine will run on the DBMS. For our DQL query, the underlying SQL query is as follows:

SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2, p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.body AS body5, c1_.publicationDate AS publicationDate6, c1_.post_id AS post_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id = c1_.post_id WHERE p0_.id = ? ORDER BY c1_.publicationDate ASC

The getOneOrNullResult() method executes it, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary.

Performance is something to be very careful about while using Doctrine. When set in production mode, the ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and the results of the queries. The ORM must be configured to use one of the blazingly fast supported systems (APC, Memcache, XCache, or Redis), as shown on the following website: http://docs.doctrine-project.org/en/latest/reference/caching.html

We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/ location, where you will find the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']);

Replace the preceding line of code with the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->findWithComments($_GET['id']);
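To round the recipe off, the fetched object graph can now be rendered without triggering any extra queries. The following fragment is a sketch rather than the book's actual template: it assumes the Post entity exposes getTitle() and getComments() accessors and the Comment entity exposes getBody(), which is the conventional getter style for Doctrine entities:

<?php // Hypothetical template fragment: getTitle(), getComments(), and getBody() are assumed accessors. ?>
<h1><?= htmlspecialchars($post->getTitle()) ?></h1>

<ul>
<?php foreach ($post->getComments() as $comment): ?>
    <li><?= htmlspecialchars($comment->getBody()) ?></li>
<?php endforeach; ?>
</ul>

Because findWithComments() already joined and hydrated the comments, iterating over getComments() here does not issue a second SQL query, which was the whole point of the custom repository method.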

CSS Properties – Part 1

Packt
09 Feb 2016
13 min read
In this article written by Joshua Johanan, Talha Khan, and Ricardo Zea, authors of the book Web Developer's Reference Guide, the authors state that "CSS properties are characteristics of an element in a markup language (HTML, SVG, XML, and so on) that control their style and/or presentation. These characteristics are part of a constantly evolving standard from the W3C."

(For more resources related to this topic, see here.)

A basic example of a CSS property is border-radius:

input {
  border-radius: 100px;
}

There is an incredible number of CSS properties, and learning them all is virtually impossible. Adding more into this mix, there are CSS properties that need to be vendor prefixed (-webkit-, -moz-, -ms-, and so on), making this equation even more complex.

Vendor prefixes are short pieces of CSS that are added to the beginning of the CSS property (and sometimes, CSS values too). These pieces of code are directly related to either the company that makes the browser (the "vendor") or to the CSS engine of the browser.

There are four major CSS prefixes: -webkit-, -moz-, -ms-, and -o-. They are explained here:

-webkit-: This references Safari's engine, WebKit (Google Chrome and Opera used this engine in the past as well)
-moz-: This stands for Mozilla, which creates Firefox
-ms-: This stands for Microsoft, which creates Internet Explorer
-o-: This stands for Opera, but only targets old versions of the browser

Google Chrome and Opera both support the -webkit- prefix. However, these two browsers do not use the WebKit engine anymore. Their engine is called Blink and is developed by Google.

A basic example of a prefixed CSS property is column-gap:

.column {
  -webkit-column-gap: 5px;
  -moz-column-gap: 5px;
  column-gap: 5px;
}

Memorizing which CSS properties need to be prefixed is futile. That's why it's important to keep a constant eye on CanIUse.com. However, it's also important to automate the prefixing process with tools such as Autoprefixer or -prefix-free, or with mixins in preprocessors, and so on. Vendor prefixing isn't in the scope of this book, so the properties we'll discuss are listed without any vendor prefixes.

If you want to learn more about vendor prefixes, you can visit Mozilla Developer Network (MDN) at http://tiny.cc/mdn-vendor-prefixes.

Let's get the CSS properties reference rolling.

Animation

Unlike the old days of Flash, where creating animations required third-party applications and plugins, today we can accomplish practically the same things with a lot less overhead, better performance, and greater scalability, all through CSS only. Forget plugins and third-party software! All we need is a text editor, some imagination, and a bit of patience to wrap our heads around some of the animation concepts CSS brings to our plate.

Base markup and CSS

Before we dive into all the animation properties, we will use the following markup and animation structure as our base:

HTML:

<div class="element"></div>

CSS:

.element {
  width: 300px;
  height: 300px;
}

@keyframes fadingColors {
  0% {
    background: red;
  }
  100% {
    background: black;
  }
}

In the examples, we will only see the .element rule, since the HTML and @keyframes fadingColors will remain the same. The @keyframes declaration block is a custom animation that can be applied to any element. When applied, the element's background will go from red to black.
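As a quick aside before we walk through each property: a @keyframes at-rule is not limited to a start and an end stop. Here is an illustrative variation of our base animation (not used in the examples that follow) with an intermediate stop at 50%, which makes the background pass through orange on its way from red to black:

@keyframes fadingColors {
  0% {
    background: red;
  }
  50% {
    background: orange; /* intermediate keyframe */
  }
  100% {
    background: black;
  }
}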
Ok, let's do this.

animation-name

The animation-name CSS property is the name of the @keyframes at-rule that we want to execute, and it looks like this:

animation-name: fadingColors;

Description

In the HTML and CSS base example, our @keyframes at-rule had an animation where the background color went from red to black. The name of that animation is fadingColors. So, we can call the animation like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
}

This is a valid rule using the longhand. There are clearly no issues with it at all. The thing is that the animation won't run unless we add animation-duration to it.

animation-duration

The animation-duration CSS property defines the amount of time the animation will take to complete a cycle, and it looks like this:

animation-duration: 2s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Specifying a value of 0s means that the animation should never actually run. However, since we do want our animation to run, we will use the following lines of code:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
}

As mentioned earlier, this will make a box go from its red background to black in 2 seconds, and then stop.

animation-iteration-count

The animation-iteration-count CSS property defines the number of times the animation should be played, and it looks like this:

animation-iteration-count: infinite;

Description

There are two kinds of values: infinite and a number, such as 1, 3, or 0.5. Negative numbers are not allowed. Add the following code to the prior example:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
}

This will make a box go from its red background to black, start over again with the red background and go to black, infinitely.

animation-direction

The animation-direction CSS property defines the direction in which the animation should play after the cycle, and it looks like this:

animation-direction: alternate;

Description

There are four values: normal, reverse, alternate, and alternate-reverse.

normal: It makes the animation play forward. This is the default value.
reverse: It makes the animation play backward.
alternate: It makes the animation play forward in the first cycle, then backward in the next cycle, then forward again, and so on. In addition, timing functions are affected, so if we have ease-out, it gets replaced by ease-in when played in reverse. We'll look at these timing functions in a minute.
alternate-reverse: It's the same thing as alternate, but the animation starts backward, from the end.

In our current example, we have a continuous animation. However, the background color has a "hard stop" when going from black (end of the animation) to red (start of the animation). Let's create a more "fluid" animation by making the black background fade into red and then red into black without any hard stops. Basically, we are trying to create a "pulse-like" effect:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
}

animation-delay

The animation-delay CSS property allows us to define when exactly an animation should start. This means that as soon as the animation has been applied to an element, it will obey the delay before it starts running.
It looks like this:

animation-delay: 3s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Negative values are allowed. Take into consideration that using a negative value means that the animation should start right away, but it will start midway into the animation, offset by the opposite amount of the negative value. Use negative values with caution.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
}

This will make the animation start after 3 seconds have passed.

animation-fill-mode

The animation-fill-mode CSS property defines which values are applied to an element before and after the animation; basically, outside the time the animation is being executed. It looks like this:

animation-fill-mode: none;

Description

There are four values: none, forwards, backwards, and both.

none: No styles are applied before or after the animation.
forwards: The animated element will retain the styles of the last keyframe. This is the most used value.
backwards: The animated element will retain the styles of the first keyframe, and these styles will remain during the animation-delay period. This is very likely the least used value.
both: The animated element will retain the styles of the first keyframe before starting the animation and the styles of the last keyframe after the animation has finished. In many cases, this is almost the same as using forwards.

The prior values are better suited to animations that have an end and stop. In our example, we're using a fading/pulsating animation, so the best value to use is none.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
}

animation-play-state

The animation-play-state CSS property defines whether an animation is running or paused, and it looks like this:

animation-play-state: running;

Description

There are two values: running and paused. These values are self-explanatory.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
}

In this case, defining animation-play-state as running is redundant, but I'm listing it for the purposes of the example.

animation-timing-function

The animation-timing-function CSS property defines how an animation's speed should progress throughout its cycles, and it looks like this:

animation-timing-function: ease-out;

There are five predefined values, also known as easing functions, for the Bézier curve (we'll see what the Bézier curve is in a minute): ease, ease-in, ease-out, ease-in-out, and linear.

ease

The ease function accelerates sharply at the beginning and starts slowing down towards the middle of the cycle. Its syntax is as follows:

animation-timing-function: ease;

ease-in

The ease-in function starts slowly and keeps accelerating until the animation ends sharply. Its syntax is as follows:

animation-timing-function: ease-in;

ease-out

The ease-out function starts quickly and gradually slows down towards the end:

animation-timing-function: ease-out;

ease-in-out

The ease-in-out function starts slowly and gets fast in the middle of the cycle.
It then starts slowing down towards the end. Its syntax is as follows:

animation-timing-function: ease-in-out;

linear

The linear function has a constant speed. No accelerations of any kind happen. Its syntax is as follows:

animation-timing-function: linear;

Now, the easing functions are built on a curve named the Bézier curve and can be called using the cubic-bezier() function or the steps() function.

cubic-bezier()

The cubic-bezier() function allows us to create custom acceleration curves. Most use cases can benefit from the already defined easing functions we just mentioned (ease, ease-in, ease-out, ease-in-out, and linear), but if you're feeling adventurous, cubic-bezier() is your best bet.

Here's how a Bézier curve looks:

Parameters

The cubic-bezier() function takes four parameters, as follows:

animation-timing-function: cubic-bezier(x1, y1, x2, y2);

X and Y represent the x and y axes. The numbers 1 and 2 after each axis represent the control points: 1 represents the control point starting at the lower left, and 2 represents the control point at the upper right.

Description

Let's represent all five predefined easing functions with the cubic-bezier() function:

ease: animation-timing-function: cubic-bezier(.25, .1, .25, 1);
ease-in: animation-timing-function: cubic-bezier(.42, 0, 1, 1);
ease-out: animation-timing-function: cubic-bezier(0, 0, .58, 1);
ease-in-out: animation-timing-function: cubic-bezier(.42, 0, .58, 1);
linear: animation-timing-function: cubic-bezier(0, 0, 1, 1);

Not sure about you, but I prefer to use the predefined values. Now, we can start tweaking and testing each value to the decimal, save it, and wait for the live refresh to do its thing. However, that's too much time wasted testing, if you ask me. The amazing Lea Verou created the best web app to work with Bézier curves. You can find it at cubic-bezier.com. This is by far the easiest way to work with Bézier curves. I highly recommend this tool. The Bézier curve image shown earlier was taken from the cubic-bezier.com website.

Let's add animation-timing-function to our example:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: ease-out;
}

steps()

The steps() timing function isn't very widely used, but knowing how it works is a must if you're into CSS animations. It looks like this:

animation-timing-function: steps(6);

This function is very helpful when we want our animation to take a defined number of steps. After adding a steps() function to our current example, it looks like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: steps(6);
}

This makes the box take six steps to fade from red to black and vice versa.

Parameters

There are two optional parameters that we can use with the steps() function: start and end.

start: This will make the animation run at the beginning of each step. This means the animation starts right away.
end: This will make the animation run at the end of each step. This is the default value if nothing is declared. This means the animation has a short delay before it starts.

Description

After adding the parameters to the CSS code, it looks like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: steps(6, start);
}

Granted, in our example it is not very noticeable. However, you can see it more clearly in this pen from Louis Lazarus when hovering over the boxes, at http://tiny.cc/steps-timing-function.

Here's an image taken from Stephen Greig's article on Smashing Magazine, Understanding CSS Timing Functions, that explains start and end from the steps() function:

Also, there are two predefined values for the steps() function: step-start and step-end.

step-start: It is the same thing as steps(1, start). It means that every change happens at the beginning of each interval.
step-end: It is the same thing as steps(1, end). It means that every change happens at the end of each interval.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: step-end;
}
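A classic practical use of steps() is frame-by-frame sprite-sheet animation, where the hard jumps between steps are exactly what we want. The following is an illustrative sketch, not part of the article's running example; the walking.png file and its dimensions (a horizontal strip of 8 frames, 50px per frame) are hypothetical:

.walker {
  width: 50px;
  height: 72px;
  /* hypothetical sprite sheet: 8 frames laid out side by side */
  background: url(walking.png) 0 0 no-repeat;
  animation: walk 1s steps(8) infinite;
}

@keyframes walk {
  100% {
    /* 8 frames x 50px = 400px; steps(8) jumps frame by frame */
    background-position: -400px 0;
  }
}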
animation

The animation CSS property is the shorthand for animation-name, animation-duration, animation-timing-function, animation-delay, animation-iteration-count, animation-direction, animation-fill-mode, and animation-play-state. It looks like this:

animation: fadingColors 2s;

Description

For a simple animation to work, we need at least two properties: name and duration. If you feel overwhelmed by all these properties, relax. Let me break them down for you in simple bits.

Using the animation longhand, the code would look like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
}

Using the animation shorthand, which is the recommended syntax, the code would look like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation: fadingColors 2s;
}

This will make a box go from its red background to black in 2 seconds, and then stop.

Final CSS code

Let's see how all the animation properties look in one final example, showing both the longhand and shorthand styles.

Longhand style

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: ease-out;
}

Shorthand style

.element {
  width: 300px;
  height: 300px;
  animation: fadingColors 2s infinite alternate 3s none running ease-out;
}

When two time values appear in the shorthand, the first is always taken as animation-duration and the second as animation-delay. All other properties can appear in any order within the declaration.

You can find a demo in CodePen at http://tiny.cc/animation.

Summary

In this article, we learned how to add animations to our web project, and we looked in detail at the different properties that can be used to animate it, along with their descriptions.

Resources for Article:

Further resources on this subject:
Using JavaScript with HTML [article]
Welcome to JavaScript in the full stack [article]
A Typical JavaScript Project [article]